
charm - Re: [charm] Distributed memory sorting.

charm AT lists.cs.illinois.edu

Subject: Charm++ parallel programming system

Re: [charm] Distributed memory sorting.


  • From: "Mark F. Adams" <mark.adams AT columbia.edu>
  • To: Nikhil Jain <nikhil.life AT gmail.com>
  • Cc: charm AT cs.illinois.edu, Edgar Solomonik <solomon AT eecs.berkeley.edu>, Francesco Miniati <fminiati AT hpcrd.lbl.gov>, Brian Van Straalen <bvstraalen AT lbl.gov>, "Kale, Laxmikant V" <kale AT illinois.edu>
  • Subject: Re: [charm] Distributed memory sorting.
  • Date: Sat, 22 Jun 2013 15:46:02 -0000
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

I just pulled charm and I get this error. This worked a few weeks ago.
Mark

$ ./build charm++ mpi-crayxc
[snip]
ar: creating ../lib/libconv-cplus-n.a
../bin/charmc -c -I. pup_util.C
../bin/charmc -c -I. pup_toNetwork.C
../bin/charmc -c -I. pup_toNetwork4.C
../bin/charmc -c -I. pup_xlater.C
../bin/charmc -c -I. pup_c.C
../bin/charmc -c -I. pup_paged.C
../bin/charmc -c -I. pup_cmialloc.C
../bin/charmc -c -I. ckimage.C
../bin/charmc -c -I. ckdll.C
../bin/charmc -c -I. ckhashtable.C
../bin/charmc -c -I. sockRoutines.c
../bin/charmc -c -I. conv-lists.C
../bin/charmc -c -I. RTH.C
../bin/charmc -c -I. persist-comm.c
../bin/charmc -c -I. mempool.c
../bin/charmc -c -I. graph.c
gmake: *** No rule to make target `partitioning_strategies.h', needed by
`TopoManager.o'. Stop.
-------------------------------------------------
Charm++ NOT BUILT. Either cd into mpi-crayxc/tmp and try
to resolve the problems yourself, visit
http://charm.cs.illinois.edu/
for more information. Otherwise, email the developers at
charm AT cs.illinois.edu



On Jun 22, 2013, at 2:18 AM, Nikhil Jain
<nikhil.life AT gmail.com>
wrote:

> Hi Mark,
>
> Edgar and I were able to locate the bug and fix it. I tested on
> Hopper, and sorting seems to work fine. However, the code crashes
> later on because of an assertion failure. Please have a look and
> tell us if it is related to sorting.
>
> --Nikhil
>
> On Wed, Jun 19, 2013 at 1:36 PM, Nikhil Jain
> <nikhil.life AT gmail.com>
> wrote:
>> Hi Mark,
>>
>> The permissions on the petsc folder in your home seem to have changed. I
>> was able to use it the day before yesterday, but couldn't yesterday
>> or today. Can you have a look? Given the heavy occupancy of Edison, I
>> am debugging using Hopper (using xe6 arch).
>>
>> --Nikhil
>>
>> On Mon, Jun 17, 2013 at 1:45 PM, Nikhil Jain
>> <nikhil.life AT gmail.com>
>> wrote:
>>> The code was replicated because we wanted to bring all benchmarks
>>> under one umbrella "benchmarks/*", but did not want to remove the
>>> original code for those who want to use the pure Charm++ version.
>>>
>>> StartCharmScheduler() is being invoked correctly; it is meant to be
>>> invoked in each sorting call. Give me a day or so, and I hope to
>>> resolve the issue.
>>>
>>> --Nikhil
>>>
>>>
>>> On Mon, Jun 17, 2013 at 1:34 PM, Edgar Solomonik
>>> <solomon AT eecs.berkeley.edu>
>>> wrote:
>>>> I find it a bit odd that the parallel sorting library was moved to a
>>>> different place in the charm repo, especially since the old library is
>>>> still
>>>> there, which is the one I developed and was looking at. Code should not
>>>> be
>>>> replicated in this fashion. It looks like this error is happening on the
>>>> StartCharmScheduler() call in the new MPI interface (I had thought the
>>>> error
>>>> was in the mainData.C code, since I am used to seeing the stack in the
>>>> reverse order). I have no experience with invoking Charm in this
>>>> fashion,
>>>> but I would think that it should only happen once during application
>>>> execution, while it looks like this call is made every time the code
>>>> sorts.
>>>>
>>>> Edgar
>>>>
>>>>
>>>> On Mon, Jun 17, 2013 at 11:25 AM, Mark F. Adams
>>>> <mark.adams AT columbia.edu>
>>>> wrote:
>>>>>
>>>>> Note, you need to edit files in psorting so you might just want to copy
>>>>> my
>>>>> version, which is built for Edison.
>>>>>
>>>>> On Jun 17, 2013, at 2:18 PM, Nikhil Jain
>>>>> <nikhil.life AT gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Edgar,
>>>>>>
>>>>>> You can checkout the psorting lib from
>>>>>>
>>>>>> git://charm.cs.uiuc.edu/benchmarks/psorting
>>>>>>
>>>>>> I am not sure if the bug crept in when I modified the interface to be
>>>>>> used in interoperable mode, or whether it is something from earlier. I
>>>>>> have been short on time recently, and hope to revisit this soon.
>>>>>>
>>>>>> --Nikhil
>>>>>>
>>>>>> On Mon, Jun 17, 2013 at 12:48 PM, Edgar Solomonik
>>>>>> <solomon AT eecs.berkeley.edu>
>>>>>> wrote:
>>>>>>> Thanks, I can now access the files in those two directories. Could
>>>>>>> you
>>>>>>> also
>>>>>>> open access to
>>>>>>>
>>>>>>> /global/u2/m/madams/psorting/sortinglib/mainData.C
>>>>>>>
>>>>>>> or tell me where I can find this code in the directories you've
>>>>>>> opened.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Edgar
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Jun 17, 2013 at 10:21 AM, Mark F. Adams
>>>>>>> <mark.adams AT columbia.edu>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Humm, thought I did this but here it is again:
>>>>>>>>
>>>>>>>> m/madams> chmod a+r -R Charm
>>>>>>>> m/madams> chmod a+x -R Charm
>>>>>>>> m/madams> chmod a+x -R Chombo
>>>>>>>> m/madams> chmod a+r -R Chombo
>>>>>>>>
>>>>>>>>
>>>>>>>> On Jun 17, 2013, at 12:41 PM, Edgar Solomonik
>>>>>>>> <solomon AT eecs.berkeley.edu>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> It seems you set the permissions on the directories but not
>>>>>>>> everything
>>>>>>>> inside Charm/ and Chombo/, so I cannot access anything. I also
>>>>>>>> cannot
>>>>>>>> interpret the stack trace, since I don't know what mainData.C is.
>>>>>>>>
>>>>>>>> If I can get access to the code, I should be able to do some
>>>>>>>> debugging
>>>>>>>> of
>>>>>>>> the sorting library this week, as I am back from travelling.
>>>>>>>>
>>>>>>>> Edgar
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Jun 17, 2013 at 9:30 AM, Mark F. Adams
>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Nikhil,
>>>>>>>>>
>>>>>>>>> I've redacted my copy of this code so you can now grab it and build
>>>>>>>>> it if
>>>>>>>>> you wish.
>>>>>>>>>
>>>>>>>>> First copy:
>>>>>>>>>
>>>>>>>>> /global/u2/m/madams/Charm
>>>>>>>>> /global/u2/m/madams/Chombo
>>>>>>>>>
>>>>>>>>> Then set:
>>>>>>>>>
>>>>>>>>> PETSC_DIR=/global/homes/m/madams/petsc
>>>>>>>>> PETSC_ARCH=arch-xc30-opt64
>>>>>>>>>
>>>>>>>>> Charm/BUILD_PSORT describes how to set up and run the code. I have
>>>>>>>>> been
>>>>>>>>> building with a slightly different configuration than is described
>>>>>>>>> in
>>>>>>>>> this
>>>>>>>>> doc so you want to build with:
>>>>>>>>>
>>>>>>>>> make all DIM=3 -j22 USE_CHARMPP=TRUE MPI=TRUE USE_PETSC=TRUE
>>>>>>>>>
>>>>>>>>> in Charm/exec/.
>>>>>>>>>
>>>>>>>>> Now go into Charm/exec/SCALE and run with something like
>>>>>>>>>
>>>>>>>>> aprun -n 512 ../amrCharm3d.Linux.64.CC.ftn.DEBUG.MPI.PETSC.ex
>>>>>>>>> input512.inputs
>>>>>>>>>
>>>>>>>>> The 64 core version of this (input64.inputs) runs fine. (Charm++
>>>>>>>>> actually prints some spurious stuff that would be nice to have
>>>>>>>>> cleaned up).
>>>>>>>>>
>>>>>>>>> Mark
>>>>>>>>>
>>>>>>>>> On Jun 16, 2013, at 2:20 PM, Nikhil Jain
>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I will have a look at it today. Any progress on getting access to
>>>>>>>>>> this
>>>>>>>>>> particular code? It will be good to be able to reproduce the error
>>>>>>>>>> case for debugging and performance.
>>>>>>>>>>
>>>>>>>>>> On Sun, Jun 16, 2013 at 8:42 AM, Mark F. Adams
>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>> wrote:
>>>>>>>>>>> Nikhil,
>>>>>>>>>>>
>>>>>>>>>>> The sorter runs a weak scaling test with 8 and 64 cores but is
>>>>>>>>>>> hanging
>>>>>>>>>>> with 512 cores.
>>>>>>>>>>>
>>>>>>>>>>> I've looked at this in ddt on Edison and see stack traces like
>>>>>>>>>>> this:
>>>>>>>>>>>
>>>>>>>>>>> #8 HistSorting (in_elems_=0, dataIn_=0x0,
>>>>>>>>>>> out_elems_=0xfffffe00055f,
>>>>>>>>>>> dataOut_=0x7fffffff5be0) at
>>>>>>>>>>> /global/u2/m/madams/psorting/sortinglib/mainData.C:111 (at
>>>>>>>>>>> 0x00000000006c595d)
>>>>>>>>>>> #7 StartCharmScheduler () (at 0x0000000000e47c8b)
>>>>>>>>>>> #6 CsdScheduler () (at 0x0000000000e52148)
>>>>>>>>>>> #5 CmiGetNonLocal () (at 0x0000000000e4b022)
>>>>>>>>>>> #4 PumpMsgs () (at 0x0000000000e4bf20)
>>>>>>>>>>> #3 PMPI_Iprobe () (at 0x0000000000ed0beb)
>>>>>>>>>>> #2 MPID_Iprobe () (at 0x0000000000ef5ea9)
>>>>>>>>>>> #1 MPIDI_CH3I_Progress () (at 0x0000000000effbe3)
>>>>>>>>>>> #0 MPID_nem_gni_poll () (at 0x0000000000f12de1)
>>>>>>>>>>>
>>>>>>>>>>> I noticed that 'in_elems_' starts at ~32K but it changes as the
>>>>>>>>>>> code
>>>>>>>>>>> runs, which is puzzling because I do not see any place in the code
>>>>>>>>>>> where
>>>>>>>>>>> this is modified.
>>>>>>>>>>>
>>>>>>>>>>> Mark
>>>>>>>>>>>
>>>>>>>>>>> On Jun 12, 2013, at 12:17 PM, Nikhil Jain
>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Mark,
>>>>>>>>>>>>
>>>>>>>>>>>> Having given significant thought to the interface, I am inclined
>>>>>>>>>>>> towards the C interface. The C++ interface can be done at the
>>>>>>>>>>>> expense of an extra indirection to the C interface. What do you think?
>>>>>>>>>>>>
>>>>>>>>>>>> Separately, I have begun exploration on sorting algorithms for
>>>>>>>>>>>> near-sorted data. I will give an update if some progress is made
>>>>>>>>>>>> on
>>>>>>>>>>>> it.
>>>>>>>>>>>>
>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Jun 4, 2013 at 10:39 AM, Mark F. Adams
>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>> Nikhil,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I might recommend a model similar to what PETSc uses so let me
>>>>>>>>>>>>> describe it briefly:
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1) set env vars like CHARM_DIR, PSORT_DIR, and CHARM_ARCH.
>>>>>>>>>>>>> CHARM_ARCH is just a name.
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2) have the build create a file ${PSORT_DIR}/${PSORT_ARCH}/variables.
>>>>>>>>>>>>> This file has things like PSORT_LIBS and perhaps
>>>>>>>>>>>>> compilers/flags/etc.
>>>>>>>>>>>>>
>>>>>>>>>>>>> 3) They all include this file in their makefiles and add
>>>>>>>>>>>>> ${PSORT_LIBS} (or whatever) to their link line.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is portable; multiple "ARCH" builds (e.g., xc30, xe6; debug or
>>>>>>>>>>>>> opt; single or double precision, etc.) can exist in the same
>>>>>>>>>>>>> installation. You can change your internal structure without
>>>>>>>>>>>>> affecting apps' makefiles.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Anyway just a suggestion.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Mark
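For concreteness, the PETSc-style model Mark sketches above might look roughly like this (illustrative only: the contents of the `variables` file and the `-lpsort` library name are assumptions, not the actual psorting build system):

```make
# ${PSORT_DIR}/${PSORT_ARCH}/variables -- written out by the psort build
# (hypothetical contents, following the model described above)
PSORT_INCLUDES = -I$(PSORT_DIR)/$(PSORT_ARCH)/include
PSORT_LIBS     = -L$(PSORT_DIR)/$(PSORT_ARCH)/lib -lpsort

# An application makefile would then simply do:
#
#   include $(PSORT_DIR)/$(PSORT_ARCH)/variables
#
#   myapp: myapp.o
#   	$(CXX) -o $@ $^ $(PSORT_LIBS)
```

Multiple ARCH builds can then coexist under ${PSORT_DIR}, and the application never hardcodes the library's internal layout.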
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Jun 3, 2013, at 3:44 PM, Nikhil Jain
>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Mark,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have made changes to the compilation/link setup to remove the
>>>>>>>>>>>>>> link
>>>>>>>>>>>>>> time dependence on charmc. If you checkout the latest charm and
>>>>>>>>>>>>>> psorting, you will find the Makefile modified per the new
>>>>>>>>>>>>>> scheme.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Once the sorting library is compiled, we wrap it (with other
>>>>>>>>>>>>>> libraries
>>>>>>>>>>>>>> if needed) into another lib, say libcharm.a (passing the -mpi
>>>>>>>>>>>>>> option
>>>>>>>>>>>>>> to charmc). This step echoes the link time path that should be
>>>>>>>>>>>>>> appended to help the linker find Charm related stuff. It also
>>>>>>>>>>>>>> writes
>>>>>>>>>>>>>> the same to a file called charm_all_libs.sh. What do you think
>>>>>>>>>>>>>> of
>>>>>>>>>>>>>> this
>>>>>>>>>>>>>> set up? I am still considering further changes, and your
>>>>>>>>>>>>>> feedback
>>>>>>>>>>>>>> will
>>>>>>>>>>>>>> be useful.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Sat, Jun 1, 2013 at 1:39 PM, Brian Van Straalen
>>>>>>>>>>>>>> <bvstraalen AT lbl.gov>
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Our main issue is that 8 sorts of mostly sorted data take up
>>>>>>>>>>>>>>> exponentially more time in the application, going from 0.3% at
>>>>>>>>>>>>>>> 256 processors to 25% at 4096 processors for a weak scaling run,
>>>>>>>>>>>>>>> most of that in communication. We
>>>>>>>>>>>>>>> need to get on board with some experts here :-)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brian
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 1, 2013, at 11:15 AM, Mark F. Adams wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> From a practical point of view I don't anticipate this new
>>>>>>>>>>>>>>> sort
>>>>>>>>>>>>>>> code taking
>>>>>>>>>>>>>>> more than 10% of the app's run time (and maybe far less) so
>>>>>>>>>>>>>>> interfacing to
>>>>>>>>>>>>>>> lower levels of the comm fabric is probably not needed. That
>>>>>>>>>>>>>>> said, faster
>>>>>>>>>>>>>>> is better and while we may not deploy the lower level stuff it
>>>>>>>>>>>>>>> would be good
>>>>>>>>>>>>>>> to: 1) know what the options are and 2) write papers, which is
>>>>>>>>>>>>>>> something you
>>>>>>>>>>>>>>> need to think about, obviously.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And our main user uses late model Crays at the main center in
>>>>>>>>>>>>>>> Switzerland so
>>>>>>>>>>>>>>> it might not be hard to deploy the exotic stuff anyway.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 1, 2013, at 2:04 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Also, this reminds me - if you are working on Edison, the
>>>>>>>>>>>>>>> build
>>>>>>>>>>>>>>> target
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> should be mpi-crayxc. crayxc and crayxe targets are similar
>>>>>>>>>>>>>>> due
>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>> a
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> common gemini layer but I am working on certain aspects such
>>>>>>>>>>>>>>> as
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> topology discovery that will be different. Ah, this may be
>>>>>>>>>>>>>>> something
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> you may be interested in - Charm has a topology interface that
>>>>>>>>>>>>>>> works
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> on Blue Genes and Crays. I am currently working on a
>>>>>>>>>>>>>>> generalized
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> interface for different kinds of networks, but given the
>>>>>>>>>>>>>>> current
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> prevalence of tori, it may be useful.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sat, Jun 1, 2013 at 12:56 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Yes, I have an account on Edison.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I couldn't get the drift of your comment on the STL interface.
>>>>>>>>>>>>>>> Were
>>>>>>>>>>>>>>> Were
>>>>>>>>>>>>>>> you
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> referring to the current interface with <key, value>, or the
>>>>>>>>>>>>>>> one
>>>>>>>>>>>>>>> which
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> lets us customize the length of value, or something else?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sat, Jun 1, 2013 at 12:49 PM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Do you have an account on Edison at NERSC? This is where I am
>>>>>>>>>>>>>>> building now and
>>>>>>>>>>>>>>> so would be the easiest place to get you set up.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And also as far as interfaces, I am thinking that the STL sort
>>>>>>>>>>>>>>> interface is
>>>>>>>>>>>>>>> as good as any and it has apparently proven to be a good
>>>>>>>>>>>>>>> interface, i.e.,
>>>>>>>>>>>>>>> expressive, easy to use, implementable, etc.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 1, 2013, at 1:40 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Cray is good for me. Thanks,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sat, Jun 1, 2013 at 10:34 AM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nikhil,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think the best way to proceed is for Francesco to tell me what
>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>> redact from
>>>>>>>>>>>>>>> my repo and I can send you Chombo and Charm, and work with
>>>>>>>>>>>>>>> you
>>>>>>>>>>>>>>> on getting
>>>>>>>>>>>>>>> it built. I was thinking that we would want to get you access
>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>> our repo
>>>>>>>>>>>>>>> but I can just manually migrate any changes that you make in
>>>>>>>>>>>>>>> Charm.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What machine do you want to use? Crays are probably the
>>>>>>>>>>>>>>> easiest
>>>>>>>>>>>>>>> to build
>>>>>>>>>>>>>>> but if you have a preferred development platform I'm sure we
>>>>>>>>>>>>>>> can
>>>>>>>>>>>>>>> make it
>>>>>>>>>>>>>>> work.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 1, 2013, at 10:07 AM, Francesco Miniati
>>>>>>>>>>>>>>> <fminiati AT hpcrd.lbl.gov>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The code is Charm, not just Chombo, right?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If so, I think it's possible, but there are a couple of routines
>>>>>>>>>>>>>>> (physics that has nothing to do with particles) that I would like
>>>>>>>>>>>>>>> to remove first.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -fm
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 1, 2013, at 16:02, "Mark F. Adams"
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nikhil,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Getting access to this code might be hard. I've cc'ed those
>>>>>>>>>>>>>>> concerned.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brian & Terry: Nikhil has been building the Charm++ MPI
>>>>>>>>>>>>>>> interface
>>>>>>>>>>>>>>> for
>>>>>>>>>>>>>>> sorting in Charm. He would like to be able to run the code to work
>>>>>>>>>>>>>>> on improving the interface, which is pretty awful. It works, it is
>>>>>>>>>>>>>>> just fragile and complex.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On May 31, 2013, at 9:24 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Will it be possible to provide the same code to me, so that I
>>>>>>>>>>>>>>> can
>>>>>>>>>>>>>>> work
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> with it, and look at the best possible way of handling linking and
>>>>>>>>>>>>>>> integration.
>>>>>>>>>>>>>>> It
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> also will help in bug fixing and interface design.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, May 31, 2013 at 8:21 PM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On May 31, 2013, at 7:06 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1. I am working on the Charm linker issue, and should be able
>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> resolve it in the near future.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2. As for hard wiring the value data, I can add a template
>>>>>>>>>>>>>>> parameter
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> to fix it. Will that be ideal?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Let's think about that. I don't know what the best interface
>>>>>>>>>>>>>>> is.
>>>>>>>>>>>>>>> I'm not
>>>>>>>>>>>>>>> wild about templates … but we can take our time and think
>>>>>>>>>>>>>>> about
>>>>>>>>>>>>>>> it. The
>>>>>>>>>>>>>>> linker issue, on the other hand, would be good to figure out
>>>>>>>>>>>>>>> so
>>>>>>>>>>>>>>> that we can
>>>>>>>>>>>>>>> start integrating Charm++ into the Chombo make system more
>>>>>>>>>>>>>>> easily
>>>>>>>>>>>>>>> and less
>>>>>>>>>>>>>>> fragilely.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am on travel for most of the next two weeks and the code is
>>>>>>>>>>>>>>> working and
>>>>>>>>>>>>>>> I've made it available to the user.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Edgar, Mark is using the interoperability feature that I added
>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Charm recently.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, May 31, 2013 at 6:05 PM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On May 31, 2013, at 6:58 PM, Edgar Solomonik
>>>>>>>>>>>>>>> <edgar.solomonik AT gmail.com>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Wait, how are you actually using the library? Was the
>>>>>>>>>>>>>>> Chombo/particle code
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ported to Charm++ or AMPI?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No, we use an MPI interface that Nikhil maintains (wrote?)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The code will try to give each processor roughly n/p
>>>>>>>>>>>>>>> particles,
>>>>>>>>>>>>>>> or
>>>>>>>>>>>>>>> within
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> some threshold of that. Do you want a sort that gives each
>>>>>>>>>>>>>>> processor the
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> same number of elements it entered?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No, we want load balancing, and fortunately our objects all
>>>>>>>>>>>>>>> have
>>>>>>>>>>>>>>> the same
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> weight (or close enough).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This can be done, though the best algorithm for doing that
>>>>>>>>>>>>>>> might
>>>>>>>>>>>>>>> be Radix
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sort, since it requires exact splitting.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Edgar
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, May 31, 2013 at 3:52 PM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On May 31, 2013, at 6:30 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That is true. I am adding Edgar to get his view (since he
>>>>>>>>>>>>>>> originally
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> implemented it), and based on that will modify the code to
>>>>>>>>>>>>>>> handle
>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> given scenario.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I've added code to add a dummy particle on empty processors.
>>>>>>>>>>>>>>> This
>>>>>>>>>>>>>>> is not
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> a bad solution but a library should be able to deal with
>>>>>>>>>>>>>>> degenerate cases
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> (they happen all the time).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Edgar, any comment on handling empty processors? Also, I am a
>>>>>>>>>>>>>>> bit
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> surprised that a core ended with zero particles even though
>>>>>>>>>>>>>>> there
>>>>>>>>>>>>>>> were
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1000 particles/core.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm not sure exactly what is going on here but these particles
>>>>>>>>>>>>>>> migrate via
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> mechanisms other than the sort, I think. So the sort not only
>>>>>>>>>>>>>>> provides
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> improved data locality but it also load balances. So the load
>>>>>>>>>>>>>>> got
>>>>>>>>>>>>>>> very
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> unbalanced, maybe. Also you can get situations where you have
>>>>>>>>>>>>>>> a
>>>>>>>>>>>>>>> very sparse
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> level and you might not want to have fewer than, say, 100
>>>>>>>>>>>>>>> particles
>>>>>>>>>>>>>>> per core
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> so you leave some cores empty and don't go to the trouble of
>>>>>>>>>>>>>>> making a sub
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> communicator.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Anyway we will start pounding on this and see what we get. I
>>>>>>>>>>>>>>> think we
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> might be done as far as getting it working.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This library is pretty painful to work with at this point.
>>>>>>>>>>>>>>> But
>>>>>>>>>>>>>>> I'm
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> grateful that you packaged something up for us to start
>>>>>>>>>>>>>>> working
>>>>>>>>>>>>>>> with. In
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> the future there are two things that need to be fixed: 1)
>>>>>>>>>>>>>>> having
>>>>>>>>>>>>>>> to use the
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Charm++ linker (if at all possible) and 2) hardwiring the
>>>>>>>>>>>>>>> value
>>>>>>>>>>>>>>> data type in
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> the sort library.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, May 31, 2013 at 5:25 PM, Mark F. Adams
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On May 31, 2013, at 6:13 PM, Nikhil Jain
>>>>>>>>>>>>>>> <nikhil.life AT gmail.com>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> To confirm the status, does it work with multiple sorters now
>>>>>>>>>>>>>>> in
>>>>>>>>>>>>>>> debug
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> and production version? Were you required to put in any hacks
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> anywhere?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have tested with and without --with-production.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I did not change any Charm++ code.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I did run the test that I have with 256 cores. And even though
>>>>>>>>>>>>>>> there is an average of ~1000 particles/core, the application has
>>>>>>>>>>>>>>> many empty cores and the sorter died with:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [112] Assertion "num_elements > 0" failed in file Bucket.C
>>>>>>>>>>>>>>> line
>>>>>>>>>>>>>>> 61.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Empty processors are the kinds of things that you have to deal
>>>>>>>>>>>>>>> with in
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> the real world and this application will have them most
>>>>>>>>>>>>>>> likely.
>>>>>>>>>>>>>>> Some AMR
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> levels might not have a lot of particles and forming sub
>>>>>>>>>>>>>>> communicators to
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> filter out empty processors would be ugly.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --Nikhil
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, May 31, 2013 at 5:09 PM, Mark F. Adams
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> <mark.adams AT columbia.edu>
>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ok, it is actually working. It just prints stuff out.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> "Success may ditch you most of the times, but if you ditch it
>>>>>>>>>>>>>>> once
>>>>>>>>>>>>>>> :
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> your job is done."
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nikhil Jain,
>>>>>>>>>>>>>>> nikhil.life AT gmail.com,
>>>>>>>>>>>>>>> +1-217-979-0918
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brian Van Straalen Lawrence Berkeley Lab
>>>>>>>>>>>>>>> BVStraalen AT lbl.gov
>>>>>>>>>>>>>>> Computational Research
>>>>>>>>>>>>>>> (510) 486-4976 Division (crd.lbl.gov)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
>






Archive powered by MHonArc 2.6.16.
