
Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent


  • From: Phil Miller <mille121 AT illinois.edu>
  • To: "Kale, Laxmikant V" <kale AT illinois.edu>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>, Evghenii Gaburov <e-gaburov AT northwestern.edu>
  • Subject: Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent
  • Date: Thu, 22 Sep 2011 22:20:08 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

There's actually a mechanism that Jonathan and I added recently to do
localized completion detection, a sort of 'counting barrier'.
Simdemics is using it, and it seems to be working well for them.
The documentation for it can be found at
http://charm.cs.illinois.edu/manuals/html/charm++/3_15.html#SECTION000315100000000000000
That needs to be revised slightly, to indicate that production and
consumption are both incremental processes that can potentially
overlap. In the case posited, sending a request would correspond to a
produce() call, while receiving a request and exporting data would
correspond to a consume() call. When a given chare is done making
requests, it calls done(), and eventually the library sees that all
requests have been served.
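
To make that concrete, here is a rough sketch of how such an exchange
might map onto the library. The chare and method names (Worker,
requestData, sendData, the proxies) are made up, and the exact
start_detection() signature should be checked against the manual page
above:

    // Illustrative sketch only; detector, workers, and mainProxy are
    // assumed to be readonly global proxies.
    CProxy_CompletionDetector detector = CProxy_CompletionDetector::ckNew();
    detector.start_detection(numWorkers,
        CkCallback(CkIndex_Worker::beginExchange(), workers), // start
        CkCallback(),                                         // all produced
        CkCallback(CkIndex_Main::exchangeDone(), mainProxy),  // finish
        0);                                                   // priority

    void Worker::beginExchange() {
      for (int i = 0; i < numRequests; ++i) {
        detector.ckLocalBranch()->produce();     // one unit per request sent
        workers[target(i)].requestData(thisIndex);
      }
      detector.ckLocalBranch()->done();          // no more requests from me
    }

    void Worker::requestData(int requester) {
      workers[requester].sendData(/* ... */);    // export the requested data
      detector.ckLocalBranch()->consume();       // this request is served
    }

Once every chare has called done() and consumption has caught up with
production, the finish callback fires and the exchange is known to be
complete.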

Phil

On Thu, Sep 22, 2011 at 22:00, Kale, Laxmikant V <kale AT illinois.edu> wrote:
> I think that, in your context, quiescence detection is the most elegant
> solution. I am curious: why do you not like it?
>
> If you have multiple modules active at the time this exchange is
> happening, and messaging from the other modules should continue across
> this "import-export" activity, that would be one reason why QD is not a
> good solution. But that's not the case for you.
>
> Maybe the non-threaded version (CkStartQD with a callback) is better
> suited?
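>
> Something like this might work (resumeComputation() is an illustrative
> entry method name, not part of the API):
>
>   // Fire off all the requests, then register a quiescence callback
>   // instead of blocking a thread; Charm++ invokes it at quiescence.
>   CkStartQD(CkCallback(CkIndex_Main::resumeComputation(), mainProxy));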
>
> Incidentally, how can you use MPI_Barrier in your MPI code? You wouldn't
> know when to call it, since you don't know whether another process is
> about to send a request your way.
>
> --
> Laxmikant (Sanjay) Kale         http://charm.cs.uiuc.edu
> Professor, Computer Science     kale AT illinois.edu
> 201 N. Goodwin Avenue           Ph:  (217) 244-0094
> Urbana, IL  61801-2302          FAX: (217) 265-6582
>
>
> On 9/22/11 9:15 PM, "Evghenii Gaburov" <e-gaburov AT northwestern.edu> wrote:
>
>>Dear All,
>>
>>As a new user who is porting his MPI code to Charm++, I have the
>>following question:
>>
>>I have a snippet of code that requests data from remote chares, and
>>those chares need to send data back to the requesting chare. There is no
>>way to know how many messages a given chare will receive from remote
>>chares asking it to export data. In other words, a given chare may need
>>to export (different) data to many remote chares, and it does not know
>>in advance how many chares will request it.
>>
>>For logical consistency, it is not possible to proceed with further
>>computations until all requested data has been imported/exported. This
>>leads me to the issue of a global barrier, the equivalent of which,
>>MPI_Barrier(MPI_COMM_WORLD), I use in my MPI code. (There does not seem
>>to be a way around such a global barrier, since this step establishes
>>the communication graph between MPI tasks, or between chares in Charm++,
>>which later use point-to-point communication.)
>>
>>Regrettably, I have failed to find an optimal way to issue such a
>>barrier. Using CkCallbackResumeThread() won't work, because the calling
>>code (a [threaded] entry in the MainChare) also sends messages to other
>>remote chares with requests to import/export data, and those chares
>>themselves recursively send messages to further remote chares to export
>>data until a closing condition is satisfied (the depth of recursion is
>>2 or 3 calls to the same function).
>>
>>Now I use CkWaitQD() in the MainChare as a global synchronization point.
>>I was wondering if there is a more elegant way to issue a barrier so
>>that all previously issued messages complete before proceeding further.
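>>
>>A sketch of what I currently do (the method names are illustrative):
>>
>>  // [threaded] entry method on the MainChare
>>  void Main::run() {
>>    workers.beginExchange();  // fans out requests, which recurse
>>    CkWaitQD();               // block this thread until quiescence
>>    // safe to proceed: all import/export messages have been delivered
>>  }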
>>
>>Thanks!
>>
>>Cheers,
>> Evghenii




