Re: [charm] MPI_Barrier(MPI_COMM_WORLD) equivalent


  • From: "Kale, Laxmikant V" <kale AT illinois.edu>
  • To: Evghenii Gaburov <e-gaburov AT northwestern.edu>, "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] MPI_Barrier(MPI_COMM_WORLD) equivalent
  • Date: Fri, 23 Sep 2011 03:00:58 +0000
  • Accept-language: en-US
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

I think, in your context, quiescence detection is the most elegant
solution. I am curious: why do you not like it?

If you had multiple modules active at the time this exchange is
happening, and messaging from the other modules had to continue across
this "import-export" activity, that would be one reason why QD is not a
good solution. But that's not the case for you.

Maybe the non-threaded version (CkStartQD with a callback) is better
suited?
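For concreteness, here is a minimal sketch of that non-threaded pattern.
The names (a mainchare `Main`, its proxy `mainProxy`, and entry methods
`startExchange`/`resumeWork` declared in the .ci file) are placeholders,
not code from your application:

```cpp
// Hypothetical sketch: a mainchare Main whose .ci file declares
//   entry void startExchange();
//   entry void resumeWork();

void Main::startExchange() {
  // Fire off all import/export request messages to remote chares here...

  // Then ask the runtime to invoke resumeWork() on this mainchare once
  // quiescence is reached, i.e. no messages are in flight anywhere:
  CkStartQD(CkCallback(CkIndex_Main::resumeWork(), mainProxy));

  // startExchange() returns immediately; unlike CkWaitQD(), no thread
  // is blocked while the request/reply traffic drains.
}

void Main::resumeWork() {
  // Invoked only after all previously sent messages, including the
  // recursive request/reply traffic, have been delivered and processed.
}
```

The advantage over CkWaitQD() in a [threaded] entry is that no stack is
held suspended; the continuation is simply a plain entry method.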

Incidentally, how can you use MPI_Barrier in your MPI code? You wouldn't
know when to call it, since you can't know that another process isn't
about to send a request your way.

--
Laxmikant (Sanjay) Kale          http://charm.cs.uiuc.edu
Professor, Computer Science      kale AT illinois.edu
201 N. Goodwin Avenue            Ph: (217) 244-0094
Urbana, IL 61801-2302            FAX: (217) 265-6582

On 9/22/11 9:15 PM, "Evghenii Gaburov"
<e-gaburov AT northwestern.edu>
wrote:

>Dear All,
>
>As a new user, who is porting his MPI code to Charm++, I have the
>following question:
>
>I have a snippet of the code that requests data from remote chares, and
>these chares need to send data to the requesting chare. There is no way
>to know how many messages a given chare receives from remote chares with a
>request to export data. In other words, a given chare may need to export
>(different) data to many remote chares that request this, and this chare
>does not know how many remote chares request the data.
>
>For logical consistency it is not possible to proceed with further
>computations unless all requested data is imported/exported. This leads
>me to the issue of a global barrier, the equivalent of which,
>MPI_Barrier(MPI_COMM_WORLD), I use in my MPI code (there does not seem to
>be a way around such a global barrier, since this step establishes the
>communication graph between MPI tasks, or for Charm++ between chares,
>which later use point-to-point communication).
>
>Regretfully, I have failed to find an optimal way to issue such a barrier.
>Using CkCallbackResumeThread() won't work because the calling code (from
>the MainChare, which is a [threaded] entry) also sends messages to other
>remote chares with requests to import/export data, and those themselves
>recursively send messages to other remote chares to export data until a
>closing condition is satisfied (the depth of recursion is 2 or 3 calls to
>the same function).
>
>Now I use CkWaitQD() in the MainChare as a global synchronization point.
>I was wondering if there is a more elegant way to issue a barrier that
>guarantees all previously issued messages have completed before
>proceeding further.
>
>Thanks!
>
>Cheers,
> Evghenii
>
>
>
>_______________________________________________
>charm mailing list
>charm AT cs.uiuc.edu
>http://lists.cs.uiuc.edu/mailman/listinfo/charm