
Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent


  • From: Evghenii Gaburov <e-gaburov AT northwestern.edu>
  • To: "Kale, Laxmikant V" <kale AT illinois.edu>
  • Cc: "Miller, Philip B" <mille121 AT illinois.edu>, "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent
  • Date: Fri, 23 Sep 2011 04:29:46 +0000
  • Accept-language: en-US
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

> I wasn't sure what you meant by "recursive". If a request can trigger
> another request, then, of course, you don't know when you are done making
> requests by yourself; QD is useful in this context. And if you know when
> you are done sending messages, but you don't have "responses" then you can
> use the module Phil mentioned.
Correct, a request does trigger another request. By recursive I mean that the
same function calls itself on a remote or local chare
(System::localMesh_tesselateII(..) in the previous email), so I do not know
when these functions are done making requests, but I can guarantee that at
some point they will be.

I will first try Phil's module, then the CkStartQD() method, and will report
the result back to this thread.
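
For the CkStartQD() variant, something along these lines is what I have in
mind; the names below (mainProxy, systemProxy, Main::resumeAfterExchange) are
only placeholders for illustration, not my actual code:

void Main::startExchange()
{
  /* kick off the (recursive) import/export requests */
  systemProxy.localMesh_tesselateII();

  /* resumeAfterExchange() is invoked once the system is quiescent, i.e. no
     messages are in flight or waiting to be processed anywhere */
  CkStartQD(CkCallback(CkIndex_Main::resumeAfterExchange(), mainProxy));
}

void Main::resumeAfterExchange()
{
  /* safe to proceed with the next phase of the computation */
}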

I have another question though:

The scenario is the following: a chare array is scheduled to execute some
computations and later to request parts of the newly computed data from a few
remote chares. It is possible that a remote chare has not yet finished
computing the new data when the request arrives, and is therefore not yet
able to serve it. In that case I would like the chare to queue the request
and execute it (i.e. return the newly computed data) as soon as it is done
computing and thus able to complete the send. In this way, I can avoid a
global synchronization among all chares.

The pseudo-code would look like this:

void myChare::do_work()
{
  /* here we do some work; once it finishes, thisChareIsNotDoneComputing
     becomes false */

  /* this chare knows which part of the remote result it needs */
  for (int k = 0; k < numChares; k++)
    myChareArray[k].requestResult(part_of_new_results_on_remote_chare[k], thisIndex);

  /* serve the requests that arrived while this chare was still computing */
  processQueue();
}

void myChare::processQueue()
{
  while (!queueRequest.empty())
  {
    (which_results_to_return, send_to_recvIndex) = queueRequest.pull();
    results_to_return = prepare_new_results(which_results_to_return);

    /* send the requested results to the recvIndex taken from the queue */
    myChareArray[send_to_recvIndex].recvResults_from_remote_chare(results_to_return);
  }
}

void myChare::requestResult(which_results_to_return, send_to_recvIndex)
{
  if (thisChareIsNotDoneComputing)
  {
    /* not ready yet: remember the request and serve it later from processQueue() */
    queueRequest.push_back(which_results_to_return, send_to_recvIndex);
  }
  else
  {
    results_to_return = prepare_new_results(which_results_to_return);

    /* send the requested results to recvIndex */
    myChareArray[send_to_recvIndex].recvResults_from_remote_chare(results_to_return);
  }
}

void myChare::recvResults_from_remote_chare(results)
{
}
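
For reference, here is a rough sketch of how I picture Phil's
completion-detection module fitting into the pattern above, following the
produce()/consume()/done() mapping he describes below; "completion" is just a
placeholder for a handle to the local detector (its setup and the finish
callback are omitted), and serveRequest() is a hypothetical helper that
factors out the common serving code:

void myChare::do_work()
{
  /* here we do some work */

  for (int k = 0; k < numChares; k++)
  {
    completion->produce();   /* one unit produced per request sent */
    myChareArray[k].requestResult(part_of_new_results_on_remote_chare[k], thisIndex);
  }
  completion->done();        /* this chare will send no further requests */

  processQueue();
}

/* wherever a request is actually served (directly in requestResult() or
   later in processQueue()), one unit is consumed */
void myChare::serveRequest(which_results_to_return, send_to_recvIndex)
{
  results_to_return = prepare_new_results(which_results_to_return);
  myChareArray[send_to_recvIndex].recvResults_from_remote_chare(results_to_return);
  completion->consume();
}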




Thanks for the help!

Cheers,
Evghenii
>
> --
> Laxmikant (Sanjay) Kale http://charm.cs.uiuc.edu
> <http://charm.cs.uiuc.edu/>
> Professor, Computer Science
> kale AT illinois.edu
> 201 N. Goodwin Avenue Ph: (217) 244-0094
> Urbana, IL 61801-2302 FAX: (217) 265-6582
>
>
>
>
>
>
> On 9/22/11 10:48 PM, "Evghenii Gaburov"
> <e-gaburov AT northwestern.edu>
> wrote:
>
>>> There's actually a mechanism that Jonathan and I added recently to do
>>> localized completion detection, a sort of 'counting barrier'.
>>> Simdemics is actually using it, and it seems to be working well for
>>> them.
>>> The documentation for it can be found at
>>>
>>> http://charm.cs.illinois.edu/manuals/html/charm++/3_15.html#SECTION000315100000000000000
>>> That needs to be revised slightly, to indicate that production and
>>> consumption are both incremental processes that can potentially
>>> overlap. In the case posited, sending a request would correspond to a
>>> produce() call, while receiving a request and exporting data would
>>> correspond to a consume() call. When a given chare is done making
>>> requests, it calls done(), and eventually the library sees that all
>>> requests have been served.
>> I can manually trace when a given chare is done making requests (i.e. by
>> sending a message to a remote chare and receiving a ticket back from that
>> remote chare confirming the message has been processed).
>> What I cannot do is have a chare figure out how many requests it should
>> expect to process from other remote chares. For this reason I need a
>> globalSync. I will check out this module to see if it can solve my
>> problem. The most important issue here is that when only a few out of many
>> hundreds or thousands of chares are active, I would like such communication
>> to have low latency (i.e. equivalent to one or two MPI_Alltoall calls with
>> a single integer).
>>
>> Cheers,
>> Evghenii
>>
>>> Phil
>>>
>>> On Thu, Sep 22, 2011 at 22:00, Kale, Laxmikant V
>>> <kale AT illinois.edu>
>>> wrote:
>>>> I think, in your context, quiescence detection is the most elegant
>>>> solution. I am curious: Why do you not like it?
>>>>
>>>> If you have multiple modules active at the time this exchange is
>>>> happening, and messaging from the other modules should continue across
>>>> this "import-export" activity, that would be one reason why QD is not a
>>>> good solution. But that's not the case for you.
>>>>
>>>> Maybe the non-threaded version (CkStartQD with a callback) is better
>>>> suited?
>>>>
>>>> Incidentally, how can you use MPI_Barrier in your MPI code? You wouldn't
>>>> know when to call it, since you don't know that another process isn't
>>>> about to send a request your way.
>>>>
>>>> --
>>>> Laxmikant (Sanjay) Kale http://charm.cs.uiuc.edu
>>>> <http://charm.cs.uiuc.edu/>
>>>> Professor, Computer Science
>>>> kale AT illinois.edu
>>>> 201 N. Goodwin Avenue Ph: (217) 244-0094
>>>> Urbana, IL 61801-2302 FAX: (217) 265-6582
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 9/22/11 9:15 PM, "Evghenii Gaburov"
>>>> <e-gaburov AT northwestern.edu>
>>>> wrote:
>>>>
>>>>> Dear All,
>>>>>
>>>>> As a new user, who is porting his MPI code to Charm++, I have the
>>>>> following question:
>>>>>
>>>>> I have a snippet of code that requests data from remote chares, and
>>>>> these chares need to send data to the requesting chare. There is no way
>>>>> to know how many messages a given chare will receive from remote chares
>>>>> requesting it to export data. In other words, a given chare may need to
>>>>> export (different) data to many remote chares that request it, and this
>>>>> chare does not know how many remote chares will request the data.
>>>>>
>>>>> For logical consistency it is not possible to proceed with further
>>>>> computations until all requested data has been imported/exported. This
>>>>> leads me to the issue of a global barrier, the equivalent of which,
>>>>> MPI_Barrier(MPI_COMM_WORLD), I use in my MPI code (there does not seem
>>>>> to be a way around such a global barrier, since this step establishes
>>>>> the communication graph between MPI tasks, or between chares in the
>>>>> Charm++ case, which later use point-to-point communication).
>>>>>
>>>>> Regretfully, I have failed to find an optimal way to issue such a
>>>>> barrier. Using CkCallbackResumeThread() won't work because the calling
>>>>> code (in the MainChare, which is a [threaded] entry) also sends messages
>>>>> to other remote chares with requests to import/export data, and those
>>>>> themselves recursively send messages to other remote chares to export
>>>>> data until a closing condition is satisfied (the depth of recursion is
>>>>> 2 or 3 calls to the same function).
>>>>>
>>>>> For now I use CkWaitQD() in the MainChare as a global synchronization
>>>>> point. I was wondering if there is a more elegant solution to issue a
>>>>> barrier so that all previously issued messages are completed before
>>>>> proceeding further.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Cheers,
>>>>> Evghenii
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> charm mailing list
>>>>> charm AT cs.uiuc.edu
>>>>> http://lists.cs.uiuc.edu/mailman/listinfo/charm
>>>>
>>>>
>>>> _______________________________________________
>>>> charm mailing list
>>>> charm AT cs.uiuc.edu
>>>> http://lists.cs.uiuc.edu/mailman/listinfo/charm
>>>>
>>>> _______________________________________________
>>>> ppl mailing list
>>>> ppl AT cs.uiuc.edu
>>>> http://lists.cs.uiuc.edu/mailman/listinfo/ppl
>>>>
>>
>> --
>> Evghenii Gaburov,
>> e-gaburov AT northwestern.edu
>>
>>
>>
>>
>>
>>
>

--
Evghenii Gaburov,
e-gaburov AT northwestern.edu










