Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent

  • From: Evghenii Gaburov <e-gaburov AT northwestern.edu>
  • To: Phil Miller <mille121 AT illinois.edu>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>, "Kale, Laxmikant V" <kale AT illinois.edu>
  • Subject: Re: [charm] [ppl] MPI_Barrier(MPI_COMM_WORLD) equivalent
  • Date: Fri, 23 Sep 2011 17:05:47 +0000

> Errr, yeah. That module is not in Charm++ 6.2. Since you're actively
> developing this application, I'd recommend using the development
> version of Charm++, from our repository. On mainstream platforms, it's
> quite stable.

I got the git version of Charm++

$ git clone git://charm.cs.uiuc.edu/charm.git

and first recompiled my code as is.

However, it fails to run with this git version of Charm++ (see the error below).

I compiled it in the same way as charm-6.2:

$ ./build charm++ mpi-linux-x86_64


Is there anything I am doing wrong here?

Cheers,
Evghenii




------------- Processor 0 Exiting: Called CmiAbort ------------
Reason: Array index length (nInts) is too long-- did you use bytes instead of integers?

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[0] Stack Traceback:
[0:0] CmiAbort+0x6b [0x5d25f2]
[0:1] _ZNK23CProxyElement_ArrayBase6ckSendEP14CkArrayMessageii+0x50 [0x55211c]
[0:2] _ZN7fvmhd3d20CProxyElement_System17generate_geometryEiPK14CkEntryOptions+0x121 [0x4b8d01]
[0:3] _ZN7fvmhd3d4MainC1EP8CkArgMsg+0x3b8 [0x4c1ba8]
[0:4] _ZN7fvmhd3d12CkIndex_Main19_call_Main_CkArgMsgEPvPNS_4MainE+0x12 [0x4c1e42]
[0:5] _Z10_initCharmiPPc+0xa95 [0x5150bb]
[0:6] [0x5d1601]
[0:7] ConverseInit+0x1ac [0x5d1538]
[0:8] main+0x48 [0x52c022]
[0:9] __libc_start_main+0xfe [0x7ffff5ee6d8e]
[0:10] [0x4b6919]
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 12084 on
node darkstar exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

>
>> A quick "$ find ." through the charm-6.2 source files does not reveal any
>> filenames with the word "completion" in them.
>>
>> Anything I am missing here?
>>
>> Cheers,
>> Evghenii
>>
>>> The documentation is missing the note on how to include that module.
>>> In your .ci file, you should add the line 'extern module completion;'
>>> to tell the interface generator that you'll be using the
>>> completion-detection module. Then include completion.h in your source
>>> file where you access it, and you'll be set.
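
For reference, roughly what this ends up looking like; the module, chare, and
file names below ("myModule", "Main", "myMainChare.cpp") are placeholders for
illustration, not taken from the actual application:

  // myModule.ci
  mainmodule myModule {
    extern module completion;    // pulls in the completion module's interface declarations
    mainchare Main {
      entry Main(CkArgMsg *m);
    };
  };

  // myMainChare.cpp
  #include "myModule.decl.h"
  #include "completion.h"        // C++ declarations for the completion detector

  class Main : public CBase_Main {
   public:
    Main(CkArgMsg *m) {
      // compiles now that the proxy type has been declared
      CProxy_CompletionDetector detector = CProxy_CompletionDetector::ckNew();
      delete m;
    }
  };

  #include "myModule.def.h"

The program is then linked with the same 'charmc ... -module completion'
command already in use.
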
>>>
>>> On Fri, Sep 23, 2011 at 10:40, Evghenii Gaburov
>>> <e-gaburov AT northwestern.edu>
>>> wrote:
>>>> I am seeking help with the completion detection module:
>>>>
>>>> As described in the documentation, I have the following constructor call:
>>>>
>>>> CProxy_CompletionDetector detector = CProxy_CompletionDetector::ckNew();
>>>>
>>>> but the code fails to compile with the following error:
>>>>
>>>> myMainChare.cpp: error: 'CProxy_CompletionDetector' was not declared in
>>>> this scope.
>>>> myMainChare.cpp: error: expected ";" before 'detector'
>>>>
>>>> I compile with $charmc -module completion.
>>>>
>>>> Is there something I am missing?
>>>>
>>>> Thanks,
>>>> Evghenii
>>>>
>>>>
>>>>
>>>>
>>>> On Sep 22, 2011, at 10:20 PM, Phil Miller wrote:
>>>>
>>>>> There's actually a mechanism that Jonathan and I added recently to do
>>>>> localized completion detection, a sort of 'counting barrier'.
>>>>> Simdemics is actually using it, and it seems to be working well for
>>>>> them.
>>>>> The documentation for it can be found at
>>>>> http://charm.cs.illinois.edu/manuals/html/charm++/3_15.html#SECTION000315100000000000000
>>>>> That needs to be revised slightly, to indicate that production and
>>>>> consumption are both incremental processes that can potentially
>>>>> overlap. In the case posited, sending a request would correspond to a
>>>>> produce() call, while receiving a request and exporting data would
>>>>> correspond to a consume() call. When a given chare is done making
>>>>> requests, it calls done(), and eventually the library sees that all
>>>>> requests have been served.
>>>>>
>>>>> Phil
>>>>>
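
A hypothetical sketch of the mapping Phil describes, cast in the request/export
setting of the original question. Everything named here is a placeholder:
Worker (a 1D chare array), its members neighbors and localData, the readonly
proxies workers, detectorProxy, and mainProxy, and the entry-method names. The
five-argument start_detection() signature is the one given in the manual and
may differ between Charm++ versions:

  // Creating and starting the detector, e.g. from Main:
  //   CProxy_CompletionDetector detectorProxy = CProxy_CompletionDetector::ckNew();
  //   detectorProxy.start_detection(
  //       numWorkers,                                            // number of contributors
  //       CkCallback(CkIndex_Worker::startExchange(), workers),  // 'start': begin producing
  //       CkCallback(CkCallback::ignore),                        // 'all_produced': unused here
  //       CkCallback(CkIndex_Main::exchangeDone(), mainProxy),   // 'finish': global completion
  //       0);                                                    // priority

  // In each array element, once told to start:
  void Worker::startExchange() {
    for (size_t i = 0; i < neighbors.size(); ++i) {
      workers[neighbors[i]].requestData(thisIndex);
      detectorProxy.ckLocalBranch()->produce();   // one unit per request sent
    }
    detectorProxy.ckLocalBranch()->done();        // this chare will send no more requests
  }

  // Serving a request counts as consumption:
  void Worker::requestData(int requester) {
    workers[requester].receiveData(localData);    // export the requested data
    detectorProxy.ckLocalBranch()->consume();     // one unit per request served
  }

Once every produce() has been matched by a consume() and all contributors have
called done(), the 'finish' callback fires and the next phase can begin.
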
>>>>> On Thu, Sep 22, 2011 at 22:00, Kale, Laxmikant V
>>>>> <kale AT illinois.edu>
>>>>> wrote:
>>>>>> I think, in your context, quiescence detection is the most elegant
>>>>>> solution. I am curious: Why do you not like it?
>>>>>>
>>>>>> If you have multiple modules active at the time this exchange is
>>>>>> happening, and messaging from the other modules should continue across
>>>>>> this "import-export" activity, that would be one reason why QD is not a
>>>>>> good solution. But that's not the case for you.
>>>>>>
>>>>>> Maybe the non-threaded version (CkStartQD with a callback) is better
>>>>>> suited?
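
Roughly what that non-threaded variant would look like; the proxy and
entry-method names here ("workers", "mainProxy", "exchangeDone", "nextPhase")
are placeholders, not from the actual code:

  void Main::startExchange() {
    workers.exchangeData();     // fire off all the requests
    // resume in Main::exchangeDone() once no messages are in flight anywhere
    CkStartQD(CkCallback(CkIndex_Main::exchangeDone(), mainProxy));
  }

  void Main::exchangeDone() {
    // every request/export message has been delivered and processed,
    // so it is safe to proceed with the next phase
    workers.nextPhase();
  }
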
>>>>>>
>>>>>> Incidentally, how can you use MPI_Barrier in your MPI code? You
>>>>>> wouldn't know when to call it, since you don't know whether another
>>>>>> process is about to send a request your way.
>>>>>>
>>>>>> --
>>>>>> Laxmikant (Sanjay) Kale                  http://charm.cs.uiuc.edu
>>>>>> Professor, Computer Science
>>>>>> kale AT illinois.edu
>>>>>> 201 N. Goodwin Avenue Ph: (217) 244-0094
>>>>>> Urbana, IL 61801-2302 FAX: (217) 265-6582
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 9/22/11 9:15 PM, "Evghenii Gaburov"
>>>>>> <e-gaburov AT northwestern.edu>
>>>>>> wrote:
>>>>>>
>>>>>>> Dear All,
>>>>>>>
>>>>>>> As a new user who is porting his MPI code to Charm++, I have the
>>>>>>> following question:
>>>>>>>
>>>>>>> I have a snippet of code that requests data from remote chares, and
>>>>>>> these chares need to send data to the requesting chare. There is no
>>>>>>> way to know how many messages a given chare will receive from remote
>>>>>>> chares asking it to export data. In other words, a given chare may
>>>>>>> need to export (different) data to many remote chares that request
>>>>>>> it, and this chare does not know how many remote chares will request
>>>>>>> the data.
>>>>>>>
>>>>>>> For logical consistency it is not possible to proceed with further
>>>>>>> computations until all requested data has been imported/exported.
>>>>>>> This leads me to the issue of a global barrier, the equivalent of
>>>>>>> which, MPI_Barrier(MPI_COMM_WORLD), I use in my MPI code (there does
>>>>>>> not seem to be a way around such a global barrier, since this step
>>>>>>> establishes the communication graph between MPI tasks, or between
>>>>>>> chares in Charm++, which later use point-to-point communication).
>>>>>>>
>>>>>>> Regretfully, I have failed to find an optimal way to issue such a
>>>>>>> barrier. Using CkCallbackResumeThread() won't work because the
>>>>>>> calling code (a [threaded] entry in the MainChare) also sends
>>>>>>> messages to other remote chares with requests to import/export data,
>>>>>>> and those chares themselves recursively send messages to further
>>>>>>> remote chares to export data until a closing condition is satisfied
>>>>>>> (the depth of recursion is 2 or 3 calls to the same function).
>>>>>>>
>>>>>>> For now I use CkWaitQD() in the MainChare as the global
>>>>>>> synchronization point. I was wondering if there is a more elegant way
>>>>>>> to issue a barrier so that all previously issued messages have
>>>>>>> completed before proceeding further.
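
Schematically, what this looks like (entry-method and proxy names are again
placeholders):

  // declared as 'entry [threaded] void run();' in the .ci file
  void Main::run() {
    workers.exchangeData();     // requests fan out, recursing 2-3 levels deep
    CkWaitQD();                 // suspend this thread until no messages remain
    workers.nextPhase();        // all imports/exports have completed here
  }
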
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Evghenii
>>>>>>>
>>>>>>>
>>>>>>>
>>>>
>>>> --
>>>> Evghenii Gaburov,
>>>> e-gaburov AT northwestern.edu
>>>>
>>
>> --
>> Evghenii Gaburov,
>> e-gaburov AT northwestern.edu
>>

--
Evghenii Gaburov,
e-gaburov AT northwestern.edu










