
charm - Re: [charm] [ppl] multi-thread in taub



  • From: Fernando Stump <fernando.stump AT gmail.com>
  • To: Phil Miller <mille121 AT illinois.edu>
  • Cc: Charm Mailing List <charm AT cs.illinois.edu>
  • Subject: Re: [charm] [ppl] multi-thread in taub
  • Date: Fri, 7 Oct 2011 14:50:02 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Phil,

The output from the several "cout <<" statements along the code is not interleaved.

It looks as if it calls one driver() first and then the other driver(). But I
know that this is not strong evidence; I'm running a small test problem,
so it may just be a coincidence.

I will let you know if I see the same pattern on a larger problem or if I
don't get any speedup.
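
(One simple check, just a sketch on my part: tag every print with the PE it
runs on, using Charm++'s CkMyPe(), and see whether both PEs actually show up.
The driver() skeleton below is my assumption of roughly what the ParFUM entry
point looks like, not my real code.)

    /* Sketch: tag driver() output with the PE number so it is obvious
     * which processor each line comes from. CkPrintf() and CkMyPe() are
     * standard Charm++ calls; ParFUM.h should pull them in. */
    #include "ParFUM.h"

    extern "C" void driver(void)
    {
        CkPrintf("[PE %d] driver() starting\n", CkMyPe());
        /* ... existing driver body ... */
        CkPrintf("[PE %d] driver() finished\n", CkMyPe());
    }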

Thanks
Fernando

" On Oct 7, 2011, at 2:40 PM, Phil Miller wrote:

> On Fri, Oct 7, 2011 at 14:22, Fernando Stump
> <fernando.stump AT gmail.com>
> wrote:
>> Hi,
>>
>> I'm running the ParFUMized version of my code on the taub cluster at UIUC.
>> Each node contains 12 processors. I'm running on one node with the option
>> +p2, but I have the feeling that the code is running on a single
>> processor. My clue is that this is related to the following "warning":
>>
>> Charm++> Running on MPI version: 2.2 multi-thread support: 0 (max
>> supported: -1)
>
> This is a detail of the underlying MPI implementation. It doesn't mean
> Charm++ is running on only 1 thread.
>
>> My question is:
>>
>> Where is the issue? Is it in how MPI was compiled, how Charm++ was
>> compiled, or how I call charmrun?
>>
>> Here is the full call.
>>
>> [fstump2@taubh2
>> io]$ ../yafeq/build/debug/yafeq/charmrun
>> ../yafeq/build/debug/yafeq/pfem.out +p2
>>
>> Running on 2 processors: ../yafeq/build/debug/yafeq/pfem.out
>> charmrun> /usr/bin/setarch x86_64 -R mpirun -np 2
>> ../yafeq/build/debug/yafeq/pfem.out
>> Charm++> Running on MPI version: 2.2 multi-thread support: 0 (max
>> supported: -1)
>> Charm++> Running on 1 unique compute nodes (12-way SMP).
>> Charm++> Cpu topology info:
>> PE to node map: 0 0
>> Node to PE map:
>> Chip #0: 0 1
>> Charm++> cpu topology info is gathered in 0.003 seconds.
>
> This output seems to indicate that things are working correctly. You
> got two cores, 0 and 1, on chip 0 of node 0. Did some other indication
> lead you to the conclusion that only one core was doing work?
>
> Phil
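
For context on the banner line Phil refers to: the "multi-thread support"
value seems to be whatever threading level the underlying MPI library reported
when Charm++ initialized it. A standalone MPI program can query the same
thing; the snippet below is only an illustrative sketch (plain MPI, compiled
with something like mpicc), not part of Charm++ or of the code discussed above.

    /* Sketch: print the MPI version and the thread level the library
     * actually provides. MPI_THREAD_SINGLE/FUNNELED/SERIALIZED/MULTIPLE
     * are the four levels defined by the MPI standard. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, version, subversion;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Get_version(&version, &subversion);
        printf("MPI %d.%d, provided thread level %d (MPI_THREAD_MULTIPLE = %d)\n",
               version, subversion, provided, MPI_THREAD_MULTIPLE);
        MPI_Finalize();
        return 0;
    }

Running it with the same MPI that Charm++ was built against (e.g. mpirun -np 2
./a.out) shows what the library provides; either way, as Phil notes, that
value does not by itself mean only one processor is doing work.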





