Re: [charm] [ppl] multi-thread in taub


  • From: Phil Miller <mille121 AT illinois.edu>
  • To: Fernando Stump <fernando.stump AT gmail.com>
  • Cc: Charm Mailing List <charm AT cs.illinois.edu>
  • Subject: Re: [charm] [ppl] multi-thread in taub
  • Date: Fri, 7 Oct 2011 14:40:51 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

On Fri, Oct 7, 2011 at 14:22, Fernando Stump
<fernando.stump AT gmail.com>
wrote:
> Hi,
>
> I'm running the ParFUMized version of my code on the taub cluster at uiuc.
> Each node contains 12 processors. I'm running on one node, with option +p2,
> but I have the feeling that the code is running on a single processor. My
> clue is that this is related to this "warning":
>
> Charm++> Running on MPI version: 2.2 multi-thread support: 0 (max
> supported: -1)

This is a detail of the underlying MPI implementation. It doesn't mean
Charm++ is running on only 1 thread.

> My question is:
>
> Where is the issue?  Is it in how MPI was compiled, how charm++ was
> compiled, or how I call charmrun?
>
> Here is the full call.
>
> [fstump2@taubh2
> io]$ ../yafeq/build/debug/yafeq/charmrun
> ../yafeq/build/debug/yafeq/pfem.out +p2
>
> Running on 2 processors:  ../yafeq/build/debug/yafeq/pfem.out
> charmrun>  /usr/bin/setarch x86_64 -R  mpirun -np 2  
> ../yafeq/build/debug/yafeq/pfem.out
> Charm++> Running on MPI version: 2.2 multi-thread support: 0 (max
> supported: -1)
> Charm++> Running on 1 unique compute nodes (12-way SMP).
> Charm++> Cpu topology info:
> PE to node map: 0 0
> Node to PE map:
> Chip #0: 0 1
> Charm++> cpu topology info is gathered in 0.003 seconds.

This output seems to indicate that things are working correctly. You
got two cores, 0 and 1, on chip 0 of node 0. Did some other indication
lead you to the conclusion that only one core was doing work?
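One quick way to check this directly is to look at where the two ranks are scheduled while the job runs (a sketch using `pfem.out`, the binary name from the transcript; `ps -o psr` is Linux-specific):

```shell
# On the compute node while the job is running:
#   psr  = the core each process is currently scheduled on
#   pcpu = its CPU utilization
ps -C pfem.out -o pid,psr,pcpu,comm

# Or watch per-core load interactively; pressing '1' in top
# expands the summary into one line per CPU.
top
```

If both ranks show high `pcpu` and different `psr` values, both cores really are doing work.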

Phil




