
charm - Re: [charm] [ppl] Using Charm AMPI



  • From: Scott Field <sfield AT astro.cornell.edu>
  • To: Leonardo Duarte <leo.duarte AT gmail.com>
  • Cc: Charm Mailing List <charm AT cs.illinois.edu>
  • Subject: Re: [charm] [ppl] Using Charm AMPI
  • Date: Fri, 30 Oct 2015 12:16:08 -0400

Hi,

  I'm glad to hear it worked out!

  I have a follow-up question (one which may be more appropriate for another thread). On Blue Waters I've been launching 32 (or 31) threads per node for SMP builds and 32 (or 31) processes per node for non-SMP builds. Should I be using 16 instead? In Sam's example he uses 16 cores/node.

  Does anyone have experience comparing Charm++ on Blue Waters when treating each node as having 16 vs. 32 cores? The documentation seems to suggest viewing the system as having 32 cores (https://bluewaters.ncsa.illinois.edu/charm).
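
  For concreteness, here is roughly what my two launch lines look like; the binary name and node counts are made up, so treat this as a sketch rather than a recommendation:

      # non-SMP build: one process per integer core, 32 per XE node
      aprun -n 64 -N 32 ./myprog

      # SMP build: one process per node, 31 worker threads + 1 comm thread
      aprun -n 2 -N 1 -d 32 ./myprog +ppn 31

  If the 16-core view is the right one, I suppose those would become -N 16 and +ppn 15, respectively.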

Best,
Scott 

On Fri, Oct 30, 2015 at 3:52 AM, Leonardo Duarte <leo.duarte AT gmail.com> wrote:
Hello everyone,

Thanks a lot for all the answers.
You were right. The problem was the +pemap and +commap parameters.
Since I was not defining them, the same core was being used for both the worker and the communication thread.
Now my simple example that used to take 11 minutes runs in just 3 seconds. I know I still have to improve this a lot, but 11 minutes was too strange to be right.
I was looking for a mistake in the build or run command line, and that was it.
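
For anyone who finds this thread later, the run line that fixed it looks roughly like this; the core numbering assumes a 32-core node, so your mapping may differ:

    # pin the 31 worker threads to cores 0-30 and the communication
    # thread to core 31, so workers and comm no longer share a core
    aprun -n 2 -N 1 -d 32 ./myprog +ppn 31 +pemap 0-30 +commap 31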

Just for the record, I changed back to PrgEnv-gnu since all of you said there is no reason for it to be slow.

Thank you all for the help.

Leonardo.



