
charm - Re: [charm] [ppl] Building charm++ with SMP and on GNU/Linux x86_64



  • From: "Mei, Chao" <chaomei2 AT illinois.edu>
  • To: "Gupta, Abhishek" <gupta59 AT illinois.edu>, Shad Kirmani <sxk5292 AT cse.psu.edu>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] [ppl] Building charm++ with SMP and on GNU/Linux x86_64
  • Date: Sat, 10 Mar 2012 02:57:08 +0000
  • Accept-language: en-US
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hi Shad,

What's the network of the cluster you use to run your program? Is it InfiniBand? If it is, then it is better to build Charm++ with "./build charm++ net-linux-x86_64 smp ibverbs --with-production" than with the MPI build. If you could tell us more about the cluster you used, it would be easier for us to point out the best option for building Charm++.

Secondly, have you tried to run a simple charm program to see if it works on your cluster?
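A quick way to check this is to build and run one of the hello examples bundled with the Charm++ source tree. This is only a sketch: the build-directory and example-directory names below assume the standard source-tree layout, and +p2 is an illustrative processor count.

```shell
# Build Charm++, then compile and run the bundled hello example on 2 PEs.
# Directory names assume the standard Charm++ source-tree layout.
./build charm++ mpi-linux-x86_64 smp
cd mpi-linux-x86_64-smp/tests/charm++/simplearrayhello
make
./charmrun ./hello +p2
```

If the hello example also segfaults, the problem is in the build or the cluster environment rather than in your own program.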

Regards,
Chao Mei


From: ppl-bounces AT cs.uiuc.edu [ppl-bounces AT cs.uiuc.edu] on behalf of Abhishek Gupta [gupta59 AT illinois.edu]
Sent: Friday, March 09, 2012 8:08 PM
To: Shad Kirmani
Cc: charm AT cs.uiuc.edu
Subject: Re: [ppl] [charm] Building charm++ with SMP and on GNU/Linux x86_64

Hi Shad,

Is there a particular reason why you are trying to use mpi-linux instead of net-linux? In general, I would recommend that you use net-linux, since it has better performance than mpi-linux. You can use the following command for building:

./build charm++ net-linux-x86_64 smp

Also, please tell us the command that you are using to run your program. For the SMP version, you need to specify a +ppn <workerThreadsPerNode> runtime argument, and you should leave one core for the communication thread. For example, on a 16-core node, you can use +ppn 15 to specify 15 worker threads per node.
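The arithmetic behind this can be sketched as follows; the binary name "pgm", the node count, and the nodelist path are all illustrative, not taken from the thread:

```shell
# Hypothetical sketch: derive the total worker-thread count (+p) from the
# node count and the per-node worker count (+ppn).
NODES=4                 # number of 16-core nodes (illustrative)
PPN=15                  # worker threads per node; one core is left for the comm thread
P=$((NODES * PPN))      # total worker threads across all nodes
echo "./charmrun ./pgm +p$P +ppn $PPN ++nodelist ./nodelist"
```

The key point is that +p counts only worker threads, so on N nodes with one communication thread per node, +p should be N * ppn, not N * cores.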

Thanks,

Abhishek


On Fri, Mar 9, 2012 at 7:48 PM, Shad Kirmani <sxk5292 AT cse.psu.edu> wrote:
Hello,

I am trying to build charm++ with SMP support. My machine specifications are:

[sxk5292@cyberstar84 test]$ uname -a
Linux cyberstar84.hpc.rcc.psu.edu 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

Please also visit http://www.ics.psu.edu/infrast/specs.html for more detailed specifications.

To build charm++, I chose the version mpi-linux-x86_64 with the smp option set.

When I run my code I get the following output from charm++:
[sxk5292@cyberstar84 test]$ ./pbsall.sh 
Charm++> Running on MPI version: 2.1
Charm++> level of thread support used: MPI_THREAD_SINGLE (desired: MPI_THREAD_FUNNELED)
Charm++> Running on SMP mode, 1 worker threads per process
Charm++> The comm. thread both sends and receives messages
Converse/Charm++ Commit ID: v6.3.0-1293-g7f245d0
Warning> Randomization of stack pointer is turned on in kernel.
------------- Processor 14 Exiting: Caught Signal ------------
Signal: 11
------------- Processor 49 Exiting: Caught Signal ------------
Signal: 11
--------------------------------------------------------------------------
mpirun noticed that process rank 14 with PID 21394 on node cyberstar83 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
...
...

I have tried building charm++ with the following command line arguments:
./build charm++ mpi-linux-x86_64 smp 
and
./build charm++ mpi-linux-x86_64 smp -DCMK_SMP=1 -DCMK_MPI_INIT_THREAD=1

Can anybody please help with the command-line arguments to build charm++?

Thanks,
Shad


_______________________________________________
charm mailing list
charm AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/charm

_______________________________________________
ppl mailing list
ppl AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/ppl




