
charm - [charm] Charm++ and MPI

charm AT lists.cs.illinois.edu

Subject: Charm++ parallel programming system


[charm] Charm++ and MPI


  • From: François Tessier <francois.tessier AT inria.fr>
  • To: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Cc: Emmanuel Jeannot <emmanuel.jeannot AT inria.fr>, Guillaume Mercier <mercier AT labri.fr>
  • Subject: [charm] Charm++ and MPI
  • Date: Thu, 30 Jan 2014 14:07:10 +0100
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hello,

I built Charm++ on top of MPI (./build charm++ mpi-linux-x86_64) and I have
a few questions about that.
When I run kNeighbor with 512 chares on 64 cores (8 chares per core), I
can see that the size of the MPI communicator is 64. So, here are my
questions:
- Does each MPI process manage the 8 chares bound to the
corresponding core?
- If I create a new communicator with different MPI ranks, can I
apply it during execution? How does that work exactly?

Thanks for your help,

François

--
___________________
François TESSIER
PhD Student at University of Bordeaux
Inria - Runtime Team
Tel : 0033.5.24.57.41.52
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA





