[charm] MPI functions used in CHARM++ (with arch/mpi)


  • From: "Sébastien Boisvert" <sebhtml AT gmail.com>
  • To: charm AT cs.uiuc.edu
  • Subject: [charm] MPI functions used in CHARM++ (with arch/mpi)
  • Date: Wed, 17 Oct 2012 02:48:01 -0000
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hello,

According to the benchmark

http://www.hpcadvisorycouncil.com/pdf/NAMD_analysis.pdf

CHARM++ utilizes these MPI functions:

- MPI_Get_count
- MPI_Iprobe
- MPI_Isend
- MPI_Recv
- MPI_Test
- MPI_Wtime


Most of the time is spent in MPI_Iprobe.


From the CHARM++ source code file

git-clones/charm/src/arch/mpi/machine.c

In MPISendOneMsg(), I understand that messages are sent with MPI_Isend and
that MPI_Test is then used to check whether the send buffers can be safely
reused.
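
To make sure I read that correctly, here is a minimal sketch of the pattern
as I understand it. The type and function names are mine, not from machine.c:

#include <mpi.h>
#include <stdlib.h>

typedef struct {
    char *buffer;        /* outgoing message payload */
    int size;
    MPI_Request request; /* handle returned by MPI_Isend */
} PendingSend;

/* Post a non-blocking send; the buffer must stay alive until the
 * send completes. */
void send_one_msg(PendingSend *s, int dest, int tag, MPI_Comm comm)
{
    MPI_Isend(s->buffer, s->size, MPI_BYTE, dest, tag, comm, &s->request);
}

/* Later, poll with MPI_Test; free the buffer only once the send
 * has completed. */
int try_release(PendingSend *s)
{
    int done = 0;
    MPI_Test(&s->request, &done, MPI_STATUS_IGNORE);
    if (done) {
        free(s->buffer);
        s->buffer = NULL;
    }
    return done;
}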

In PumpMsgs(), it seems to me that there are at least two code paths:

First, the code attempts to use non-blocking reception with a bunch
of MPI_Irecv + MPI_Testany + MPI_Get_count.

If that does not work, the code probes for incoming messages and reads one,
if any, with MPI_Iprobe + MPI_Get_count + MPI_Recv.
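
Here is a simplified sketch of the two paths as I understand them. The
posted-receive pool, its size, and the function names are my own invention
for illustration, not the actual PumpMsgs() code:

#include <mpi.h>
#include <stdlib.h>

#define NPOSTED 8
#define POSTED_SIZE 65536

static char *posted_buf[NPOSTED];
static MPI_Request posted_req[NPOSTED];

/* At startup, post a pool of receives (pool size and buffer size
 * are invented for this sketch). */
static void init_posted(MPI_Comm comm)
{
    for (int i = 0; i < NPOSTED; i++) {
        posted_buf[i] = malloc(POSTED_SIZE);
        MPI_Irecv(posted_buf[i], POSTED_SIZE, MPI_BYTE,
                  MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &posted_req[i]);
    }
}

/* Path 1: check the pre-posted receives with MPI_Testany. */
static int pump_posted(MPI_Comm comm)
{
    int index, done, nbytes;
    MPI_Status status;
    MPI_Testany(NPOSTED, posted_req, &index, &done, &status);
    if (!done || index == MPI_UNDEFINED)
        return 0;
    MPI_Get_count(&status, MPI_BYTE, &nbytes);
    /* ... hand posted_buf[index] (nbytes long) to the runtime ... */
    /* Re-post the receive so the slot can be reused. */
    MPI_Irecv(posted_buf[index], POSTED_SIZE, MPI_BYTE,
              MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &posted_req[index]);
    return 1;
}

/* Path 2: probe for a message, size a buffer, then receive into it. */
static int pump_probe(MPI_Comm comm)
{
    int flag, nbytes;
    MPI_Status status;
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &status);
    if (!flag)
        return 0;
    MPI_Get_count(&status, MPI_BYTE, &nbytes);
    char *msg = malloc(nbytes);
    MPI_Recv(msg, nbytes, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
             comm, MPI_STATUS_IGNORE);
    /* ... hand msg to the runtime ... */
    return 1;
}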

But in the NAMD benchmarks above, only the code path with MPI_Iprobe,
MPI_Get_count, and MPI_Recv shows up; there is no trace of the code path
with MPI_Irecv, MPI_Testany, and MPI_Get_count.

It seems to me that the non-blocking path should perform better: with
pre-posted MPI_Irecv calls, the MPI library can place incoming data directly
into the user-provided buffer, whereas with MPI_Iprobe + MPI_Recv the message
may first land in an internal buffer and be copied afterwards.


So what's going on? Is it because NAMD was compiled with special options
for the benchmarks?

Thank you.

***
Sébastien Boisvert
Ph.D. student, Laval University
http://boisvert.info



