
Re: [charm] [ppl] AMPI Application + MPI-based load balancer


  • From: Phil Miller <mille121 AT illinois.edu>
  • To: François Tessier <francois.tessier AT inria.fr>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] [ppl] AMPI Application + MPI-based load balancer
  • Date: Wed, 25 Feb 2015 12:12:31 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

When compiling your LB code and associated routines, double-check that the mpi.h that they're picking up is in fact the host's and not the one provided by AMPI. This is a problem we just saw for another user as well.
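A quick way to see which mpi.h an include order would select is to emulate the preprocessor's first-match search. This is only a sketch: the directory names below are illustrative stand-ins, not real AMPI or Cray install paths.

```shell
# Sketch: emulate the compiler's first-match search for mpi.h.
# The directory names are illustrative, not real install paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/ampi/include" "$tmp/cray-mpich/include"
echo '/* AMPI mpi.h stub */'       > "$tmp/ampi/include/mpi.h"
echo '/* host MPICH mpi.h stub */' > "$tmp/cray-mpich/include/mpi.h"

# resolve <header> <dir>...: print the first directory providing the
# header, just as -I ordering decides which copy the compiler sees.
resolve() {
  hdr=$1; shift
  for d in "$@"; do
    if [ -f "$d/$hdr" ]; then echo "$d/$hdr"; return 0; fi
  done
  return 1
}

# AMPI's include dir listed first: its mpi.h shadows the host's.
shadowed=$(resolve mpi.h "$tmp/ampi/include" "$tmp/cray-mpich/include")
# Host include dir first: the LB code sees the real MPI header.
wanted=$(resolve mpi.h "$tmp/cray-mpich/include" "$tmp/ampi/include")
echo "$shadowed"
echo "$wanted"
rm -rf "$tmp"
```

With the real toolchain, preprocessing the load-balancer source with `-E` and grepping the output for `mpi.h` shows which header path was actually resolved.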

On Wed, Feb 25, 2015 at 11:58 AM, François Tessier <francois.tessier AT inria.fr> wrote:
I'm going to try to answer everyone :-)

@Ehsan: No, I compiled the libraries (libtopomap, ParMETIS) with MPICH2-Cray. Maybe I can try with AMPI.

@Gengbin: I'm using libtopomap, which uses MPI. In my code, I call libtopomap functions and some MPI functions like MPI_Comm_split.

@Celso and Abhinav (and all): My load balancer works well on applications like kNeighbor, but with a plain Charm++ build (not AMPI). Maybe the issue lies there?

Thank you for your help!

François

Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA
On 25/02/2015 18:31, Abhinav Bhatele wrote:
We have demonstrated the use of ParMetisLB (which uses MPI calls) from within a Charm++ application (LeanMD) through the interoperation framework.

But Francois' case is slightly different because he would like to compile the whole thing as one big AMPI program.


On Wed, Feb 25, 2015 at 9:10 AM, Celso L. Mendes <cmendes AT illinois.edu> wrote:
Francois,

Since you say "I'm able to run the application without load balancer or with the native ones," it seems to me that the problem arises because you're using MPI calls *inside* your load balancer. Is that correct? Are those calls really essential to your balancer?

I don't think such a balancer (based on MPI) has been built
in the past, so this might be new territory in Charm++, but
probably someone at PPL could tell better than me.

-Celso



On 2/25/2015 9:49 AM, François Tessier wrote:
Hello,

I've just started working on this again, and the problem is still there... Here is a summary:

I'm working on a topology-aware load balancer. Part of my algorithm is written with MPI.

I would like to try this load balancer on the AMPI version of Ondes3D, a simulator of seismic wave propagation, on Blue Waters. To run this application, I build AMPI like this on a fresh Charm++ checkout: ./build AMPI mpi-crayxe (while linking with some libraries). I'm able to run the application without a load balancer or with the native ones. However, when I carry out this experiment with my load balancer, it fails with this error:

Reason: Cannot call MPI routines before AMPI is initialized.

Is there something special to do to use MPI functions in a load balancer with AMPI?

Thanks for your help!

François


On 09/12/2014 19:25, François Tessier wrote:
Celso,

Yes, the declaration is in the file containing the "main()" function.
But I noticed a new behavior today. With a new allocation on Blue Waters, I was able to run the application and my load balancer successfully. I've just run the application on another node, and I have the same problem as described yesterday. It seems to depend on the Blue Waters nodes... That's weird.

François

François TESSIER
PhD Student at University of Bordeaux
Inria - Runtime Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA

On 09/12/2014 00:01, Celso L. Mendes wrote:
Francois,

Do you have the #include for "mpi.h" in the
same file where MPI_Init() is invoked? Is this really the same
file where "main()" is defined?

-Celso


On 12/8/2014 4:30 PM, François Tessier wrote:
There is an MPI_Init() call at the beginning of the main function in the
application's code. Is that enough to use MPI in my load balancer?
There is no MPI_Finalize() during the execution; the only one is at the
end of the main function.

++

François



_______________________________________________
charm mailing list
charm AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/charm



_______________________________________________
ppl mailing list
ppl AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/ppl




--
Abhinav Bhatele, people.llnl.gov/bhatele
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory

