
charm - Re: [charm] [ppl] AMPI Application + MPI-based load balancer



  • From: "Celso L. Mendes" <cmendes AT illinois.edu>
  • To: François Tessier <francois.tessier AT inria.fr>, <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] [ppl] AMPI Application + MPI-based load balancer
  • Date: Wed, 25 Feb 2015 11:10:03 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Francois,

Since you say "I'm able to run the application without load
balancer or with the native ones," it seems to me that the
problem arises because you're making MPI calls *inside* your
load balancer. Is that correct? Are those calls really
essential to your balancer?

I don't think such a balancer (based on MPI) has been built
before, so this might be new territory in Charm++, but
someone at PPL could probably say better than I can.

-Celso


On 2/25/2015 9:49 AM, François Tessier wrote:
Hello,

I've just started working on this again, and the problem is still
there. Here is a summary:

I'm working on a topology-aware load balancer. Part of my algorithm
is written with MPI.

I would like to try this load balancer on the AMPI version of Ondes3D, a
simulator of seismic wave propagation, on Blue Waters. To run this
application, I build AMPI from a fresh Charm++ checkout like this: ./build
AMPI mpi-crayxe (while linking against some libraries). I'm able to run the
application without a load balancer, or with the native ones. However, when
I run the same experiment with my load balancer, it fails with this
error:

Reason: Cannot call MPI routines before AMPI is initialized.

Is there something special to do in order to use MPI functions in a load
balancer with AMPI?
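(A minimal sketch of one guard that could be tried inside the balancer, assuming the balancer code can include mpi.h. MPI_Initialized() is a standard MPI call; whether AMPI reports it as set at the time the balancer runs is exactly the open question in this thread. The balancer_step name and the fallback behavior are hypothetical.)

```c
/* Hypothetical sketch: skip the MPI-based phase of the balancer
 * until the MPI layer reports itself initialized, instead of
 * aborting with "Cannot call MPI routines before AMPI is
 * initialized." */
#include <mpi.h>
#include <stdio.h>

static void balancer_step(void)
{
    int mpi_ready = 0;
    MPI_Initialized(&mpi_ready);   /* standard MPI query */
    if (!mpi_ready) {
        /* Fall back to a communication-free strategy. */
        fprintf(stderr, "MPI not ready; skipping MPI-based phase\n");
        return;
    }
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* ... topology-aware exchange using MPI collectives ... */
}
```

(Compiling this requires an MPI installation, so it is shown as a sketch only.)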

Thanks for your help,

François

Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA

On 09/12/2014 19:25, François Tessier wrote:
Celso,

Yes, the declaration is in the file containing the "main()" function.
But I noticed a new behavior today. On a new allocation on Blue Waters,
I was able to run the application and my load balancer successfully.
I've just run the application on another node, and I get the
same problem as described yesterday. It seems to depend on the Blue
Waters nodes... That's weird.

François

François TESSIER
PhD Student at University of Bordeaux
Inria - Runtime Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA

On 09/12/2014 00:01, Celso L. Mendes wrote:
Francois,

Do you have the #include for "mpi.h" in the
same file where MPI_Init() is invoked? Is that really the same
file where "main()" is defined?
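(For reference, a sketch of the minimal layout being asked about: the #include, main(), and MPI_Init() all in the same translation unit.)

```c
/* Sketch of the layout Celso is asking about: mpi.h included in
 * the same file that defines main() and calls MPI_Init(). */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* first MPI call, at the top of main */

    /* ... application code, including load-balancing phases ... */

    MPI_Finalize();           /* the only MPI_Finalize, at the end */
    return 0;
}
```

(Shown as a sketch only; building it requires an MPI toolchain such as the one AMPI provides.)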

-Celso


On 12/8/2014 4:30 PM, François Tessier wrote:
There is an MPI_Init() at the beginning of the main function in the
application's code. Is that enough to use MPI in my load balancer?
There is no MPI_Finalize() during the execution; the only one is at the
end of the main function.

++

François



_______________________________________________
charm mailing list
charm AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/charm



_______________________________________________
ppl mailing list
ppl AT cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/ppl

















Archive powered by MHonArc 2.6.16.
