charm - [charm] MPI interop on top of AMPI

  • From: Jozsef Bakosi <jbakosi AT lanl.gov>
  • To: charm <charm AT lists.cs.illinois.edu>
  • Subject: [charm] MPI interop on top of AMPI
  • Date: Mon, 16 Jul 2018 15:34:14 -0600

Hi folks,

I'm converting a Charm++ app that has so far successfully interoperated with MPI
using time division. It was previously built with Charm++ on top of MPI
(mpi-linux-x86_64), and I am now moving it to Charm++ with AMPI (netlrts-linux-x86_64).

As a new AMPI user, I have a couple of questions:

1. I noticed that when I try to include mpi-interoperate.h, I get:

#error "Trying to compile Charm++/MPI interoperation against AMPI built atop
Charm++"

I guess this means that AMPI and MPI interoperation is not intended to be used
this way. If that is so, how should I interoperate with code that directly calls
MPI functions, on top of Charm++ code that is mostly written in native Charm++
but also contains direct MPI calls?

This is a test harness that has a main Charm++ module as well as a main()
routine which calls MPI_Init() followed by CharmLibInit() and CharmLibExit().
In particular, can I no longer call CharmLibInit() and CharmLibExit()? I did
comment out these calls, together with the include of mpi-interoperate.h, and
then the MPI part of the code, the one in main(), does not run at all. How
should I do MPI interoperation within native Charm++ that is built with AMPI?
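
For reference, the harness looks roughly like the following. This is a minimal
sketch, assuming the usual time-division interop pattern with
CharmLibInit(MPI_COMM_WORLD, argc, argv); the actual Charm++ library entry
point called between init and exit is elided:

    #include <mpi.h>
    #include "mpi-interoperate.h"  // this is the include that now triggers the #error under AMPI

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);

      // ... MPI-only setup, including calls into the MPI TPLs ...

      // Hand a communicator to Charm++ for the time-division phase
      // (assuming the MPI-interop form of CharmLibInit).
      CharmLibInit(MPI_COMM_WORLD, argc, argv);

      // ... call into the main Charm++ module here (entry point elided) ...

      CharmLibExit();

      // ... remaining MPI work ...
      MPI_Finalize();
      return 0;
    }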

A bit more background on my motivation: so far I have been using Charm++ on top
of MPI because I want to use a number of MPI-only third-party libraries. (I
interface these TPLs from within chare groups and they work great.) Now I would
like to explore building Charm++ with AMPI and linking the MPI code against the
AMPI MPI libraries, so that (1) I don't need a separate MPI implementation, and
(2) I can use the advanced AMPI load-balancing features to do thread migration
within the MPI-only libraries. So far, I have managed to build all my MPI TPLs
with AMPI, as well as all my executables, but ran into the above issue. (I have
not yet tried running the executables that call the MPI-only libraries.)

Thanks in advance for the insight,
Jozsef


