
charm - Re: [charm] MPI interop on top of AMPI



  • From: Sam White <white67 AT illinois.edu>
  • To: Jozsef Bakosi <jbakosi AT lanl.gov>
  • Cc: charm <charm AT lists.cs.illinois.edu>
  • Subject: Re: [charm] MPI interop on top of AMPI
  • Date: Tue, 17 Jul 2018 16:16:44 -0500

Hi Jozsef,

Interoperating AMPI with Charm++ in the way you describe is definitely something we are interested in, but we do not fully support it yet. Ideally, you would build your TPLs on AMPI and then call directly into those libraries from regular chare array elements, with a one-to-one mapping from chare array elements to AMPI ranks. In practice this is complicated, because AMPI must create user-level threads for its ranks to run on, so that it can suspend and resume them to support MPI's blocking semantics and migrate their thread stacks. AMPI also binds chare arrays to these user-level threads so that AMPI's internal state migrates along with them. AMPI's implementation is currently tied to managing its own user-level threads in this way.
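To make that concrete, here is a minimal sketch of the kind of AMPI program this design supports (my own example, not from our test suite; it assumes the AMPI_Migrate() extension and the predefined AMPI_INFO_LB_SYNC hint described in the AMPI manual, and would be built with ampicc). Each rank runs on its own user-level thread, so the blocking receive below suspends only that thread, and the AMPI_Migrate() call marks a point where the runtime may migrate ranks, thread stacks and bound chare state included:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  int rank, size, token = 0;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  for (int iter = 0; iter < 5; ++iter) {
    if (size > 1) {
      if (rank == 0) {
        token = iter;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
        /* Blocking call: under AMPI this suspends rank 1's user-level
           thread until the message arrives, instead of blocking the
           whole OS process. */
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
      }
    }
    /* Collective hint that this is a safe point for the runtime to
       migrate ranks for load balancing (AMPI extension). */
    AMPI_Migrate(AMPI_INFO_LB_SYNC);
  }

  if (rank == 0) printf("done: token = %d\n", token);
  MPI_Finalize();
  return 0;
}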

We have a hacky proof of concept of AMPI + Charm++ interoperation, wherein you can launch 'N' chare array elements and 'M' AMPI ranks (for arbitrary N and M) and send messages between the two collections, here: https://charm.cs.illinois.edu/gerrit/#/c/charm/+/4366/

This means that the chare array elements cannot themselves make MPI calls, but they can send messages to MPI ranks which can then make those calls. There is also more work to be done to bind the MPI ranks to the chare array elements. We have plans for that in the future, and can stay in touch with you regarding updates on it. It's certainly motivating for us to hear from users who want this feature too.

Thanks!
Sam

On Mon, Jul 16, 2018 at 4:34 PM, Jozsef Bakosi <jbakosi AT lanl.gov> wrote:
Hi folks,

I'm converting a Charm++ app that has so far successfully interoperated with MPI
using time division, previously built with Charm++ on top of MPI
(mpi-linux-x86_64), to now use Charm++ with AMPI (netlrts-linux-x86_64).

As a new AMPI user, I have a couple of questions:

1. I noticed that when I try to include mpi-interoperate.h, I get:

#error "Trying to compile Charm++/MPI interoperation against AMPI built atop Charm++"

I guess this means that Charm++/MPI interoperation is not intended to be used
this way when Charm++ is built with AMPI. If so, how should I interoperate code
that directly calls MPI functions with Charm++ code that is mostly written in
native Charm++ but also contains direct MPI calls?

This is a test harness that has a main Charm++ module as well as a main()
routine, which calls MPI_Init() followed by CharmLibInit() and CharmLibExit(). In
particular, can I no longer call CharmLibInit() and CharmLibExit()? I commented
out these calls, along with the include of mpi-interoperate.h, and then the
MPI part of the code (the one in main()) does not run at all. How should I do
MPI interoperation within native Charm++ that is built with AMPI?
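For reference, the harness' main() is structured roughly like this (a trimmed sketch of the time-division pattern from the Charm++ interop manual; the call into the Charm++ side is only a placeholder here):

#include <mpi.h>
#include "mpi-interoperate.h"  /* provides CharmLibInit()/CharmLibExit();
                                  this is the include that now triggers the
                                  #error when building against AMPI */

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  /* ... direct MPI calls and MPI-only TPL setup ... */

  /* Hand the MPI communicator to Charm++; while the Charm++ scheduler
     runs, the MPI side is quiescent (time division). */
  CharmLibInit(MPI_COMM_WORLD, argc, argv);
  /* ... call into the Charm++ module under test here (placeholder) ... */
  CharmLibExit();

  /* ... more direct MPI calls ... */
  MPI_Finalize();
  return 0;
}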

A bit more background on my motivation: so far I have been using Charm++ on top
of MPI because I want to use a number of MPI-only third-party libraries. (I
interface these TPLs from within chare groups and they work great.) Now I would
like to explore building with Charm++ and AMPI and linking against AMPI's MPI
libraries, so that (1) I don't need a separate MPI implementation, and (2) I can
use AMPI's advanced load-balancing features to do thread migration within the
MPI-only libraries. So far I have managed to build all my MPI TPLs with AMPI, as
well as all my executables, but ran into the above issue. (I have not yet tried
running the executables that call into the MPI-only libraries.)
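For concreteness, the way I drive the MPI-only TPLs from a chare group today looks roughly like this (a sketch only: tpl_solve() is a stand-in for a real TPL entry point, and the .ci declaration of the group is shown as a comment):

/* ---- tpldriver.ci (sketch) ----
   module tpldriver {
     group TplDriver {
       entry TplDriver();
       entry void run();
     };
   };
   ---- tpldriver.C (sketch) ---- */
#include <mpi.h>
#include "tpldriver.decl.h"

class TplDriver : public CBase_TplDriver {
 public:
  TplDriver() {}
  void run() {
    /* The TPL makes its own MPI calls on MPI_COMM_WORLD; with Charm++
       built on MPI (mpi-linux-x86_64) there is one PE per MPI rank, so
       this maps directly onto the launching MPI job. */
    /* tpl_solve(MPI_COMM_WORLD); */  /* stand-in for the real TPL call */
    CkPrintf("PE %d: TPL call finished\n", CkMyPe());
  }
};

#include "tpldriver.def.h"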

Thanks for the insight in advance,
Jozsef



