
charm - Re: [charm] AMPI automatic code transformation

  • From: Jeff Hammond <jeff.science AT gmail.com>
  • To: Phil Miller <mille121 AT illinois.edu>
  • Cc: Maksym Planeta <mplaneta AT os.inf.tu-dresden.de>, charm <charm AT lists.cs.illinois.edu>
  • Subject: Re: [charm] AMPI automatic code transformation
  • Date: Fri, 21 Oct 2016 15:55:22 -0700


> Currently, I just wanted to compare AMPI with MPI, and for that I planned to port several benchmarks (NPB or mini-apps). I would be happy to have any tool that makes porting smoother.

> I believe we actually have ported versions of the NPB that we could readily share with you. We've also already ported and tested parts of the Mantevo suite and most of the Lawrence Livermore ASC proxy applications.


The Parallel Research Kernels (PRK) project already supports AMPI, in addition to Charm++ and roughly a dozen other programming models.  See https://github.com/ParRes/Kernels/ for details.  The AMPI builds are part of our CI system (https://travis-ci.org/ParRes/Kernels), so I know they are working.

We didn't publish AMPI results in http://dx.doi.org/10.1007/978-3-319-41321-1_17, but it is a good overview of the PRK project in general.  I can provide more details offline if you want.
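
The main source-level change such porting usually involves is privatizing mutable global and static state, since AMPI runs each MPI rank as a user-level thread within a process.  Below is a minimal, hypothetical C sketch of that kind of change; the names are illustrative and not taken from NPB, Mantevo, or the PRK.

    /* Hypothetical sketch of AMPI-style privatization (not from any of the
     * benchmarks discussed).  Under AMPI, ranks share an address space, so a
     * mutable global such as
     *     static double local_sum;
     * would be shared by all ranks in a process.  The usual fix, manual or via
     * an automatic transformation tool, is to carry per-rank state explicitly.
     */
    #include <mpi.h>
    #include <stdio.h>

    typedef struct {
        int    rank;
        double local_sum;   /* formerly a global, now per-rank state */
    } rank_state_t;

    static void accumulate(rank_state_t *st, double x) {
        st->local_sum += x;          /* touches only this rank's state */
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        rank_state_t st = { 0, 0.0 };
        MPI_Comm_rank(MPI_COMM_WORLD, &st.rank);

        for (int i = 0; i < 1000; ++i)
            accumulate(&st, 1.0 / (i + 1 + st.rank));

        double global_sum = 0.0;
        MPI_Reduce(&st.local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);
        if (st.rank == 0)
            printf("global sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

The same source builds with a regular mpicc or with AMPI's ampicc wrapper; the automatic transformation tools mentioned in this thread aim to mechanize this kind of refactoring for codes with many globals.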
 

> Mostly, I wanted to distinguish between commodity clusters and more proprietary supercomputers like IBM Blue Gene and Cray. Those specialized systems have more quirks that make AMPI a bit harder to use; SLURM on a common Linux cluster is perfectly straightforward.


Charm++ runs wonderfully on both Blue Gene and Cray machines.  I thought we tested AMPI on Cray as part of our study, but perhaps my memory is suffering from bitflips.  I guess Blue Gene may have issues related to virtual memory and compiler support.

Best,

Jeff

--


