
Re: [charm] AMPI automatic code transformation


  • From: Jeff Hammond <jeff.science AT gmail.com>
  • To: Maksym Planeta <mplaneta AT os.inf.tu-dresden.de>
  • Cc: Phil Miller <mille121 AT illinois.edu>, charm <charm AT lists.cs.illinois.edu>
  • Subject: Re: [charm] AMPI automatic code transformation
  • Date: Mon, 24 Oct 2016 10:57:21 -0700



On Fri, Oct 21, 2016 at 5:11 PM, Maksym Planeta <mplaneta AT os.inf.tu-dresden.de> wrote:
    Dear Jeff,

    Do I understand correctly that it is fair to compare the MPI1 implementations with the AMPI implementations (the code looks to be the same)?


Yes.  MPI1, AMPI, and FG_MPI implementations are either identical or trivially different (e.g. MPI1 uses a little C99, which is a recent change).  It is on my list to merge them (https://github.com/ParRes/Kernels/issues/85).  Any non-trivial differences are in the build system.
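To make that concrete, here is a minimal sketch (not taken from the PRK sources) of why a single source tree can serve both: a plain MPI-1 program compiles unchanged for AMPI, and only the compiler wrapper chosen by the build system differs. The wrapper names below (mpicc, ampicc) are the usual ones; your installation may differ.

    /* Minimal sketch, not PRK code: an MPI-1 program that builds
     * unchanged as either an MPI1 or an AMPI kernel.  The only
     * difference lives in the build system, e.g. (typical wrappers,
     * paths may differ):
     *   mpicc  -O2 -o kernel.mpi1 kernel.c     # plain MPI
     *   ampicc -O2 -o kernel.ampi kernel.c     # AMPI virtualized ranks
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      /* Under AMPI, "rank" is a virtual processor (a user-level
       * thread), but the source is the same as under a traditional
       * MPI implementation. */
      printf("rank %d of %d\n", rank, size);
      MPI_Finalize();
      return 0;
    }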
 
    Are these applications too small to benefit from AMPI load balancing?


It's not that they are too small, but rather that the algorithms implemented there use a homogeneous and constant decomposition.  This is a known issue and is the subject of active investigation (e.g. http://dx.doi.org/10.1109/IPDPS.2016.65).  Both PIC and AMR kernels, which should exhibit the dynamic load imbalance that AMPI can exploit, are in development.
 
    Can AMPI automatically serialize individual ranks of PRK to employ transparent load balancing, using isomalloc?


We do not implement anything special for this today, but it should be straightforward to add, since the PRKs are designed to be easy to port to new models.
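For illustration only, here is a minimal sketch of what such a port might look like: an ordinary MPI iteration loop with a periodic call to AMPI's migration extension. It assumes the AMPI_Migrate(MPI_Info) call and the AMPI_INFO_LB_SYNC info object of recent Charm++/AMPI releases, an AMPI-defined preprocessor guard, and linking with -memory isomalloc so rank heaps are serialized automatically; none of this is PRK code, and the exact names should be checked against the AMPI manual for your version.

    /* Hedged sketch of transparent load balancing with isomalloc.
     * Assumed (verify against your AMPI version): AMPI_Migrate(MPI_Info),
     * the predefined AMPI_INFO_LB_SYNC info object, and the AMPI macro
     * being defined when compiling with ampicc.
     *
     * Possible build/run lines (typical names, not verified here):
     *   ampicc -o kernel kernel.c -memory isomalloc
     *   charmrun +p4 ./kernel +vp16 +balancer GreedyLB
     */
    #include <mpi.h>

    static void do_iteration(void) { /* kernel work would go here */ }

    int main(int argc, char **argv)
    {
      MPI_Init(&argc, &argv);

      for (int it = 0; it < 100; it++) {
        do_iteration();
    #ifdef AMPI
        /* Give the runtime a chance to migrate this virtual rank; with
         * isomalloc the rank's heap and stack move with it, so no
         * user-written pack/unpack code is needed. */
        if (it % 10 == 0)
          AMPI_Migrate(AMPI_INFO_LB_SYNC);
    #endif
      }

      MPI_Finalize();
      return 0;
    }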

Best,

Jeff
 

On 10/22/2016 12:55 AM, Jeff Hammond wrote:

        Currently, I just wanted to compare AMPI vs. MPI, and for that I
        planned to port several benchmarks (NPB or miniapps). I would be
        happy to have any tool that makes porting smoother.


    I believe we actually have ported versions of the NPB that we could
    readily share with you. We've also already ported and tested parts
    of the Mantevo suite and most of the Lawrence Livermore ASC proxy
    applications.


The Parallel Research Kernels (PRK) project already supports AMPI, in
addition to Charm++ and approximately a dozen other programming models.
See https://github.com/ParRes/Kernels/ for details.  The AMPI builds are
part of our CI system (https://travis-ci.org/ParRes/Kernels) so I know
they are working.

We didn't publish AMPI results
in http://dx.doi.org/10.1007/978-3-319-41321-1_17 but it is a good
overview of the PRK project in general.  I can provide more details
offline if you want.



    Mostly, I wanted to distinguish between commodity clusters and more
    proprietary supercomputers, like IBM Blue Gene and Cray. The
    specialized systems have more quirks that make AMPI a bit harder to
    use. SLURM on a common Linux cluster is perfectly straightforward.


Charm++ runs wonderfully on both Blue Gene and Cray machines.  I thought
we tested AMPI on Cray as part of our study, but perhaps my memory is
suffering from bitflips.  I guess Blue Gene may have issues related to
virtual memory and compiler support.

Best,

Jeff

--
Jeff Hammond
jeff.science AT gmail.com
http://jeffhammond.github.io/

--
Regards,
Maksym Planeta



