Re: [charm] AMPI automatic code transformation


  • From: Maksym Planeta <mplaneta AT os.inf.tu-dresden.de>
  • To: Jeff Hammond <jeff.science AT gmail.com>, Phil Miller <mille121 AT illinois.edu>
  • Cc: charm <charm AT lists.cs.illinois.edu>
  • Subject: Re: [charm] AMPI automatic code transformation
  • Date: Sat, 22 Oct 2016 02:11:12 +0200

Dear Jeff,

Do I understand correctly that it is fair to compare the MPI1 implementations with the AMPI implementations (the code looks to be the same)?

Are these applications too small to benefit from AMPI load balancing?

Can AMPI automatically serialize individual PRK ranks using isomalloc, so that transparent load balancing works without manual packing code?
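
For concreteness, here is the kind of minimal change I have in mind, as a sketch only: the AMPI_Migrate call, the -memory isomalloc link flag, and the +vp/+balancer options are taken from the AMPI manual, while the loop body, the counts, and the balancer choice are placeholders.

/* ampi_lb_sketch.c -- minimal AMPI program with an explicit migration point.
 *
 * Assumed build and run lines (exact flags may differ by Charm++ version):
 *   ampicc -o ampi_lb_sketch ampi_lb_sketch.c -memory isomalloc
 *   ./charmrun +p4 ./ampi_lb_sketch +vp16 +balancer GreedyLB
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    if (rank == 0)
        printf("running with %d virtual ranks\n", nranks);

    /* Heap data: with -memory isomalloc this migrates with the rank,
     * so no user pack/unpack (PUP) routines are needed. */
    double *work = malloc(1000 * sizeof(double));

    for (int iter = 0; iter < 100; ++iter) {
        /* ... compute on work[] and communicate as usual ... */

        /* Collective hint that this is a safe point to migrate ranks.
         * AMPI_Migrate(AMPI_INFO_LB_SYNC) is the current AMPI form;
         * older AMPI versions used AMPI_Migrate(void). */
        if (iter % 10 == 9)
            AMPI_Migrate(AMPI_INFO_LB_SYNC);
    }

    free(work);
    MPI_Finalize();
    return 0;
}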

On 10/22/2016 12:55 AM, Jeff Hammond wrote:

>>> Currently, I just wanted to compare AMPI vs. MPI, and for that I
>>> planned to port several benchmarks (NPB or miniapps). I will be
>>> happy to have any tool that makes porting smoother.
>>
>> I believe we actually have ported versions of the NPB that we could
>> readily share with you. We've also already ported and tested parts
>> of the Mantevo suite and most of the Lawrence Livermore ASC proxy
>> applications.


> The Parallel Research Kernels (PRK) project already supports AMPI, in
> addition to Charm++ and approximately a dozen other programming models.
> See https://github.com/ParRes/Kernels/ for details. The AMPI builds are
> part of our CI system (https://travis-ci.org/ParRes/Kernels), so I know
> they are working.
>
> We didn't publish AMPI results in
> http://dx.doi.org/10.1007/978-3-319-41321-1_17 but it is a good
> overview of the PRK project in general. I can provide more details
> offline if you want.



>> Mostly, I wanted to distinguish between commodity clusters and more
>> proprietary supercomputers, like IBM Blue Gene and Cray. The
>> specialized systems have more quirks that make AMPI a bit harder to
>> use. SLURM on a common Linux cluster is perfectly straightforward.


> Charm++ runs wonderfully on both Blue Gene and Cray machines. I thought
> we tested AMPI on Cray as part of our study, but perhaps my memory is
> suffering from bit flips. I guess Blue Gene may have issues related to
> virtual memory and compiler support.
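
If I understand the virtual-memory point correctly: isomalloc works by reserving the same virtual address ranges on every node, so a migrated rank's heap stays valid at unchanged addresses and no pointer fixup is needed, and a kernel that restricts fixed-address mmap (as Blue Gene's CNK reportedly does) would defeat that. A toy illustration of the mechanism follows; this is not AMPI's actual code, and the address below is an arbitrary assumption.

/* fixed_map_sketch.c -- the kind of fixed-address mapping isomalloc
 * relies on; migrating a heap between nodes only works if every
 * node can grant the same virtual range. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Hypothetical address; real isomalloc partitions a globally
     * agreed-upon region of the address space among virtual ranks. */
    void *want = (void *)0x500000000000UL;

    void *p = mmap(want, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap at fixed address");
        return 1;
    }

    /* Data placed here would remain valid at the same address after
     * being copied to another node that made the same mapping. */
    strcpy(p, "heap block at a node-independent address");
    printf("%p: %s\n", p, (char *)p);
    return 0;
}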

> Best,
>
> Jeff
>
> --
> Jeff Hammond
> jeff.science AT gmail.com
> http://jeffhammond.github.io/

--
Regards,
Maksym Planeta




