[charm] barrier without reduction


  • From: Robert Steinke <rsteinke AT uwyo.edu>
  • To: <charm AT cs.illinois.edu>
  • Subject: [charm] barrier without reduction
  • Date: Tue, 12 Aug 2014 10:43:44 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Is there a way in Charm++ to do a barrier without doing a reduction?

I have a situation where I need a chare group to hit six barriers over the course of a sequence of code. Having the group members do a reduction to the main chare, and then having the main chare send a message telling them to proceed, is bloating the code. I don't need a reduction to produce a value; I just need a barrier. It would be nice if I could simply put a barrier statement in the chare group code.
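
For concreteness, here is roughly what that reduction-to-main-chare pattern boils down to, as a stripped-down sketch (module and method names are made up for this message, not taken from my actual code):

    // barrier_sketch.ci
    mainmodule barrier_sketch {
      readonly CProxy_Main mainProxy;

      mainchare Main {
        entry Main(CkArgMsg *m);
        entry [reductiontarget] void phaseDone();
      };

      group Worker {
        entry Worker();
        entry void doPhase();
      };
    };

    // barrier_sketch.C
    #include "barrier_sketch.decl.h"

    /* readonly */ CProxy_Main mainProxy;

    class Main : public CBase_Main {
      CProxy_Worker workers;
      int phase;
    public:
      Main(CkArgMsg *m) : phase(0) {
        delete m;
        mainProxy = thisProxy;
        workers = CProxy_Worker::ckNew();
        workers.doPhase();                    // kick off the first phase
      }
      // Runs only after every group member has contributed, i.e. the "barrier".
      void phaseDone() {
        if (++phase < 6) workers.doPhase();   // broadcast: everyone may proceed
        else CkExit();
      }
    };

    class Worker : public CBase_Worker {
    public:
      Worker() {}
      void doPhase() {
        // ... do the work for this phase on this PE ...
        // Empty contribution: the callback fires on Main once all members arrive.
        contribute(CkCallback(CkReductionTarget(Main, phaseDone), mainProxy));
      }
    };

    #include "barrier_sketch.def.h"

Every one of those round trips costs an extra entry method and an extra broadcast, which is exactly the bloat I'd like to avoid.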

I suppose one solution would be to call MPI_Barrier directly, but I don't like that solution. First of all, is it safe to do that from within my Charm++ code? Will it always be safe in future versions of Charm++? It seems like a dangerous use of undocumented functionality. Also, the code would not be portable to Charm++ installs that were not compiled to use MPI.
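
Just so it's clear what I mean, the MPI route would amount to something like this (assuming the build is on the MPI machine layer, which is exactly the portability problem, and with all the caveats above about it being undocumented):

    #include <mpi.h>

    // Called from group code; blocks the whole underlying MPI rank, not just
    // this chare, and may not play nicely with the Charm++ scheduler.
    void rawMpiBarrier() {
      MPI_Barrier(MPI_COMM_WORLD);
    }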

For anyone who is curious why I need six barriers: this group is using a parallel library to write NetCDF files. Every member must perform define actions, where it specifies what variables are in the file. Then they all perform data actions, where they write values to those variables. Then they all must close the file. The define actions of all group members must finish before any group member performs a data action, and the data actions of all group members must finish before any group member closes the file. That's two barriers per file, and the group does this for three files, so I need six barriers. This is an infrequent operation, so speed is not much of an issue; the problem is just code bloat.
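
To make the required ordering concrete, here is the per-file structure as a sketch (defineVariables, writeData, closeFile, and barrier are placeholders for this explanation, not the real NetCDF or Charm++ calls):

    // Stand-ins for the real per-phase work; each runs on every group member.
    void defineVariables(int f) { /* declare the variables in file f */ }
    void writeData(int f)       { /* write this member's values to file f */ }
    void closeFile(int f)       { /* close file f */ }
    void barrier()              { /* all members must arrive before any continue */ }

    void writeOutputFiles() {
      for (int f = 0; f < 3; f++) {   // three NetCDF files
        defineVariables(f);
        barrier();   // no member may write data until every member has defined
        writeData(f);
        barrier();   // no member may close until every member has written
        closeFile(f);
      }
    }

With the reduction approach, each of those barrier() points turns into a contribute-to-main-chare plus a broadcast back, split across separate entry methods.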

Bob Steinke




