charm - Re: [charm] Is AMPI support MPI_Waitall?

  • From: Gengbin Zheng <zhenggb AT gmail.com>
  • To: Phil Miller <mille121 AT illinois.edu>
  • Cc: 张凯 <zhangk1985 AT gmail.com>, charm AT cs.uiuc.edu
  • Subject: Re: [charm] Is AMPI support MPI_Waitall?
  • Date: Thu, 28 Jan 2010 13:20:45 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Looks like there is a bug in AMPI_Cart_shift.
Our implementation's understanding of rank_source was not correct:
rank_source is not "myrank", it is the rank of the "left" neighbor,
i.e., the process the caller receives from during the shift. That
would also explain the hang: a receive posted with the wrong source
never matches the incoming message, so MPI_Waitall never returns.
I already have a fix that makes it work. I will check it in soon.
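
For reference, a minimal sketch of the expected MPI_Cart_shift
semantics on a 1-D periodic communicator (illustrative only, not
AMPI's internal code):

/* Expected MPI_Cart_shift behavior on a periodic 1-D ring: for
 * disp = +1, rank_source is the left neighbor of the calling rank
 * and rank_dest is the right neighbor. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int dims[1], periods[1] = {1};   /* periodic in the one dimension */
    dims[0] = nprocs;

    MPI_Comm ring;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);

    int myrank, rank_source, rank_dest;
    MPI_Comm_rank(ring, &myrank);
    MPI_Cart_shift(ring, 0, 1, &rank_source, &rank_dest);

    /* Correct values: rank_source == (myrank-1+nprocs) % nprocs and
     * rank_dest == (myrank+1) % nprocs; rank_source must not be
     * myrank itself unless nprocs == 1. */
    printf("rank %d: source=%d dest=%d\n", myrank, rank_source, rank_dest);

    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}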

Gengbin


On Thu, Jan 28, 2010 at 11:42 AM, Phil Miller <mille121 AT illinois.edu> wrote:

> After modifying the program to build its own periodic neighbor map on
> MPI_COMM_WORLD, rather than using a Cartesian communicator
> (Cart_create, Cart_shift), I got it to run successfully. The modified
> code is attached. I'll investigate why the Cart_create and Cart_shift
> functions are going awry.
>
> Phil
>
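A minimal sketch of that kind of workaround, assuming a 1-D periodic
ring built directly on MPI_COMM_WORLD (illustrative; the actual
modified code was attached to Phil's message and is not reproduced
here):

/* Hypothetical sketch: compute periodic ring neighbors by hand on
 * MPI_COMM_WORLD instead of calling MPI_Cart_create/MPI_Cart_shift. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int myrank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Same neighbors a periodic 1-D Cart_shift should report. */
    int left  = (myrank - 1 + nprocs) % nprocs;
    int right = (myrank + 1) % nprocs;

    printf("rank %d: left=%d right=%d\n", myrank, left, right);

    MPI_Finalize();
    return 0;
}
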
> On Thu, Jan 28, 2010 at 10:54, Phil Miller <mille121 AT illinois.edu> wrote:
> > On Thu, Jan 28, 2010 at 07:50, 张凯 <zhangk1985 AT gmail.com> wrote:
> >> Hi,
> >>
> >> I am a beginner with AMPI and am trying to run an MPI program using
> >> it, but I found a small problem.
> >>
> >> Here (
> >> http://www.mcs.anl.gov/research/projects/mpi/usingmpi/examples/advmsg/nbodypipe_c.htm
> >> ) you can find an example MPI program. I have successfully built and
> >> run it using both MPICH and Intel MPI.
> >>
> >> However, when I run it with AMPI, the program blocks in MPI_Waitall
> >> and never returns.
> >>
> >> I just ran it with the ++local +p2 +vp2 options. Did I miss other
> >> options, or misconfigure AMPI?
> >
> > I'm seeing the same effect as you describe on a net-linux-x86_64 build
> > of AMPI from the latest charm sources. We'll look into this and get
> > back to you.
> >
> > For reference, the attached code (with added prints) produces the
> > following:
> >
> > $ ./charmrun nbp +vp 4 20 +p4
> > Charm++: scheduler running in netpoll mode.
> > Charm++> cpu topology info is being gathered.
> > Charm++> Running on 1 unique compute nodes (8-way SMP).
> > Iteration 9
> > Iteration 9:0 a
> > Iteration 9
> > Iteration 9:0 a
> > Iteration 9
> > Iteration 9:0 a
> > Iteration 9
> > Iteration 9:0 a
> > Iteration 9:0 b
> > Iteration 9:0 b
> > Iteration 9:0 b
> > Iteration 9:0 b
> >
>
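
For context, the step of the nbodypipe example that blocks is
essentially this non-blocking ring exchange (a paraphrase of the ANL
example, not its exact code; the function name is hypothetical):

/* Paraphrase of the nbodypipe pipeline step (not the exact ANL code;
 * the function name is hypothetical). Each rank forwards its particle
 * buffer to the right neighbor while receiving the next buffer from
 * the left, then waits on both requests. If MPI_Cart_shift wrongly
 * reports rank_source as the caller's own rank, the Irecv is posted
 * against a sender that never sends, and MPI_Waitall blocks forever. */
#include <mpi.h>

void pipeline_step(double *sendbuf, double *recvbuf, int count,
                   int rank_source, int rank_dest, MPI_Comm comm) {
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, count, MPI_DOUBLE, rank_source, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, rank_dest, 0, comm, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}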


--
--------------------------------------------
Gengbin Zheng, Ph.D.
Research Scientist
Parallel Programming Laboratory
Department of Computer Science
University of Illinois at Urbana-Champaign
201 N. Goodwin Ave.
Urbana, IL 61801



