
Re: [charm] [ppl] MPI_Allgather


  • From: Shad Kirmani <sxk5292 AT cse.psu.edu>
  • To: "Kale, Laxmikant V" <kale AT illinois.edu>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>, "Venkataraman, Ramprasad" <ramv AT illinois.edu>
  • Subject: Re: [charm] [ppl] MPI_Allgather
  • Date: Fri, 2 Mar 2012 18:58:13 -0500

Hello Dr. Kale,

We can also do our job by all-reducing an array of doubles. So instead of an AllGather, an AllReduce with a sum operation will also work for us. Whichever works out easier for you.
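In case it helps, here is a minimal sketch of what that sum all-reduce could look like from a Charm++ group, assuming a hypothetical group "Worker" with a reduction-target entry method "gotSum" (the names are only illustrative, not from this thread):

    // Hedged sketch: each group element contributes its local array of
    // doubles to a sum reduction; the result is broadcast back to every
    // element.  "Worker"/"gotSum" are assumed names.
    //
    // .ci:  entry [reductiontarget] void gotSum(int n, double sums[n]);
    void Worker::startAllReduce(int n, double *local) {
      // Deliver the reduced array to gotSum() on every element of the group.
      CkCallback cb(CkReductionTarget(Worker, gotSum), thisProxy);
      contribute(n * sizeof(double), local, CkReduction::sum_double, cb);
    }

    // Every element receives the element-wise sum over all contributors.
    void Worker::gotSum(int n, double *sums) {
      // ... use sums[0..n-1] ...
    }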

We appreciate you going out of your way to provide us with extra functionality in charm.

Thanks,
Shad

On Fri, Mar 2, 2012 at 4:22 AM, Shad Kirmani <sxk5292 AT cse.psu.edu> wrote:
Hello Dr. Kale,

Thanks a lot for your generosity. :)

I have an array of structures distributed over PEs. Each structure has three doubles and an integer (a rough sketch of the element type follows the list below). The total size of this array ranges from 100,000 to 1-2 million elements.

  • The size of each contribution should be roughly equal to the array size divided by the number of cores.
  • The sizes of the contributions from different cores would not be very different, with a variability of, say, 10-15% between them.
  • We are planning to run this code on 10 - 1000 cores.
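For concreteness, the element type described above might look roughly like this (the field names are assumptions; the thread only says three doubles and an integer):

    // Assumed per-element layout: three doubles and an integer.
    struct Element {
      double x, y, z;   // the three doubles
      int    id;        // the integer
    };
    // 100,000 to 1-2 million of these, spread over 10-1000 cores,
    // so each core holds roughly N/P elements, varying by ~10-15%.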
Thanks again,
Shad

On Thu, Mar 1, 2012 at 12:47 PM, Kale, Laxmikant V <kale AT illinois.edu> wrote:
We often tend to do these things on demand :-)
Actually, right now we are engaged in re-doing collectives with sections (sort of like communicators). So this is a timely request. I will see if we can get a quick implementation for you.

Since the best algorithms differ depending on the message sizes and distributions, can you say:

. What's the size of the contribution from each core?
. Is it the same or very different for different processors? (min, max, average, ...)
. How many nodes and cores are you targeting for now?

Sanjay

-- 
Laxmikant (Sanjay) Kale         http://charm.cs.uiuc.edu
Professor, Computer Science     kale AT illinois.edu
201 N. Goodwin Avenue           Ph:  (217) 244-0094
Urbana, IL  61801-2302          FAX: (217) 265-6582

On 2/29/12 12:03 AM, "Shad Kirmani" <sxk5292 AT cse.psu.edu> wrote:

I want to do an MPI_Allgather on a group. Having an allgather would help a lot.

Thanks,
Shad

On Tue, Feb 28, 2012 at 4:29 PM, Ramprasad Venkataraman <ramv AT illinois.edu> wrote:
There is not yet a direct way to achieve an allgather in charm.

An immediate mechanism to achieve something like this would be to
perform a reduction using a CkReduction::set or a CkReduction::concat
reducer. The result of the reduction (gather) can then be broadcast to
all the contributing entities. However, data element ordering within
the result is not guaranteed and has to be handled manually.
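As a rough, non-authoritative sketch of that approach (assuming a hypothetical group "Workers" with an entry method "gathered"; the names are made up here), the contribution and the unpacking of the set-reduction result could look something like:

    // Hedged sketch of an allgather built from a set reduction plus a
    // broadcast.  Each element contributes its local block of doubles;
    // the callback delivers the combined result to every element.
    void Workers::startGather(int n, double *local) {
      CkCallback cb(CkIndex_Workers::gathered(NULL), thisProxy);  // broadcast
      contribute(n * sizeof(double), local, CkReduction::set, cb);
    }

    // The result is a linked sequence of CkReduction::setElement records,
    // one per contribution, in no guaranteed order.
    void Workers::gathered(CkReductionMsg *msg) {
      CkReduction::setElement *cur =
          (CkReduction::setElement *) msg->getData();
      while (cur != NULL) {
        double *block = (double *) &cur->data;           // one contribution
        int     count = cur->dataSize / sizeof(double);  // its length
        // ... place the block where it belongs; ordering has to be handled
        //     manually, e.g. by packing a source index with each block ...
        cur = cur->next();
      }
      delete msg;
    }

(With CkReduction::concat the contributions would instead arrive concatenated in one buffer, so the block boundaries have to be recoverable from the data itself.)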

What charm entity do you want to do this on: group, chare array, section?

Ram


On Tue, Feb 28, 2012 at 15:12, Shad Kirmani <sxk5292 AT cse.psu.edu> wrote:
> Hello,
>
> I want to do an MPI_Allgather in charm code. Can anybody please help me with
> this?
>
> Thanks,
> Shad
>



--
Ramprasad Venkataraman
Parallel Programming Lab
Univ. of Illinois
