Re: [charm] Fwd: Charm++ Qs


  • From: Abhinav Bhatele <bhatele AT illinoisalumni.org>
  • To: Ankur Narang <annarang AT in.ibm.com>
  • Cc: Charm Mailing List <charm AT cs.illinois.edu>, Yanhua Sun <sun51 AT illinois.edu>
  • Subject: Re: [charm] Fwd: Charm++ Qs
  • Date: Sun, 7 Jul 2013 23:51:30 -0700
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

I am ccing Yanhua who should be able to answer your questions.

- Abhinav


On Sun, Jul 7, 2013 at 9:24 PM, Ankur Narang <annarang AT in.ibm.com> wrote:
Hi Abhinav,

We resolved the Kmeans code issue, so now we only need help with UTS. The UTS code is in the Charm++ repository at:
CHARM/charm/examples/charm++/state_space_searchengine/UnbalancedTreeSearch_SE/

I need to print the load distribution across the compute nodes when large UTS binomial trees are run. I compiled the code with -module CommonLBs and ran it with +balancer <name> +LBDebug 1, but I am not getting any load balance information. Ideally, I would like this information each time the load balancer runs. I have three specific questions:
(1) Is the load balancer running by default given the above compile and run options?
(2) If not, do we need to use AtSync()?
(3) If AtSync() is needed to trigger the load balancer, how do we write that code? I think we need ResumeFromSync() as well. If there is existing UTS code with such load-balancing additions, that would be helpful.
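In case it helps, here is a sketch of the pattern I have in mind (Worker, doWork, and the file names are placeholders on my side, not the actual UTS classes; please correct me if this is wrong):

    // worker.ci (sketch):
    //   array [1D] Worker {
    //     entry Worker();
    //     entry void doWork();
    //   };

    // worker.C (sketch; assumes the worker.ci above is processed by charmc)
    #include "worker.decl.h"

    class Worker : public CBase_Worker {
    public:
      Worker() {
        usesAtSync = true;            // opt this array element into AtSync load balancing
      }
      Worker(CkMigrateMessage *m) {}  // migration constructor

      void pup(PUP::er &p) {          // serialize state so the element can migrate
        CBase_Worker::pup(p);
        // p | searchState;           // placeholder for the element's actual state
      }

      void doWork() {
        // ... perform a chunk of the tree search ...
        AtSync();                     // yield to the runtime so the balancer can run
      }

      void ResumeFromSync() {         // invoked by the runtime after balancing/migration
        thisProxy[thisIndex].doWork();  // resume the computation
      }
    };

    #include "worker.def.h"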

Thanks,
Ankur.

Senior Researcher (RSM)
Research Lead - HPC Analytics
IBM Research India
4, Block C, Institutional Area,
Phase II, Vasant Kunj,
New Delhi - 110070




From:        Abhinav Bhatele <bhatele AT illinoisalumni.org>
To:        Ankur Narang/India/IBM@IBMIN
Date:        07/08/2013 09:36 AM
Subject:        Re: Fwd: Charm++ Qs
Sent by:        bhatele AT gmail.com




Can you give me some more details? What is the UTS code, and is the Kmeans code in the Charm++ repository?


On Sun, Jul 7, 2013 at 8:50 PM, Ankur Narang <annarang AT in.ibm.com> wrote:
Sure, thanks Abhinav.

If you know someone in particular in the Charm++ group who could help, that would be great.

Ankur.

Senior Researcher (RSM)
Research Lead - HPC Analytics
IBM Research India
4, Block C, Institutional Area,
Phase II, Vasant Kunj,
New Delhi - 110070





From:        Abhinav Bhatele <bhatele AT illinoisalumni.org>
To:        Charm Mailing List <charm AT cs.illinois.edu>
Cc:        Ankur Narang/India/IBM@IBMIN
Date:        07/06/2013 06:58 AM
Subject:        Fwd: Charm++ Qs
Sent by:        bhatele AT gmail.com





Hi Ankur,

Sorry for the delay in response. I am forwarding your e-mail to the Charm mailing list and someone should be able to answer your questions there.

- Abhinav


Begin forwarded message:

From: Ankur Narang <annarang AT in.ibm.com>
Subject: Charm++ Qs
Date: June 21, 2013 4:29:02 AM PDT
To: <bhatele AT llnl.gov>

Hi Abhinav,

Hope you are doing great. I had a couple of questions on Charm++:


1) We would like to measure the load balance across the compute nodes during a UTS run on Charm++. How do we do that using the UTS code available in the Charm++ distribution? (A sketch of the kind of command I have in mind follows question 2.)


2) We have a Kmeans code in Charm++ that does not scale as the number of nodes increases. How do we resolve this issue?
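For (1), I imagine the invocation would look something like the following, assuming the binary is linked with -module CommonLBs (the binary name, processor count, and the GreedyLB strategy are just examples on my side):

    ./charmrun +p8 ./uts <tree parameters> +balancer GreedyLB +LBDebug 1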


Also, my team is hiring HPC professionals. If you know suitable candidates, please let me know.


Thanks,

Ankur.


Senior Researcher (RSM)
Research Lead - HPC Analytics
IBM Research India
4, Block C, Institutional Area,
Phase II, Vasant Kunj,
New Delhi - 110070







--
Abhinav Bhatele, people.llnl.gov/bhatele
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory


