Charm++ parallel programming system



Re: [charm] Using Load balancers in charm++


  • From: Aditya Kiran Pandare <apandar AT ncsu.edu>
  • To: Laércio Lima Pilla <laercio.pilla AT ufsc.br>
  • Cc: charm AT lists.cs.illinois.edu
  • Subject: Re: [charm] Using Load balancers in charm++
  • Date: Tue, 12 Sep 2017 19:33:59 -0400

Thank you for your response, Dr. Pilla.

Here are the same graphs using GreedyLB. The result seems to be the same: no load balancing.

Yes, I have tried using 8 cores on separate nodes, and the result is the same as before.

Do you have any idea how one can check whether chares are actually being migrated?
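One direct way to check is a small instrumentation sketch (the Worker class name is hypothetical; this assumes the chares are array elements): override the ckJustMigrated() hook, which the runtime calls on the destination PE after an element arrives, and make sure the element has a migration constructor and a complete pup() routine, since elements with incomplete serialization cannot migrate correctly.

```cpp
// Sketch of a migratable chare array element (names are hypothetical).
class Worker : public CBase_Worker {
public:
  Worker() { /* normal construction */ }

  // Migration constructor: required so the runtime can rebuild
  // the element on the destination PE.
  Worker(CkMigrateMessage *m) : CBase_Worker(m) {}

  // Serialize ALL member state here; any member omitted from pup()
  // is silently lost when the element migrates.
  void pup(PUP::er &p) {
    // p | member1; p | member2; ...
  }

  // Called by the runtime on the new PE after a migration completes,
  // so a print here confirms whether migrations actually happen.
  void ckJustMigrated() {
    CkPrintf("Worker %d migrated to PE %d\n", thisIndex, CkMyPe());
  }
};
```

If no such lines ever appear in the output, the balancer is not moving any objects, regardless of what the statistics tables show.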

Regards,

--
Aditya K Pandare
Graduate Research Assistant
Computational Fluid Dynamics Lab A
3211, Engineering Building III
Department of Mechanical and Aerospace Engineering (MAE)
North Carolina State University

On Tue, Sep 12, 2017 at 6:52 PM, Laércio Lima Pilla <laercio.pilla AT ufsc.br> wrote:

Dear Aditya,

I could be wrong, but I think DistributedLB is not configured to work when running on a single compute node, which is the situation you are describing.

Have you tried running centralized load balancers, like GreedyLB or RefineLB?

Do you have access to a cluster where you could try to run using multiple compute nodes?
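For intuition, the centralized strategy that GreedyLB uses can be sketched in plain, standalone C++ (this is an illustration of the greedy algorithm, not Charm++ code): sort the objects by measured load in descending order, then repeatedly hand the heaviest remaining object to the currently least-loaded PE.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Greedy centralized assignment: returns, for each object, the PE it
// is placed on. Objects are taken heaviest-first; each goes to the PE
// with the smallest accumulated load so far (tracked with a min-heap).
std::vector<int> greedyAssign(const std::vector<double> &loads, int numPEs) {
  std::vector<int> order(loads.size());
  for (size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
  std::sort(order.begin(), order.end(),
            [&](int a, int b) { return loads[a] > loads[b]; });

  // Min-heap of (accumulated load, PE id).
  using Entry = std::pair<double, int>;
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pes;
  for (int p = 0; p < numPEs; ++p) pes.push({0.0, p});

  std::vector<int> assignment(loads.size());
  for (int obj : order) {
    auto [load, pe] = pes.top();
    pes.pop();
    assignment[obj] = pe;
    pes.push({load + loads[obj], pe});
  }
  return assignment;
}
```

Because the strategy is centralized, every migration decision is made in one place, which makes it easier to observe whether objects are being moved at all.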

Best regards,

On 2017-09-12 19:11, Aditya Kiran Pandare wrote:

Hello,
I'm a graduate student at NC State University and am new to parallel programming and the Charm++ environment. I'm using Charm++ to parallelize a Mandelbrot set calculation. I was able to do this without load balancing, so the next step is to use a load balancer, specifically DistributedLB. I'm currently trying the "periodic load balancing mode". I was hoping to get some help from this mailing list with a few questions I have.
 
The problem I'm facing is that, even when I use a load balancer, I don't see any change in PE usage compared to no load balancer. I've attached the timelines for the cases with and without DistributedLB for comparison (timeline_distLB.pdf, timeline_noLB.pdf). I'm trying to debug my code to find out why load balancing has no visible effect. I have a hunch that the chares are not being migrated at all. I have attached the screen output from runs with and without the load balancer (DistLB.log, NoLB.log). As you can see, I ran with the +cs flag.
 
My questions:
 
1) Is there a way to check chare migration in Charm++?

2) In this test, the number of chares is 40 (as seen in the "Load distribution" screen output). However, "Total chares" shows only 12 created. Could you explain how I should interpret this?
 
3) Also, comparing the outputs of the two tests, there are differences in the "mesgs for groups" column of the statistics table. Does this mean that load balancing is actually being invoked by the code, but in an incorrect way?
 
To make sure I got the compilation, etc. right, here's how I proceeded:
 
First, I compiled and linked the code with the "-module CommonLBs" flag. Now, I'm trying to run the code on 8 cores of a single node.
 
Then, the command I used to run the code: ./charmrun +p8 ./mandel 4000 0.8 +cs +balancer DistributedLB +LBPeriod 1.0
(here ./mandel takes two arguments, an int and a double)
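A low-effort diagnostic (based on Charm++'s documented runtime options) is to add the +LBDebug flag, which makes the runtime print diagnostic output at each load-balancing step, including load statistics for the chosen strategy. For example, a run command along these lines:

```
./charmrun +p8 ./mandel 4000 0.8 +cs +balancer GreedyLB +LBPeriod 1.0 +LBDebug 1
```

Higher verbosity levels (e.g. +LBDebug 2) print more detail per step.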
 
Any help is appreciated.
 
Thank you,
 
--
Aditya K Pandare
Graduate Research Assistant
Computational Fluid Dynamics Lab A
3211, Engineering Building III
Department of Mechanical and Aerospace Engineering (MAE)
North Carolina State University


--
Laércio Lima Pilla, PhD.
Associate Professor (Professor Adjunto)
UFSC - CTC - INE, Brazil
Email: laercio.pilla AT ufsc.br or laercio.lima.pilla AT gmail.com
Tel: +55 (48) 99152 8120, +55 (48) 3721 7564
Website: www.inf.ufsc.br/~pilla/

Attachment: timeline_greedyLB.jpg
Description: JPEG image

Attachment: GreedyLB.run
Description: Binary data



