Charm++ parallel programming system



Re: [charm] Using Load balancers in charm++


  • From: Vinicius Freitas <vinicius.mct.freitas AT gmail.com>
  • To: Laércio Lima Pilla <laercio.pilla AT ufsc.br>
  • Cc: Aditya Kiran Pandare <apandar AT ncsu.edu>, charm AT lists.cs.illinois.edu
  • Subject: Re: [charm] Using Load balancers in charm++
  • Date: Tue, 12 Sep 2017 20:55:32 -0300

Dear Aditya,

As Laércio said, DistributedLB won't work unless you have multiple compute nodes to run it on.

To check for migrations, you can run your Charm++ program with +LBDebug 1 or +LBDebug 2; this makes centralized load balancers print information about their execution.

So I'd suggest trying the following command line and checking the results again:
" ./charmrun +p8 ./mandel 4000 0.8 +cs +balancer RefineLB +LBPeriod 1.0 +LBDebug 2 "
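
Another way to confirm migrations directly is to print from the migration constructor, which the runtime invokes on the destination PE whenever it moves an element. A minimal sketch, assuming a 1D chare array element class named Mandel (the class name and members are illustrative, not taken from your code):

```cpp
// Sketch only: a 1D chare array element named Mandel is assumed.
class Mandel : public CBase_Mandel {
public:
  Mandel() { }                    // normal construction
  Mandel(CkMigrateMessage *m) {   // called on the destination PE after a migration
    CkPrintf("Element %d arrived on PE %d\n", thisIndex, CkMyPe());
  }
  void pup(PUP::er &p) {
    // serialize every data member here, or migrated elements
    // will resume with stale or uninitialized state
  }
};
```

If no such messages appear with the balancer enabled, the runtime really is not migrating anything.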

Best regards,

-- 
Vinicius Marino Calvo Torres de Freitas
Computer Science Undergraduate Student (Aluno de graduação em Ciência da Computação)
Research Assistant at the Embedded Computing Laboratory at UFSC
UFSC - CTC - INE - ECL, Brazil
Email: vinicius.mctf AT grad.ufsc.br or vinicius.mct.freitas AT gmail.com 
Tel: +55 (48) 96163803

2017-09-12 19:52 GMT-03:00 Laércio Lima Pilla <laercio.pilla AT ufsc.br>:

Dear Aditya,

I could be wrong, but I think DistributedLB is not configured to work when running on a single compute node, as is the situation that you are presenting.

Have you tried running centralized load balancers, like GreedyLB or RefineLB?

Do you have access to a cluster where you could try to run using multiple compute nodes?

Best regards,

Em 2017-09-12 19:11, Aditya Kiran Pandare escreveu:

Hello,
I'm a graduate student from NC State University and am new to parallel programming and the Charm++ environment. I'm working on using Charm++ to parallelize a Mandelbrot set calculation. I was able to do this without load balancing, so the next step is trying to use a load balancer, specifically DistributedLB. I'm currently trying the "periodic load balancing mode". I was hoping to get some help from this mailing list about a few questions I have.
 
The problem I'm facing is that, even when I use a load balancer, I don't see any change in the PE usage (as compared to no load balancer). I've attached the timelines for the case with and without DistributedLB for comparison (timeline_distLB.pdf, timeline_noLB.pdf). I'm trying to debug my code to find the reason why I cannot see any effect of load balancing. I have a hunch that the chares are not getting migrated at all. I have attached the screen outputs when I run with and without the load balancer (DistLB.log, NoLB.log). As you can see, I have run with the +cs flag.
 
My questions:
 
1) Is there a way to check chare-migration in charm++?

2) In this test, the number of chares is 40 (as seen in the "Load distribution" screen output). However, the "Total chares" shows only 12 created. Could you explain how I can interpret this?
 
3) Also, if we compare the outputs of the two tests, it can be seen that there are differences in the "mesgs for groups" column of the statistics table. Does this mean that load balancing is actually being used by the code, but in an incorrect way?
 
To make sure I got the compilation, etc. right, here's how I proceeded:
 
First, I compiled & linked the code with the "-module CommonLBs". Now, I'm trying to run the code on 8 cores of a single node.
 
Then, the command I used to run the code: ./charmrun +p8 ./mandel 4000 0.8 +cs +balancer DistributedLB +LBPeriod 1.0
(here ./mandel takes two arguments, an int and a double)
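Concretely, the build steps were of this shape (file names are placeholders for my actual sources; the flags are the standard charmc workflow):

```shell
charmc mandel.ci                  # generate mandel.decl.h / mandel.def.h
charmc -c mandel.C                # compile
charmc -language charm++ -module CommonLBs -o mandel mandel.o
```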
 
Any help is appreciated.
 
Thank you,
 
--
Aditya K Pandare
Graduate Research Assistant
Computational Fluid Dynamics Lab A
3211, Engineering Building III
Department of Mechanical and Aerospace Engineering (MAE)
North Carolina State University


--
Laércio Lima Pilla, PhD.
Associate Professor (Professor Adjunto)
UFSC - CTC - INE, Brazil
Email: laercio.pilla AT ufsc.br or laercio.lima.pilla AT gmail.com
Tel: +55 (48) 99152 8120, +55 (48) 3721 7564
Website: www.inf.ufsc.br/~pilla/



