
Re: [charm] Regarding using charm++ on MultiGPU machine


  • From: Michael Robson <mprobson AT illinois.edu>
  • To: "Wijayasiri,Malavi Pathirannahalage" <adeeshaw AT ufl.edu>
  • Cc: "charm AT cs.illinois.edu" <charm AT cs.illinois.edu>
  • Subject: Re: [charm] Regarding using charm++ on MultiGPU machine
  • Date: Thu, 3 Dec 2015 06:48:03 -0600

Hello Malavi,

By default, our GPU framework, GPUManager, uses all of the GPUs it detects on a machine. If you look at lines 431-436 of cuda-hybrid-api.cu, you see:

431 void initHybridAPI(int myPe) {
432
433   int deviceCount;
434   cudaGetDeviceCount(&deviceCount);
435
436   cudaSetDevice(myPe % deviceCount);


Line 436 is responsible for setting the device each PE sends its work to: PEs are assigned to GPUs round robin (e.g., with two GPUs, even-numbered PEs use device 0 and odd-numbered PEs use device 1). If you need to change the scheme from round robin, you can modify that calculation, as in the sketch below. Please let us know if you have any more questions or suggestions for ways we can improve our GPU infrastructure.
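
For example, here is a minimal sketch (not code that ships in the framework) of a block mapping, where consecutive PEs share a device instead of being interleaved. PES_PER_NODE is an assumed constant you would supply for your machine:

  void initHybridAPI(int myPe) {

    int deviceCount;
    cudaGetDeviceCount(&deviceCount);

    // Block mapping: the first pesPerDevice PEs share device 0,
    // the next pesPerDevice share device 1, and so on.
    // PES_PER_NODE is an assumption: the number of PEs on this node.
    int pesPerDevice = (PES_PER_NODE + deviceCount - 1) / deviceCount;
    cudaSetDevice((myPe % PES_PER_NODE) / pesPerDevice);
  }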

Thanks,
Michael Robson

On Mon, Nov 30, 2015 at 11:55 AM, Wijayasiri, Malavi Pathirannahalage <adeeshaw AT ufl.edu> wrote:

Hi,

I am trying to use Charm++ on a multi-GPU machine. My requirement is to divide the workload across the GPU cards, but I could not find any documentation about this.

I read the files in hybridAPI in Charm++, and as I understand it, there is no support for my requirement. (It can run many chares, but all of them run on the same GPU device.)

Is my understanding correct? Or is there a way to distribute the workload among GPU devices using Charm++?

Thanks in advance.



