[charm] Profiling and tuning charm++ applications


  • From: Alexander Frolov <alexndr.frolov AT gmail.com>
  • To: charm AT cs.uiuc.edu
  • Subject: [charm] Profiling and tuning charm++ applications
  • Date: Wed, 22 Jul 2015 19:24:12 +0300
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hi!

I am profiling my application with Projections and found that the usage profile is terribly low (~45%) whenever two or more cores are used (at the moment I am investigating scalability within a single SMP node). For a single PE the usage is about 65%, which does not look good either.
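
For reference, this is roughly how I am collecting the traces; as far as I understand, the -tracemode projections link option and the +logsize runtime option are the standard Projections knobs, and "myapp" is just a placeholder for my binary:

    # Link the application against the Projections tracing module
    # ("myapp" stands in for the real binary name).
    charmc -language charm++ -o myapp myapp.o -tracemode projections

    # Run it and write the trace logs; +logsize bounds the in-memory
    # log buffer (in log entries) before it is flushed to disk.
    mpirun -np 16 ./myapp +logsize 1000000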

I would suspect that something is wrong with the MPI environment (e.g. MPI processes being continuously migrated between cores), but maybe the problem is in the Charm++ configuration?
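
In case it matters, this is what I am planning to try for pinning; I_MPI_PIN/I_MPI_DEBUG are Intel MPI controls and +setcpuaffinity/+pemap are Charm++ runtime flags, though I am not sure which layer should own the binding:

    # Pin MPI ranks to fixed cores with Intel MPI; I_MPI_DEBUG=4
    # makes the runtime print the resulting pinning map at startup.
    export I_MPI_PIN=1
    export I_MPI_PIN_PROCESSOR_LIST=0-15
    export I_MPI_DEBUG=4
    mpirun -np 16 ./myapp

    # Alternatively, let the Charm++ RTS set the affinity itself
    # (+pemap gives an explicit core list; mainly meant for SMP builds).
    mpirun -np 16 ./myapp +setcpuaffinity +pemap 0-15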

Has anybody seen similar behavior in Charm++ applications, i.e. no scalability where it would be expected?
Any suggestions would be much appreciated! :-)

Hardware:
2x Intel(R) Xeon(R) CPU E5-2690 with 65868940 kB (~64 GB) of memory

System software:
icpc version 14.0.1, impi/4.1.0.030

Charm++ runtime:
./build charm++ mpi-linux-x86_64 mpicxx -verbose 2>&1
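
Since I am only looking at a single node for now, I am also considering rebuilding with the shared-memory targets instead of the plain MPI layer; a sketch of the build lines I have in mind (assuming the standard target names):

    # Pure shared-memory build for a single node (no MPI in the data path).
    ./build charm++ multicore-linux-x86_64 --with-production -j8

    # Or keep the MPI layer but enable SMP mode within the node.
    ./build charm++ mpi-linux-x86_64 smp mpicxx --with-production -j8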

PS: I also tried building with the --with-production option, but it did not improve the situation significantly.

Best,
   Alex


