[charm] Charm++ - Error in Parallel Prefix No Barrier (¿Starvation?)


  • From: Pedro Esequiel Tarazi <pedrotarazi AT gmail.com>
  • To: charm AT lists.cs.illinois.edu
  • Subject: [charm] Charm++ - Error in Parallel Prefix No Barrier (¿Starvation?)
  • Date: Tue, 07 Mar 2017 17:48:32 +0000

Hi! We are Computer Engineering students at Facultad de Ciencias Exactas, Físicas y Naturales, Universidad Nacional de Córdoba, Argentina.
We are doing research on the Charm++ framework. While analyzing several of the codes given in the tutorial, we found an error: during execution of the Parallel Prefix No Barrier program, it stops printing to the console, but the processes keep running in the background. This is probably starvation. It does not happen every time (approximately 2 runs out of 5). We would like to know whether this is a problem in the code or in our execution.
The program was run on two notebooks and on one cluster, all running Linux. Below we show an example of the output of an execution.
We await your reply. Thank you. Regards.

Aguilar, Mauricio - Tarazi, Pedro
Students of Computer Engineering
Facultad de Ciencias Exactas, Físicas y Naturales - UNC - Argentina

****************************************************************************************************************************
[14:36:36] pedrotarazi 5.ParallelPrefix_NoBarrier $ ./charmrun +p4 prefix 10000 

Running on 4 processors:  prefix 10000
charmrun>  /usr/bin/setarch x86_64 -R  mpirun -np 4  prefix 10000
Charm++> Running on MPI version: 3.0
Charm++> level of thread support used: MPI_THREAD_SINGLE (desired: MPI_THREAD_SINGLE)
Charm++> Running in non-SMP mode: numPes 4
Converse/Charm++ Commit ID: 
Charm++: Tracemode Projections enabled.
Trace: traceroot: prefix
CharmLB> Load balancer assumes all CPUs are same.
Charm++> Running on 1 unique compute nodes (8-way SMP).
Charm++> cpu topology info is gathered in 0.007 seconds.
Running "Parallel Prefix" with 10000 elements using 4 processors.
Before: Prefix[0].value = 1
Before: Prefix[1].value = 1
Before: Prefix[2].value = 1
Before: Prefix[3].value = 1
Before: Prefix[4].value = 1
Before: Prefix[5].value = 1
Before: Prefix[6].value = 1
Before: Prefix[7].value = 1
Before: Prefix[8].value = 1
Before: Prefix[9].value = 1
Before: Prefix[10].value = 1
...
...
...
Before: Prefix[9987].value = 1
Before: Prefix[9988].value = 1
Before: Prefix[9989].value = 1
Before: Prefix[9990].value = 1
Before: Prefix[9991].value = 1
Before: Prefix[9992].value = 1
Before: Prefix[9993].value = 1
Before: Prefix[9994].value = 1
Before: Prefix[9995].value = 1
Before: Prefix[9996].value = 1
Before: Prefix[9997].value = 1
Before: Prefix[9998].value = 1
Before: Prefix[9999].value = 1
^C
**********************************************************************************************************************

P.S.: We compiled Charm++ with:
  ./build charm++ mpi-linux-x86_64 --with-production -j8
and also with:
  ./build charm++ mpi-linux-x86_64 smp --with-production -j8


--
Pedro Esequiel Tarazi
Student of Computer Engineering
Facultad de Ciencias Exactas, Físicas y Naturales - Universidad Nacional de Córdoba





Archive powered by MHonArc 2.6.19.