
charm - Re: [charm] build the mpi-linux-amd64-smp (or net-linux-amd64-smp) version with the intel 10.1 compilers



  • From: Eric Bohm <ebohm AT uiuc.edu>
  • To: Vlad Cojocaru <Vlad.Cojocaru AT eml-r.villa-bosch.de>
  • Cc: charm AT cs.uiuc.edu
  • Subject: Re: [charm] build the mpi-linux-amd64-smp (or net-linux-amd64-smp) version with the intel 10.1 compilers
  • Date: Fri, 08 Aug 2008 09:03:45 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

I do not think you want the pthreads option in combination with smp. In the SMP layer, Charm++ launches one process with +ppn N pthreads per process. The pthreads option instead controls how Charm++'s user-level threads (used in AMPI or with threaded entry methods) are implemented. We generally use pthreads for the latter only when the normal lightweight user-level thread packages (context or uJcontext) are not supported on a platform.

Additionally, you will find that performance on the SMP layer is improved on most platforms if you add -memory os to your application link line.
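Putting that advice together, a build and link along these lines is one way to apply it (a sketch only; the target name matches the thread, but the optimization level and output name are illustrative):

```shell
# Build the SMP layer WITHOUT the pthreads option, as suggested above.
# -O1 is used here because -O3 triggered the Intel 10.1 compiler bug
# reported earlier in this thread.
./build charm++ mpi-linux-amd64-smp -O1

# When linking your application, add "-memory os" so Charm++ uses the
# operating system's allocator, which often performs better under SMP.
./bin/charmc -o pgm pgm.o -memory os
```

The application object file name (pgm.o) is hypothetical; substitute your own link line.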


Vlad Cojocaru wrote:
Dear Eric,

Thanks a lot! I managed to build the net-linux-amd64-smp version with the Intel 10.1 compilers using -O1 instead of -O3, and the megatest run as ./pgm +p2 completed correctly. I then built the mpi-linux-amd64-pthreads-smp version; the build succeeded, but megatest fails with the error message below, both when run as "./pgm +p2" and as "mpirun -n1 -machinefile machines pgm +p2".

Now I have two questions. Is the pthreads option required when building an SMP version of MPI Charm++? And what is the actual difference between building mpi-linux-amd64-smp with and without the pthreads option?
One last question: does anyone know why I get the error message below?

Thanks a lot for all the help

Best wishes
vlad

------------error-----------------------------------------------------------
Megatest is running on 1 nodes 1 processors.
test 0: initiated [callback (olawlor)]
[node-06-01:30629] *** Process received signal ***
[node-06-01:30629] Signal: Segmentation fault (11)
[node-06-01:30629] Signal code: Address not mapped (1)
[node-06-01:30629] Failing at address: 0x10
[node-06-01:30629] [ 0] /lib/libpthread.so.0 [0x2b19ad1ed410]
[node-06-01:30629] [ 1] /lib/libpthread.so.0(__pthread_mutex_lock+0x10) [0x2b19ad1e8c60]
[node-06-01:30629] [ 2] pgm [0x4d49e6]
[node-06-01:30629] [ 3] /lib/libpthread.so.0 [0x2b19ad1e6f1a]
[node-06-01:30629] [ 4] /lib/libc.so.6(__clone+0x72) [0x2b19ae4a15d2]
[node-06-01:30629] *** End of error message ***
mpirun noticed that job rank 0 with PID 30629 on node node-06-01 exited on signal 11 (Segmentation fault).


Eric Bohm wrote:
This appears to be a bug in the compiler itself. If you have access to a different version of the Intel compiler, you could try that, or try compiling the affected file directly at a lower optimization level.
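Concretely, the per-file workaround could look like the commands below (the directory and file names are taken from the build log later in the thread; the choice of -O1 is an assumption, any level below -O3 that avoids the internal error would do):

```shell
# From the failing build tree, recompile only the file that triggers
# the internal compiler error, dropping -O3 to -O1:
cd mpi-linux-amd64-pthreads-smp-mpicxx/tmp
../bin/charmc -DCMK_OPTIMIZE=1 -O1 -fPIC -c -I. debug-conv.c

# With debug-conv.o now in place, resume the interrupted build:
cd ../..
./build charm++ mpi-linux-amd64-smp mpicxx
```

The final build invocation is illustrative; re-run whatever build command you used originally so make picks up the already-compiled object file.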

Vlad Cojocaru wrote:
Dear Charm users and developers,

I tried to build the mpi or net version of Charm++ with the Intel compilers and I got the error below. The same version built correctly with gcc 4.1.2, but the NAMD built on that Charm++ was about 30% slower than any NAMD compiled with the Intel 10.1 compilers.

Is there any way to fix this problem with the smp version under the Intel compilers?

Thanks

Best wishes
vlad


---------error------------------------
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. msgmgr.c
msgmgr.c(91): (col. 18) remark: LOOP WAS VECTORIZED.
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cpm.c
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cpthreads.c
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. futures.c
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cldb.c
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. topology.C
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. random.c
../bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. debug-conv.c
(0): internal error: 0_1561

compilation aborted for debug-conv.c (code 4)
Fatal Error by charmc in directory /home/cojocaru/apps/intel/charm/charm-cvs-mpi-smp-pthreads/mpi-linux-amd64-pthreads-smp-mpicxx/tmp
Command mpicc -D_REENTRANT -I../bin/../include -D__CHARMC__=1 -DCMK_OPTIMIZE=1 -I. -O3 -fPIC -I/apps/mpi/mvapich/1.0.1-2533-intel-10.1/include -c debug-conv.c -o debug-conv.o returned error code 4
charmc exiting...
gmake[2]: *** [debug-conv.o] Error 1
gmake[2]: Leaving directory `/home/cojocaru/apps/intel/charm/charm-cvs-mpi-smp-pthreads/mpi-linux-amd64-pthreads-smp-mpicxx/tmp'
gmake[1]: *** [converse] Error 2
gmake[1]: Leaving directory `/home/cojocaru/apps/intel/charm/charm-cvs-mpi-smp-pthreads/mpi-linux-amd64-pthreads-smp-mpicxx/tmp'
gmake: *** [charm++] Error 2
-------------------------------------------------
Charm++ NOT BUILT. Either cd into mpi-linux-amd64-pthreads-smp-mpicxx/tmp and try
to resolve the problems yourself, visit
http://charm.cs.uiuc.edu/
for more information. Otherwise, email the developers at
ppl AT cs.uiuc.edu


Archive powered by MHonArc 2.6.16.
