
charm - Re: [charm] about compiling charm++

charm AT lists.cs.illinois.edu

Subject: Charm++ parallel programming system



  • From: Phil Miller <mille121 AT illinois.edu>
  • To: sefer baday <sefer.baday AT unibas.ch>
  • Cc: charm AT cs.illinois.edu
  • Subject: Re: [charm] about compiling charm++
  • Date: Fri, 8 Jan 2010 10:39:19 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

On Fri, Jan 8, 2010 at 10:19, sefer baday
<sefer.baday AT unibas.ch>
wrote:
> Thank you for your reply. I really had a hard time compiling charm++.

I'm sorry to hear that. If you can tell us what you ran into, it would
help us improve the documentation.

>> What build command did you run?
>
> ./build charm++  mpi-linux   -O  -DCMK_OPTIMIZE=1

Just to check, is your system 32-bit or 64-bit? The output of 'uname
-a' would be helpful.
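For example, the machine hardware name alone is enough to tell 32-bit from 64-bit:

```shell
# Print the machine hardware name: x86_64 means a 64-bit kernel,
# i686/i386 means 32-bit.
uname -m
```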

Also, if the things suggested below don't lead to success, it might be
worthwhile testing a non-optimized build, leaving off the -O and
-DCMK_OPTIMIZE=1 options.
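That is, the same target without the optimization flags, along these lines:

```
./build charm++ mpi-linux
```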

> and I also changed the C++ compiler to mpiicpc and the Fortran compiler to
> ifort in the conv-mach.sh file
>
> CMK_CPP_CHARM='/lib/cpp -P'
> CMK_CPP_C='mpicc -E'
> CMK_CC='mpicc '

Is this the Intel C compiler corresponding to the Intel C++ and
Fortran compilers configured below? It's the right binary name, but
there might be another MPI compiler by that name earlier on your path.
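One way to check, assuming a POSIX-ish shell; the `mpicc -show` convention (printing the underlying compiler command without compiling anything) is supported by MPICH-derived MPIs, including Intel MPI:

```shell
# List every mpicc on the PATH, in order; the first match is the one
# the build picks up. Fall back to a message if none is found.
which -a mpicc 2>/dev/null || echo "no mpicc on PATH"
# On MPICH-derived MPIs (including Intel MPI), this would print the
# underlying compiler command line:
# mpicc -show
```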

> CMK_CXX='mpiicpc '
> CMK_CXXPP='mpiicpc -E '
> CMK_CF77='f77'
> CMK_CF90='ifort'
> CMK_RANLIB='ranlib'
> CMK_LIBS='-lckqt -lmpich -pthread'
> CMK_LD_LIBRARY_PATH="-Wl,-rpath,$CHARMLIBSO/"
> CMK_NATIVE_LIBS=''
> CMK_NATIVE_CC='gcc '
> CMK_NATIVE_LD='gcc'
> CMK_NATIVE_CXX='g++ '
> CMK_NATIVE_LDXX='g++'
> CMK_NATIVE_CC='gcc '
> CMK_NATIVE_CXX='g++ '
> CMK_F90LIBS='-L/usr/lib -L/opt/absoft/lib -lf90math -lfio -lU77 -lf77math '
> CMK_MOD_NAME_ALLCAPS=1
> CMK_MOD_EXT="mod"
> CMK_F90_USE_MODDIR=1
> CMK_F90_MODINC="-p"
>
>> What version of Linux are you running this on (distribution and release)?
>
> CentOS release 5.4
> linux kernel version : 2.6.18-92.el5
>
>> What MPI libraries and compilers are you using?
>
>
> mpicc and mpiicpc
> version : intel/ictce/p_3.2.020/Linux/impi/3.2.1.009

This isn't the latest, but should be perfectly fine.

>> Is this test failure repeatable?
>
> Yes, it always gives the same error.
>
>> Does it happen with only 1 processor?
>
> Simple tests run correctly.
>
> ./charmrun ./hello

How about the megatest program on one processor?
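That is, the same invocation as the failing run below, but restricted to one processor:

```
./charmrun ./pgm 12 6 +p1
```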

> Running on 1 processors:  ./hello
> Running Hello on 1 processors for 5 elements
> Hello 0 created
> Hello 1 created
> Hello 2 created
> Hello 3 created
> Hello 4 created
> Hi[17] from element 0
> Hi[18] from element 1
> Hi[19] from element 2
> Hi[20] from element 3
> Hi[21] from element 4
> All done
> End of program
>
>
>
>> Does it happen with more than 2 processors?
>
> Yes, it gives the same error for 8 processors too.
>
>
>
> thanks a lot
>
> sefer
>
>
>
> On 8 Jan 2010, at 15:10, Phil Miller wrote:
>
>> Sefer,
>>
>> More information would be very helpful in diagnosing your issue.
>>
>> What build command did you run?
>>
>> What version of Linux are you running this on (distribution and release)?
>>
>> What MPI libraries and compilers are you using?
>>
>> Is this test failure repeatable?
>>
>> Does it happen with only 1 processor?
>>
>> Does it happen with more than 2 processors?
>>
>> Phil
>>
>> On Fri, Jan 8, 2010 at 08:33, sefer baday
>> <sefer.baday AT unibas.ch>
>> wrote:
>>>
>>> Hello,
>>>
>>> I am trying to compile charm++ on a Linux cluster with an MPI compiler.
>>>
>>> I successfully compiled charm++. However, when I run megatest, it fails.
>>>
>>> The following is the error I got in the test run:
>>>
>>> ./charmrun ./pgm 12 6 +p2
>>>
>>> Running on 2 processors:  ./pgm 12 6
>>> Megatest is running on 2 processors.
>>> test 0: initiated [bitvector (jbooth)]
>>> test 0: completed (0.00 sec)
>>> test 1: initiated [immediatering (gengbin)]
>>> test 1: completed (0.01 sec)
>>> test 2: initiated [callback (olawlor)]
>>> rank 0 in job 1  bc2-login01_51184   caused collective abort of all ranks
>>>  exit status of rank 0: killed by signal 11
>>>
>>>
>>> Could you help me?
>>>
>>> What could my mistake be?
>>>
>>> thanks
>>>
>>> sefer
>>>
>
>
