Re: [charm] about compiling charm++


  • From: Phil Miller <mille121 AT illinois.edu>
  • To: sefer baday <sefer.baday AT unibas.ch>
  • Cc: charm AT cs.illinois.edu
  • Subject: Re: [charm] about compiling charm++
  • Date: Fri, 8 Jan 2010 11:28:02 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

On Fri, Jan 8, 2010 at 11:20, sefer baday <sefer.baday AT unibas.ch> wrote:
> Hi Phil,
>
> I checked my system; it is 64-bit.
> First I tried the non-optimized option, and it didn't work.
> I also checked the path for the MPI compiler, and it was fine.
>
> Then I decided to build with the mpi-linux-amd64 architecture, and it
> compiled successfully.
>
> Then I ran megatest and got this output:
>
> ./charmrun ./pgm 12 6 +p2
>
> Running on 2 processors:  ./pgm 12 6
> Megatest is running on 2 processors.
> test 0: initiated [bitvector (jbooth)]
> test 0: completed (0.00 sec)
> test 1: initiated [immediatering (gengbin)]
> test 1: completed (0.01 sec)
> test 2: initiated [callback (olawlor)]
> test 2: completed (0.00 sec)
> test 3: initiated [reduction (olawlor)]
> test 3: completed (0.00 sec)
> test 4: initiated [inherit (olawlor)]
> test 4: completed (0.00 sec)
> test 5: initiated [templates (milind)]
> test 5: completed (0.00 sec)
> test 6: initiated [statistics (olawlor)]
> test 6: completed (0.00 sec)
> test 7: initiated [rotest (milind)]
> test 7: completed (0.00 sec)
> test 8: initiated [priotest (mlind)]
> test 8: completed (0.00 sec)
> test 9: initiated [priomsg (fang)]
> test 9: completed (0.00 sec)
> test 10: initiated [marshall (olawlor)]
> test 10: completed (0.01 sec)
> test 11: initiated [migration (jackie)]
> test 11: completed (0.00 sec)
> test 12: initiated [queens (jackie)]
> test 12: completed (0.00 sec)
> test 13: initiated [packtest (fang)]
> test 13: completed (0.00 sec)
> test 14: initiated [tempotest (fang)]
> test 14: completed (0.00 sec)
> test 15: initiated [arrayring (fang)]
> test 15: completed (0.00 sec)
> test 16: initiated [fib (jackie)]
> test 16: completed (0.00 sec)
> test 17: initiated [synctest (mjlang)]
> test 17: completed (0.00 sec)
> test 18: initiated [nodecast (milind)]
> test 18: completed (0.00 sec)
> test 19: initiated [groupcast (mjlang)]
> test 19: completed (0.00 sec)
> test 20: initiated [varraystest (milind)]
> test 20: completed (0.00 sec)
> test 21: initiated [varsizetest (mjlang)]
> test 21: completed (0.00 sec)
> test 22: initiated [nodering (milind)]
> test 22: completed (0.00 sec)
> test 23: initiated [groupring (milind)]
> test 23: completed (0.00 sec)
> test 24: initiated [multi immediatering (gengbin)]
> test 24: completed (0.01 sec)
> test 25: initiated [multi callback (olawlor)]
> test 25: completed (0.00 sec)
> test 26: initiated [multi reduction (olawlor)]
> test 26: completed (0.00 sec)
> test 27: initiated [multi statistics (olawlor)]
> test 27: completed (0.00 sec)
> test 28: initiated [multi priotest (mlind)]
> test 28: completed (0.00 sec)
> test 29: initiated [multi priomsg (fang)]
> test 29: completed (0.00 sec)
> test 30: initiated [multi marshall (olawlor)]
> test 30: completed (0.03 sec)
> test 31: initiated [multi migration (jackie)]
> test 31: completed (0.00 sec)
> test 32: initiated [multi packtest (fang)]
> test 32: completed (0.00 sec)
> test 33: initiated [multi tempotest (fang)]
> test 33: completed (0.00 sec)
> test 34: initiated [multi arrayring (fang)]
> test 34: completed (0.00 sec)
> test 35: initiated [multi fib (jackie)]
> test 35: completed (0.01 sec)
> test 36: initiated [multi synctest (mjlang)]
> test 36: completed (0.01 sec)
> test 37: initiated [multi nodecast (milind)]
> test 37: completed (0.00 sec)
> test 38: initiated [multi groupcast (mjlang)]
> test 38: completed (0.00 sec)
> test 39: initiated [multi varraystest (milind)]
> test 39: completed (0.00 sec)
> test 40: initiated [multi varsizetest (mjlang)]
> test 40: completed (0.00 sec)
> test 41: initiated [multi nodering (milind)]
> test 41: completed (0.00 sec)
> test 42: initiated [multi groupring (milind)]
> test 42: completed (0.00 sec)
> test 43: initiated [all-at-once]
> test 43: completed (0.02 sec)
> All tests completed, exiting
> End of program
>
>
> So, does this mean that it is OK?

Yes, it seems to. If you want a bit more assurance, you can run the
full test suite with "make test" inside the built charm directory.
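
For example, a minimal sketch, assuming the mpi-linux-amd64 tree you just
built (the exact directory layout may vary between Charm++ versions):

    cd charm/mpi-linux-amd64/tmp   # the directory the build script compiled in
    make test                      # builds and runs the full test suite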

Phil

>
>
> thanks
>
> sefer
>
>
>
>
> On 8 Jan 2010, at 16:39, Phil Miller wrote:
>
>> On Fri, Jan 8, 2010 at 10:19, sefer baday <sefer.baday AT unibas.ch> wrote:
>>>
>>> Thank you for your reply. I really had a hard time compiling charm++.
>>
>> I'm sorry to hear that. If you can tell us what you ran into, it would
>> help us improve the documentation.
>>
>>>> What build command did you run?
>>>
>>> ./build charm++ mpi-linux -O -DCMK_OPTIMIZE=1
>>
>> Just to check, is your system 32-bit or 64-bit? The output of 'uname
>> -a' would be helpful.
>>
>> Also, if the things suggested below don't lead to success, it might be
>> worthwhile testing a non-optimized build, leaving off the -O and
>> -DCMK_OPTIMIZE=1 options.
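>>
>> For example (a sketch; substitute whichever architecture matches your
>> system):
>>
>>   ./build charm++ mpi-linux-amd64
>>
>> Without CMK_OPTIMIZE, assertions and debugging checks stay enabled, which
>> may turn a silent crash into a more informative error message.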
>>
>>> I also changed the C++ compiler to mpiicpc and the Fortran compiler to
>>> ifort in the conv-mach.sh file:
>>>
>>> CMK_CPP_CHARM='/lib/cpp -P'
>>> CMK_CPP_C='mpicc -E'
>>> CMK_CC='mpicc '
>>
>> Is this the Intel C compiler corresponding to the Intel C++ and
>> Fortran compilers configured below? It's the right binary name, but
>> there might be another MPI compiler by that name earlier on your path.
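>>
>> For example, you could check with the following (a sketch; '-show' works
>> for MPICH-derived wrappers such as Intel MPI, while Open MPI spells it
>> '--showme'):
>>
>>   which mpicc    # which wrapper comes first on your PATH
>>   mpicc -show    # print the underlying compiler command the wrapper runs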
>>
>>> CMK_CXX='mpiicpc '
>>> CMK_CXXPP='mpiicpc -E '
>>> CMK_CF77='f77'
>>> CMK_CF90='ifort'
>>> CMK_RANLIB='ranlib'
>>> CMK_LIBS='-lckqt -lmpich -pthread'
>>> CMK_LD_LIBRARY_PATH="-Wl,-rpath,$CHARMLIBSO/"
>>> CMK_NATIVE_LIBS=''
>>> CMK_NATIVE_CC='gcc '
>>> CMK_NATIVE_LD='gcc'
>>> CMK_NATIVE_CXX='g++ '
>>> CMK_NATIVE_LDXX='g++'
>>> CMK_F90LIBS='-L/usr/lib -L/opt/absoft/lib -lf90math -lfio -lU77 -lf77math'
>>> CMK_MOD_NAME_ALLCAPS=1
>>> CMK_MOD_EXT="mod"
>>> CMK_F90_USE_MODDIR=1
>>> CMK_F90_MODINC="-p"
>>>
>>>> What version of Linux are you running this on (distribution and
>>>> release)?
>>>
>>> CentOS release 5.4
>>> Linux kernel version: 2.6.18-92.el5
>>>
>>>> What MPI libraries and compilers are you using?
>>>
>>>
>>> mpicc and mpiicpc
>>> version: intel/ictce/p_3.2.020/Linux/impi/3.2.1.009
>>
>> This isn't the latest, but should be perfectly fine.
>>
>>>> Is this test failure repeatable?
>>>
>>> Yes, it always gives the same error.
>>>
>>>> Does it happen with only 1 processor?
>>>
>>> Simple tests run correctly. For example:
>>>
>>> ./charmrun ./hello
>>
>> How about the megatest program on one processor?
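>>
>> That is, the same invocation as your run above, but on one processor:
>>
>>   ./charmrun ./pgm 12 6 +p1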
>>
>>> Running on 1 processors:  ./hello
>>> Running Hello on 1 processors for 5 elements
>>> Hello 0 created
>>> Hello 1 created
>>> Hello 2 created
>>> Hello 3 created
>>> Hello 4 created
>>> Hi[17] from element 0
>>> Hi[18] from element 1
>>> Hi[19] from element 2
>>> Hi[20] from element 3
>>> Hi[21] from element 4
>>> All done
>>> End of program
>>>
>>>
>>>
>>>> Does it happen with more than 2 processors?
>>>
>>> Yes, it gives the same error with 8 processors too.
>>>
>>>
>>>
>>> thanks a lot
>>>
>>> sefer
>>>
>>>
>>>
>>> On 8 Jan 2010, at 15:10, Phil Miller wrote:
>>>
>>>> Sefer,
>>>>
>>>> More information would be very helpful in diagnosing your issue.
>>>>
>>>> What build command did you run?
>>>>
>>>> What version of Linux are you running this on (distribution and
>>>> release)?
>>>>
>>>> What MPI libraries and compilers are you using?
>>>>
>>>> Is this test failure repeatable?
>>>>
>>>> Does it happen with only 1 processor?
>>>>
>>>> Does it happen with more than 2 processors?
>>>>
>>>> Phil
>>>>
>>>> On Fri, Jan 8, 2010 at 08:33, sefer baday <sefer.baday AT unibas.ch> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I am trying to compile charm++ on a Linux cluster with an MPI compiler.
>>>>>
>>>>> I compiled charm++ successfully. However, when I run megatest, it fails.
>>>>>
>>>>> The following is the error I got in the test run:
>>>>>
>>>>> ./charmrun ./pgm 12 6 +p2
>>>>>
>>>>> Running on 2 processors:  ./pgm 12 6
>>>>> Megatest is running on 2 processors.
>>>>> test 0: initiated [bitvector (jbooth)]
>>>>> test 0: completed (0.00 sec)
>>>>> test 1: initiated [immediatering (gengbin)]
>>>>> test 1: completed (0.01 sec)
>>>>> test 2: initiated [callback (olawlor)]
>>>>> rank 0 in job 1  bc2-login01_51184   caused collective abort of all
>>>>> ranks
>>>>>  exit status of rank 0: killed by signal 11
>>>>>
>>>>>
>>>>> Could you help me?
>>>>>
>>>>> What could my mistake be?
>>>>>
>>>>> thanks
>>>>>
>>>>> sefer
>>>>>
>>>
>>>
>
>




