
charm - Re: [charm] charm++ with our supercomputer

charm AT lists.cs.illinois.edu

Subject: Charm++ parallel programming system


Re: [charm] charm++ with our supercomputer


  • From: Phil Miller <mille121 AT illinois.edu>
  • To: Mostafa Gaber <Mostafa.Gaber AT bibalex.org>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] charm++ with our supercomputer
  • Date: Sun, 19 Jul 2009 08:33:12 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

On Sun, Jul 19, 2009 at 08:24, Mostafa Gaber <Mostafa.Gaber AT bibalex.org> wrote:
> I tried this build command $ ./build charm++ mpi-linux-x86_64 icc
> and there was an error "Command icc -fpic -I../bin/../include
> -D__CHARMC__=1 -DFOR_CPLUS=1 -c machine.c -o machine.o returned error code
> 4"

I can hazard a guess, but without more information I can't be too
confident in it. Seeing output from the compiler would be very useful.

On your machine, if mpicc is already a wrapper for icc, then you
shouldn't specify icc on the build command line. By letting charm use
the default mpicc compiler wrapper, you will ensure that the proper
include and link flags are passed to icc.
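Concretely, a build invocation along those lines would look like the
following sketch. The `-show`/`--showme` flags are MPICH and Open MPI
conventions respectively, so the sanity check may differ on another MPI
implementation:

```shell
# Confirm that mpicc actually wraps icc (flag depends on the MPI):
mpicc -show      # MPICH-style wrappers print the underlying command
mpicc --showme   # Open MPI-style wrappers do the same

# Then build the MPI target without naming icc explicitly, so the
# wrapper supplies the correct include and link flags:
./build charm++ mpi-linux-x86_64
```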

Phil


> Kindly, do you have any idea for that?
>
> Thanks.
>
> Mostafa Gaber
> Software Engineer
> ISIS Department (bibalex.org/isis/)
> ICT Sector
> Bibliotheca Alexandrina
> Tel:  +2 (03) 483 9999 Ext.: 1453
> Cell: +2 (010) 316 7187
> Fax: +2 (03) 482 0405
> P.O. Box 138, Chatby, Alexandria 21526, Egypt
> ________________________________________
> From: unmobile AT gmail.com [unmobile AT gmail.com] On Behalf Of Phil Miller [mille121 AT illinois.edu]
> Sent: Tuesday, July 14, 2009 4:49 PM
> To: Mostafa Gaber
> Cc: charm AT cs.uiuc.edu
> Subject: Re: charm++ with our supercomputer
>
> On Tue, Jul 14, 2009 at 01:10, Mostafa Gaber <Mostafa.Gaber AT bibalex.org> wrote:
>> I would like to thank you very much for your replies.
>>
>> 1) We have an MPI library installed, so could I use the command <./build
>> AMPI mpi-linux-x86_64>? In that case, should I use mpirun or charmrun to
>> start my charm++ programs?
>> 2) Isaac, you told me to use the mpirun command to launch my charm++ programs
>> if I built the mpi-linux-x86_64 version. I think charmrun will
>> use the MPI library to communicate, am I right?
>
> Either one will work fine. On mpi-* builds of charm++, charmrun is a
> wrapper for mpirun. You can use whichever you're more comfortable
> with.
>
> Phil
>
>> ________________________________________
>> From: Isaac Dooley
>> [idooley AT gmail.com]
>> Sent: Monday, July 13, 2009 9:37 PM
>> To: emenese2 AT illinois.edu
>> Cc: Mostafa Gaber; ppl AT cs.uiuc.edu
>> Subject: Re: charm++ with our supercomputer
>>
>> I would also try installing the mpi-linux-x86_64 version, using an MPI
>> which is configured to use the infiniband. Then you can use the
>> standard mpirun command to launch your charm++ programs.
>>
>> Isaac
>>
>> On Mon, Jul 13, 2009 at 9:25 AM, <emenese2 AT illinois.edu> wrote:
>>>   Hi Mostafa.
>>>   I would suggest starting by installing this version (once you are in the
>>> charm directory):
>>> ./build charm++ net-linux-x86_64
>>>   Let us know if you have any trouble with it.
>>>   Cheers,
>>>                       Esteban
>>>
>>> ---- Original message ----
>>>>Date: Mon, 13 Jul 2009 14:51:20 +0300
>>>>From: Mostafa Gaber <Mostafa.Gaber AT bibalex.org>
>>>>Subject: charm++ with our supercomputer
>>>>To: "ppl AT cs.uiuc.edu" <ppl AT cs.uiuc.edu>
>>>>
>>>>Hello Sir or Madam,
>>>>
>>>>I am a software engineer at the Bibliotheca Alexandrina, working on
>>>>our new supercomputer (SC). I'd like to install charm++ on the SC. I have
>>>>read the tutorial; it is really interesting. I found that you specify
>>>>some version options to the build script, such as bluegenep/l. I am
>>>>wondering if there is a version specific to our SC.
>>>>
>>>>The SC consists of 130 compute nodes; each node has 2 Intel Xeon
>>>>processors. They are connected via InfiniBand and also Gigabit Ethernet.
>>>>
>>>>Kindly, could you suggest a build command for the charm++ on our SC?
>
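For an mpi-* build, the two launch styles discussed earlier in the
thread are interchangeable; a minimal sketch (the binary name `hello`
and the process count are illustrative, not from the original thread):

```shell
# On mpi-* builds, charmrun is a thin wrapper around mpirun,
# so these two commands launch the same 8-process job:
mpirun -np 8 ./hello
./charmrun +p8 ./hello
```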





Archive powered by MHonArc 2.6.16.
