[charm] Weird behaviour using MPI_Alloc_mem and MPI_Free_mem


  • From: Roberto de Quadros Gomes <rqg.gomes AT gmail.com>
  • To: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: [charm] Weird behaviour using MPI_Alloc_mem and MPI_Free_mem
  • Date: Wed, 23 Oct 2013 20:43:32 -0200
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hi,

Last week I ran into a problem very similar to the one Nicolas Bock reported: a segmentation fault when the load balancer was called. I noticed that my problem started when I began using MPI_Alloc_mem and MPI_Free_mem in my AMPI code. Whenever a migration happens between these two calls, I get these messages:

CharmLB> GreedyLB: PE [0] step 0 finished at 2.566594 duration 0.008998 s

------------- Processor 1 Exiting: Caught Signal ------------
Signal: segmentation violation
Suggestion: Try running with '++debug', or linking with '-memory paranoid' (memory paranoid requires '+netpoll' at runtime).
------------- Processor 7 Exiting: Caught Signal ------------
Signal: segmentation violation
Suggestion: Try running with '++debug', or linking with '-memory paranoid' (memory paranoid requires '+netpoll' at runtime).
------------- Processor 6 Exiting: Caught Signal ------------
Signal: segmentation violation
Suggestion: Try running with '++debug', or linking with '-memory paranoid' (memory paranoid requires '+netpoll' at runtime).
Charmrun: error on request socket--
Socket closed before recv.

If no migration is needed, no problem appears.

But if I switch back to the "malloc" and "free" functions, the LB works fine.

I am not sure whether these functions are supposed to behave the same, but my application also works with MPI_Alloc_mem/MPI_Free_mem as long as MPI_Migrate is not called.
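
Just to make the pattern explicit, here is a stripped-down sketch of the sequence I mean (illustrative only; the buffer size and structure are simplified, the real code is attached below):

#include <mpi.h>

/* Illustrative sketch of the failing sequence, not the attached reproducer. */
int main(int argc, char *argv[])
{
	unsigned char *buf;
	MPI_Init(&argc, &argv);
	MPI_Alloc_mem(100, MPI_INFO_NULL, &buf);  /* buffer allocated before the migration point */
	MPI_Migrate();                            /* AMPI migration (load balancing) may happen here */
	MPI_Free_mem(buf);                        /* freed afterwards; the segfault shows up when a migration actually occurred in between */
	MPI_Finalize();
	return 0;
}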

I attached the code where this problem happens.
You can reproduce it by building with

$ ampicc mpiteste.c -o mpiteste -module GreedyLB -memory isomalloc -D_Problem

and running it.
If you remove -D_Problem, it works. 
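
In case it helps to reproduce, I launch it in the usual Charm++/AMPI way, something along these lines (the processor and VP counts here are only an example, adjust to your setup):

$ ./charmrun ./mpiteste +p4 +vp8 +balancer GreedyLB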


#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* for memset */
#include <mpi.h>


#define _GAMMA 20

#define DPRINTF printf
//#define DPRINTF

int main( int qt_par, char *par[] )
{
	int size, myrank, i, j;
	int gama = _GAMMA;   /* countdown to the next migration opportunity */
	unsigned char *buff, *buff2;

	MPI_Init(&qt_par, &par);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

	/* Long-lived buffer: allocated once, freed after the loop. */
#ifdef _Problem
	MPI_Alloc_mem(100, MPI_INFO_NULL, &buff2);
#else
	buff2 = malloc(100);
#endif
	for( i = 0; i < 100; i++ )
	{
		/* Short-lived buffer: reallocated on every iteration. */
#ifdef _Problem
		MPI_Alloc_mem(100, MPI_INFO_NULL, &buff);
#else
		buff = malloc(100);
#endif

		/* Rank-dependent amount of work, so the load is unbalanced. */
		for( j = 0; j < (myrank+5)*10; j++ )
		{
			memset(buff, myrank, 100);
		}

		MPI_Barrier(MPI_COMM_WORLD);
		/* Every _GAMMA iterations, give the runtime a chance to migrate. */
		if( gama-- < 0 )
		{
			gama = _GAMMA;
			MPI_Migrate();
		}
#ifdef _Problem
		MPI_Free_mem(buff);
#else
		free(buff);
#endif
	}
#ifdef _Problem
	MPI_Free_mem(buff2);
#else
	free(buff2);
#endif

	MPI_Barrier(MPI_COMM_WORLD);
	MPI_Finalize();

	return 0;
}



