Re: Fwd: Memory Leak & openmpi_ext


From: Riccardo Corradini
Subject: Re: Fwd: Memory Leak & openmpi_ext
Date: Mon, 13 May 2013 12:56:05 +0100 (BST)

Dear Sukanta,

you can easily debug MPI_Send.cc and MPI_Recv.cc by adding a few printf calls for the integer flag info, e.g.

int info = send_class (comm, args(0), tankrank, mytag);
printf ("info for sending class = %i\n", info);

You can follow a bottom-up approach for every brick you want to send and receive; eventually you will find the flag that detects what is not working as expected. The older versions contained lots of very similar pieces of code, but they were easier to debug in this naive way.
I would also suggest that you clear each variable once you MPI_Send it, and get rid of any variables you do not need. The master will receive them, so this will save memory.
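On the Octave side, the pattern I have in mind looks roughly like the sketch below. Note that I am writing the MPI_Send argument order (value, destination rank, tag, communicator) from memory, so please check it against the examples shipped with openmpi_ext before relying on it:

% Sketch only: send a result to the master, check the info flag, then free
% the sender-side copy. The MPI_Send argument order is assumed, not verified.
mytag   = 48;
payload = rand (1000);                         % some large intermediate result
info    = MPI_Send (payload, 0, mytag, comm);  % comm: the communicator you already use
printf ("info for sending payload = %d\n", info);
clear payload;                                 % drop the local copy to save memory
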
Bests
Riccardo



From: Sukanta Basu <address@hidden>
To: address@hidden
Sent: Sunday, 5 May 2013, 19:56
Subject: Fwd: Memory Leak & openmpi_ext

FYI

---------- Forwarded message ----------
From: Sukanta Basu <address@hidden>
Date: Sun, May 5, 2013 at 1:51 PM
Subject: Memory Leak & openmpi_ext
To: "c." <address@hidden>, Octave Forge <address@hidden>, Carnë Draug <address@hidden>


Hi Carlo and Carnë,

I hope all is well.

A few months ago, you helped me with the openmpi_ext toolbox. The toolbox works like a charm. Unfortunately, I am now facing a memory leak issue with it. I have noticed this leak on all the platforms I have access to: Ubuntu (12.04, 12.10, and 13.04) and RedHat systems. The leak persists across all recent versions of openmpi (1.6.2, 1.6.4, and 1.7.1).

Since my original code is too complicated for others to debug, I created some sample code for testing. Basically, I modified the speedtest.m file (written by Dr. Jeremy Kepner for MatlabMPI) to work in conjunction with openmpi_ext. I then ran this code under valgrind:

valgrind --leak-check=yes -v --log-file=Valgrind.out mpirun -np 2 octave -q --eval speedtest &

The summary of valgrind is:
==24981==    definitely lost: 42,031 bytes in 28 blocks
==24981==    indirectly lost: 25,802 bytes in 76 blocks
==24981==      possibly lost: 0 bytes in 0 blocks
==24981==    still reachable: 124,717 bytes in 603 blocks
==24981==        suppressed: 0 bytes in 0 blocks
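
For reference, a variant that should give one valgrind log per MPI rank (assuming valgrind and mpirun behave as documented; I have not re-run the test this way) is to start valgrind inside mpirun rather than around it:

mpirun -np 2 valgrind --leak-check=yes -v --log-file=Valgrind.%p.out octave -q --eval speedtest

Here %p is expanded by valgrind to the PID of each process, so each rank writes its own log file.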

I would appreciate it if you could help me identify the memory leak in openmpi_ext. I am attaching the speedtest.m file and the Valgrind.out file.

Best regards,
Sukanta

--
Sukanta Basu
Associate Professor
North Carolina State University
http://www4.ncsu.edu/~sbasu5/


_______________________________________________
Help-octave mailing list
address@hidden
https://mailman.cae.wisc.edu/listinfo/help-octave


