help-octave

Re: Fwd: Memory Leak & openmpi_ext


From: Sukanta Basu
Subject: Re: Fwd: Memory Leak & openmpi_ext
Date: Tue, 14 May 2013 21:37:11 -0400

Hi Riccardo,

Thanks for the advice.

I will try non-blocking send-recv and mpi_iprobe. I will keep you
posted. In case you find out something else, please let me know.

Best regards,
Sukanta

On Tue, May 14, 2013 at 4:49 PM, Riccardo Corradini
<address@hidden> wrote:
> Hi Sukanta,
> if you have a look at the .cc code in the tarball, you will see that there
> are lots of temporary copies of Octave classes. GNU Octave holds them in
> memory until the master receives the results. I think you may easily modify
> the code to use a non-blocking message:
> https://rqchp.ca/modules/cms/checkFileAccess.php?file=local.rqchpweb_udes/mpi/exemples_c/ex06_ISend-IRecv_EN.c
> If you prefer not to modify the code, you may use
> http://octave.sourceforge.net/openmpi_ext/function/MPI_Iprobe.html
> Please have a look at an example of MPI_Iprobe with openmpi_ext on Michael
> Creel's web site:
>
> http://pareto.uab.es/mcreel/Econometrics/MyOctaveFiles/
> http://pareto.uab.es/mcreel/Econometrics/MyOctaveFiles/Econometrics/MonteCarlo/montecarlo.m
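> For instance, a minimal sketch of the polling loop on the master (I assume
> the signatures from the docs above, flag = MPI_Iprobe (src, tag, comm) and
> value = MPI_Recv (src, tag, comm); mytag, chunk and the slave count are just
> placeholders, not names from your code):
>
> CW      = MPI_Comm_Load ("NEWORLD");
> mytag   = 48;                          # placeholder tag
> nslaves = MPI_Comm_size (CW) - 1;
> pending = true (1, nslaves);           # which slaves still owe a result
> while (any (pending))
>   for s = find (pending)
>     if (MPI_Iprobe (s, mytag, CW))     # non-blocking check for a message
>       chunk = MPI_Recv (s, mytag, CW); # the message is already there
>       pending(s) = false;
>       # ... copy chunk into the global array here ...
>     endif
>   endfor
>   # the master could do useful work here instead of blocking in MPI_Recv
> endwhile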
>
> Bests
> Riccardo
>
> ________________________________
> From: Sukanta Basu <address@hidden>
> To: Riccardo Corradini <address@hidden>
> Sent: Tuesday, 14 May 2013 21:32
>
> Subject: Re: Fwd: Memory Leak & openmpi_ext
>
> Hi Riccardo,
>
> The slaves (me > 0) are not seeing the whole data set. They are seeing
> only nx*ny*nzb+2 data points, while the master is seeing nx*ny*nz [note:
> nzb = nz/nprocs].
>
> Putting the randn call outside the iter loop will not be feasible in my
> research code: there, every new iteration needs updated values of u.
>
> Please note that I am not trying to reduce "memory"; I am trying to
> eliminate "growth of memory consumption". I still do not understand
> the "growth" aspect of Octave + openmpi_ext. While I might gain some
> memory/speedup by using cellfun, it cannot eliminate the "growth".
>
> Best regards,
> Sukanta
>
> On Tue, May 14, 2013 at 3:24 PM, Riccardo Corradini
> <address@hidden> wrote:
>>
>> Dear Sukanta,
>> I think you may create the matrices beforehand on the master node:
>> utot = randn (nx, ny, nz);
>> for iter = 1:100000
>>   utot = randn (nx, ny, nz);
>> endfor
>> I think here you may use cellfun to get rid of the for ... endfor and see
>> what happens on the master.
>> The master will be responsible for this huge amount of data.
>> After that you send the smaller matrices to the slaves, do the
>> calculations, and send the results back to the master.
>> The idea is to check whether the master can handle all this local data in
>> GNU Octave before the MPI_Send ...
>> The slaves do not need to see the whole dataset but just a small part ...
>> Am I clear?
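>> Something like this rough sketch (placeholder sizes and tag; I assume the
>> usual openmpi_ext signatures, info = MPI_Send (value, dest, tag, comm) and
>> value = MPI_Recv (src, tag, comm), and that 3-D arrays can be sent as in
>> your code):
>>
>> MPI_Init ();
>> CW   = MPI_Comm_Load ("NEWORLD");
>> rank = MPI_Comm_rank (CW);
>> npes = MPI_Comm_size (CW);
>> nx = 64; ny = 64; nz = 64;            # placeholder sizes
>> nzb  = nz / (npes - 1);               # slab per slave (assume it divides)
>> tag  = 7;                             # placeholder tag
>> if (rank == 0)
>>   utot = randn (nx, ny, nz);          # only the master holds the full array
>>   for s = 1:npes-1                    # hand each slave just its slab
>>     MPI_Send (utot(:, :, (s-1)*nzb+1 : s*nzb), s, tag, CW);
>>   endfor
>>   for s = 1:npes-1                    # collect the processed slabs back
>>     utot(:, :, (s-1)*nzb+1 : s*nzb) = MPI_Recv (s, tag, CW);
>>   endfor
>> else
>>   slab = MPI_Recv (0, tag, CW);       # a slave sees only its small part
>>   slab = 2 * slab;                    # placeholder for the real computation
>>   MPI_Send (slab, 0, tag, CW);
>> endif
>> MPI_Finalize ();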
>> Very bests
>> Riccardo
>> ________________________________
>> From: Sukanta Basu <address@hidden>
>> To: Jordi Gutiérrez Hermoso <address@hidden>
>> Cc: Riccardo Corradini <address@hidden>; "address@hidden"
>> <address@hidden>
>> Sent: Tuesday, 14 May 2013 19:04
>>
>> Subject: Re: Fwd: Memory Leak & openmpi_ext
>>
>> I realized I sent a wrong version of the example code. Please see the
>> correct version attached.
>>
>> On Tue, May 14, 2013 at 1:01 PM, Sukanta Basu <address@hidden>
>> wrote:
>>> Dear Riccardo and Jordi,
>>>
>>> I created this simple for-loop problem for testing the memory issue in
>>> openmpi. If I run this code in serial mode with the same loop
>>> operation, the memory leak does not happen!
>>>
>>> Two related notes:
>>> (i) I used to use MatlabMPI with Matlab for the same code. I never
>>> witnessed this memory increase issue.
>>> (ii) The same code without openmpi (i.e., the serial version) does not
>>> cause this memory issue.
>>>
>>> In my research code, I have several dozen functions within this
>>> iterative loop (a time-dependent operation in a CFD code). This loop
>>> cannot be vectorized.
>>>
>>> Also, deleting the variables from the workspace (after the MPI_Send
>>> operation) is not an option, because some of the variables are needed
>>> in different functions. Since these variables are not growing in size
>>> with time/iteration, they should not increase memory consumption in
>>> any serious way.
>>>
>>> Thanks for the links. I have been using Matlab/Octave for more than a
>>> decade and I am quite familiar with vectorization issues. Based on my
>>> experience (as a user, not a developer, of Octave), vectorization
>>> helps tremendously in speeding up operations, and having a JIT compiler
>>> makes things even faster. However, I do not see why it should cause
>>> memory growth issues. Maybe I am missing something.
>>>
>>> Best regards,
>>> Sukanta
>>>
>>> On Tue, May 14, 2013 at 12:31 PM, Jordi Gutiérrez Hermoso
>>> <address@hidden> wrote:
>>>> On 14 May 2013 12:23, Riccardo Corradini <address@hidden>
>>>> wrote:
>>>>> After sending to the master, clear all variables within each slave
>>>>> (see the small sketch below).
>>>>> http://www.mathworks.it/it/help/matlab/ref/cellfun.html
>>>>> http://yagtom.googlecode.com/svn/trunk/html/speedup.html
>>>>> Also search for vectorization tips in the Octave mailing list archive.
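>>>>> For example (just a sketch; the names of the temporaries are
>>>>> placeholders and depend on the actual code):
>>>>>   info = MPI_Send (result, 0, mytag, CW);  # ship the result to the master
>>>>>   clear result tmp1 tmp2;                  # then drop the large local copies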
>>>>
>>>> Is this documentation any good?
>>>>
>>>>
>>>>
>>>> http://www.gnu.org/software/octave/doc/interpreter/Vectorization-and-Faster-Code-Execution.html
>>>>
>>>> - Jordi G. H.
>>>
>>>
>>>
>>> --
>>> Sukanta Basu
>>> Associate Professor
>>> North Carolina State University
>>> http://www4.ncsu.edu/~sbasu5/
>>
>>
>>
>> --
>> Sukanta Basu
>> Associate Professor
>> North Carolina State University
>> http://www4.ncsu.edu/~sbasu5/
>>
>
>
>
> --
> Sukanta Basu
> Associate Professor
> North Carolina State University
> http://www4.ncsu.edu/~sbasu5/
>



-- 
Sukanta Basu
Associate Professor
North Carolina State University
http://www4.ncsu.edu/~sbasu5/

