
Re: mpi 1.1.1 released


From: Michael Creel
Subject: Re: mpi 1.1.1 released
Date: Mon, 6 Jan 2014 22:34:33 +0100

Hi all,
Is this new strategy robust to changes in Octave's internal data types? I think Riccardo's strategy was driven by a desire to keep the package easy to maintain, so that any Octave object could be sent and so that it would be easy to keep up if Octave added new objects or changed their structure. I don't know much about this, though, so perhaps I'm way off the mark.
Best,
M.


On Mon, Jan 6, 2014 at 10:07 PM, Sukanta Basu <address@hidden> wrote:
Hi Carlo,

Wow! I am pretty sure you solved the problem! I ran the sample code
(~4000 iterations)... absolutely no sign of a memory leak! I am going to
test my MATLES code and send you the good news.

I cannot thank you enough!

Best regards,
Sukanta

On Mon, Jan 6, 2014 at 12:17 PM, c. <address@hidden> wrote:
>
> On 6 Jan 2014, at 12:03, c. <address@hidden> wrote:
>
>>
>> On 5 Jan 2014, at 16:31, Sukanta Basu <address@hidden> wrote:
>>
>>> Hi Carlo,
>>>
>>> I ran the code for ~100,000 steps. I do not see any leak. I did the following:
>>>
>>> 1. mkoctfile prova.cc
>>> 2. octave < memory_test.m
>>>
>>> Do you think you are getting closer to resolving this issue?
>>
>> I'm just trying to convert your script to something more portable (i.e., something I can run on my Mac)
>> as on BSD/Darwin there is no simple equivalent of the 'free' command ...
>>
>>> Cheers,
>>> Sukanta
>> c.
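
On the portability point above: since BSD/Darwin has no simple equivalent of the 'free' command, one rough, platform-neutral alternative is to report memory use from inside the process itself via POSIX getrusage. This is only a minimal sketch (the function name print_peak_rss is a placeholder, not anything from the package); note that ru_maxrss is the peak resident set size, reported in kilobytes on Linux but in bytes on BSD/Darwin:

        #include <stdio.h>
        #include <sys/resource.h>

        /* Print the peak resident set size of the current process.
           Units: kilobytes on Linux, bytes on BSD/Darwin.           */
        static void
        print_peak_rss (const char *label)
        {
          struct rusage ru;
          if (getrusage (RUSAGE_SELF, &ru) == 0)
            printf ("%s: ru_maxrss = %ld\n", label, (long) ru.ru_maxrss);
        }

Calling something like this at intervals from the test only shows the peak, so it reveals steady growth rather than memory being returned, but that is usually enough to spot a leak.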
>
> Hi Sukanta,
>
> I think I got a clue of what could be the problem:
>
> In order to send/receive arrays, what is currently done is to create a derived contiguous datatype:
>
>         MPI_Datatype fortvec;
>         MPI_Type_contiguous (nitem, TSnd, &fortvec);
>         MPI_Type_commit (&fortvec);
>
> Then the array data is sent via
>
>         info =  MPI_Send (LBNDA1, 1, fortvec, rankrec_ptr[i], tanktag[4], comm);
>
> Once the communication is done, the datatype should be cleared with
>
>         MPI_Type_free (&fortvec);
>
> but this latter call seems to be missing.
> The leak is very small (1 word per message sent), but over a large number of iterations
> this may be causing at least part of the memory usage you reported.
>
> I see two possible fixes:
>
>  * add a call to MPI_Type_free for each MPI_Type_commit in the code (see the sketch after this list)
>  * get rid of the derived datatype and use
>
>        MPI_Send (LBNDA1, nitem, TSnd, rankrec_ptr[i], tanktag[4], comm);
>
>    instead
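
A minimal sketch of what the first option could look like, i.e. pairing every MPI_Type_commit with an MPI_Type_free once the send has completed. The helper name send_contiguous and its arguments are placeholders for illustration, not the package's actual code:

        #include <mpi.h>

        /* Option 1: build the derived contiguous datatype, send,
           then free it so nothing accumulates per message.        */
        static int
        send_contiguous (void *buf, int nitem, MPI_Datatype TSnd,
                         int dest, int tag, MPI_Comm comm)
        {
          MPI_Datatype fortvec;
          MPI_Type_contiguous (nitem, TSnd, &fortvec);
          MPI_Type_commit (&fortvec);

          int info = MPI_Send (buf, 1, fortvec, dest, tag, comm);

          /* Without this call the committed datatype is leaked on
             every message, which adds up over many iterations.    */
          MPI_Type_free (&fortvec);

          return info;
        }

From the caller's point of view both options should behave the same; the second one simply avoids the extra datatype bookkeeping.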
>
> The modified version of the package in the attachment implements the latter option;
> could you please check whether it works any better for you?
>
> Does anyone see a good reason to prefer the former strategy of creating a derived
> contiguous datatype each time an array is sent?
>
> c.



--
Sukanta Basu
Associate Professor
North Carolina State University
http://www4.ncsu.edu/~sbasu5/

