help-octave

Re: Parallel and Large Results


From: Alberto Simões
Subject: Re: Parallel and Large Results
Date: Fri, 21 Feb 2014 21:21:07 +0000

Hello, Olaf

Thank you for your answer. It is always best to ask first, as there might be something we do not know :)
So, yes, for now I will iterate over a set of chunks to see if I can gain some time.

Cheers,
ambs


On Fri, Feb 21, 2014 at 8:23 PM, Olaf Till <address@hidden> wrote:
On Fri, Feb 21, 2014 at 07:38:00PM +0000, Alberto Simões wrote:
> Hello,
>
>
> On Fri, Feb 21, 2014 at 7:29 PM, Olaf Till <address@hidden> wrote:
>
> > On Thu, Feb 20, 2014 at 08:01:01PM +0000, Alberto Simões wrote:
> >
> > > I was able to make it work, but given that each iteration result is a
> > > huge cell with matrices (about 20 MB each), my machine runs out of
> > > memory before all processes finish their work.
> > >
> > > What I need is a way to "reduce" results during the parallelism (and
> > > not just at the end). These matrices should all be summed up later, so
> > > the reduce algorithm is associative, and therefore I can do it
> > > whenever I like.
> >
> > I don't understand. Could you explain in more detail?
> >
>
> Sure. Basically, each separate process computes a big matrix. This is
> the time/CPU-consuming task, and the one I want to parallelize.
>
> I have 2000 of these tasks, so parcellfun will try to create a cell
> array with 2000 of those matrices.
>
> What I want at the end is to sum up all those matrices together. I
> can't wait until they are all computed, or the memory gets exhausted.
> That's why I would like to keep "summing up" the resulting matrices
> into some "global" variable.
>
> At the moment, the only idea I have is to compute, say, 100 of these
> matrices, sum them all, compute 100 more, and so on.
> But I was expecting a more effective solution.

I'm afraid we currently have no ready-made function or syntax to do
what you want, so your idea is probably your best solution. I don't
think it will be ineffective if the computation of a single matrix is
time-consuming enough.
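
The chunked approach could be sketched as below. This is a minimal toy sketch, not code from the thread: `compute_matrix` here is a tiny stand-in for the real per-task computation (which produces ~20 MB matrices), and the chunk/task counts are scaled down from the real 2000/100. Only one chunk of results is ever alive at a time, and the running sum is folded in immediately after each chunk.

```octave
pkg load parallel;                  % parcellfun comes from the parallel package

% hypothetical stand-in for the expensive per-task computation
compute_matrix = @(i) i * ones (3);

ntasks = 20;                        % 2000 in the real problem
chunk  = 5;                         % 100 in the real problem
nproc  = 2;                         % number of worker processes

total = zeros (3);                  % running sum of all result matrices
for first = 1:chunk:ntasks
  last = min (first + chunk - 1, ntasks);
  % compute one chunk of matrices in parallel; at most 'chunk'
  % result matrices are held in memory at once
  results = parcellfun (nproc, compute_matrix, num2cell (first:last), ...
                        "UniformOutput", false);
  % fold the chunk into the running sum right away; since summation
  % is associative, chunk boundaries do not affect the final result
  for k = 1:numel (results)
    total += results{k};
  endfor
endfor
```

Larger chunks amortize the per-chunk startup cost of parcellfun, while smaller chunks cap peak memory; the sweet spot depends on how long each matrix takes to compute relative to its size.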

Of course there is also the "hard way" of writing a specialized
parallelization, which possibly isn't too hard, since one could copy
and locally modify parcellfun to do this special job of summing
up. But a more general solution for distribution with Octave or a
package would not be trivial.

Olaf

--
public key id EAFE0591, e.g. on x-hkp://pool.sks-keyservers.net



--
Alberto Simões
