Re: octave server ?
16 Nov 2000 17:52:29 +0100
Gnus/5.0806 (Gnus v5.8.6) Emacs/20.7
>>>>> "Daniel" == Daniel Heiserer <address@hidden> writes:
Daniel> Is this MPI dynamic in the case that octave can start
Daniel> clients on different hosts "on demand" or do you have to
Daniel> tell this octave when you launch it?
A "dynamic" scheme could be implemented, but I have not done so. The
simplest thing to do is to launch all your octave processes at once
and therefore have a fixed, known number of clients to work with. A
"static" application scheme like that is still very general. The
canonical MPI application launches an octave process for every CPU in
your system. After launch, you can decide which subset of processes
to use for a given task.
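For illustration, the rank arithmetic behind that static scheme can be sketched in Python (the function name and the convention that rank 0 is the server are assumptions for this sketch, not part of the MPI patch):

```python
def ranks_for_task(world_size, task_size):
    """Pick a subset of process ranks for one task in a static scheme.

    With all processes launched up front (one per CPU), a task that
    needs fewer workers simply uses the lowest-numbered client ranks;
    the rest sit idle.  Rank 0 is reserved here for the server.
    """
    if task_size > world_size - 1:
        raise ValueError("not enough client processes launched")
    # Clients are ranks 1..world_size-1; take the first task_size of them.
    return list(range(1, 1 + task_size))

# Example: 8 processes launched (1 server + 7 clients); a task
# needing 3 workers uses clients 1, 2 and 3.
print(ranks_for_task(8, 3))  # -> [1, 2, 3]
```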
On the other hand, people do often design dynamic application schemes
using MPI_Comm_spawn() to start new processes within a running MPI
application. If you can show me that such a dynamic scheme is
important, I might be able to free up some time to try to make it work.
Daniel> What happens if a client dies? Will the master die as well?
Depends on how robustly things are programmed. Basically, the absence
of a client isn't fatal until you try to send a message to it. In my
implementation, the server keeps track of the list of valid client
"ranks" (unique integers identifying each process in the MPI
application). You can selectively kill clients and the application
should continue unhindered.
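That server-side bookkeeping amounts to a table of live ranks that is consulted before every send. A hypothetical Python sketch of the idea (the class and method names are invented for illustration; the actual patch is written in C++ against the MPI library):

```python
class ClientTable:
    """Track which client ranks are still valid, as the server does."""

    def __init__(self, world_size):
        # Rank 0 is the server; clients are ranks 1..world_size-1.
        self.valid = set(range(1, world_size))

    def mark_dead(self, rank):
        # Forget a client that was killed; nothing is fatal yet.
        self.valid.discard(rank)

    def can_send(self, rank):
        # Sending to an invalid rank is the only fatal mistake.
        return rank in self.valid

table = ClientTable(4)      # server + clients 1, 2, 3
table.mark_dead(2)          # client 2 was killed
print(sorted(table.valid))  # -> [1, 3]
print(table.can_send(2))    # -> False
```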
Daniel> Do you have to take care of data transfer to a client or
Daniel> is that done automatically by the MPI library?
You should read the MPI.readme I sent along with the patch
announcement about two weeks ago. You can find this all in the
archives of the octave-sources mailing list at www.octave.org.
The server sends data back and forth to clients via calls to
mpi_setval() and mpi_getval(). You can send or receive any octave
variable to or from any client. Internally I use the pre-existing
load() and save() functions, grafted to MPI sends and receives.
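That grafting amounts to serializing a variable to text, shipping the bytes, and parsing them back on the other side. A rough Python analogue of the round trip (no real MPI transport here; the message is just a byte buffer, and repr/literal_eval stand in for Octave's save() and load()):

```python
import ast
import io

def mpi_like_send(var):
    """Serialize a value to bytes, as save() would write it to a stream."""
    buf = io.StringIO()
    print(repr(var), file=buf)  # stand-in for Octave's text save() format
    return buf.getvalue().encode()

def mpi_like_recv(payload):
    """Parse received bytes back into a value, as load() would."""
    return ast.literal_eval(payload.decode())

# Round-trip a variable as if it crossed an MPI message boundary.
message = mpi_like_send([1, 2.5, "hello"])
print(mpi_like_recv(message))  # -> [1, 2.5, 'hello']
```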
Daniel> Can you control what a client does, or is that done by the
Daniel> library?
You can send arbitrary octave commands to each client via mpi_eval().
Daniel> I see many applications for this kind:
Daniel> 1) More or less seamless parallelization. I have a loop in
Daniel> my script which does not change data between iterations:
Daniel>     for j=1:1000
Daniel>       a(j) = max(max(inv(rand(1000))));
Daniel>     end
Daniel> This could be done by octave _automatically_ on 1000 clients,
Daniel> or as many as I have started/allowed.
Implicit parallel execution is tricky. I think we should develop some
experience with doing it explicitly first. In my patches, I included
some example control scripts that show how to split up trivially
parallelizable jobs like the one you describe above.
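The splitting itself is just index arithmetic. A minimal Python sketch of dividing Daniel's j=1:1000 loop among N clients (the chunking policy here is one reasonable choice for illustration; the example scripts shipped with the patch may differ):

```python
def split_range(n, n_clients):
    """Split iterations 1..n into n_clients contiguous chunks.

    Returns a list of (start, stop) pairs, 1-based and inclusive,
    with the first n % n_clients chunks one element longer.
    """
    base, extra = divmod(n, n_clients)
    chunks, start = [], 1
    for c in range(n_clients):
        size = base + (1 if c < extra else 0)
        chunks.append((start, start + size - 1))
        start += size
    return chunks

# Each client would then be told, via something like mpi_eval():
#   for j=START:STOP, a(j) = max(max(inv(rand(1000)))); end
print(split_range(1000, 4))
# -> [(1, 250), (251, 500), (501, 750), (751, 1000)]
```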
I think another very attractive concept is linking in one or more of
the MPI-parallelized linear algebra libraries (ScaLAPACK, BLACS,
PLAPACK). Then instead of calling, e.g., eig(), you could call
eig_mpi(), and the job would be distributed among the available
clients. The FFTW Fourier transform library also comes with MPI-ready
functions. There's a lot of cool stuff that could be done!
Daniel> 2) More a real talk, not automatic: My basic concept
Daniel> was to have one master and different clients, where the
Daniel> master tells each client in detail what it has to do. They
Daniel> don't have to share the same data. This would even be
Daniel> better, because I would like each client to hold a lot
Daniel> of different data in memory.
You can do this now, using the MPI patches from octave-sources.
Octave is freely available under the terms of the GNU GPL.
Octave's home on the web: http://www.octave.org
How to fund new projects: http://www.octave.org/funding.html
Subscription information: http://www.octave.org/archive.html