
Re: [Yafray-devel] Fork-fix/command-line patch


From: Diego Pino Navarro
Subject: Re: [Yafray-devel] Fork-fix/command-line patch
Date: Tue, 2 Mar 2004 14:25:25 -0300

Hi Steve,

Maybe my question is (very) stupid, but do you think there is a way to make yafray work under Apple's Xgrid cluster app, using your fork/thread modification plus some extra coding? I mean splitting a single render process into several independent threads or forks that Xgrid can effectively understand and send to every node in Apple's grid (and maybe, as a result, getting several outputs (images) that can be batch-joined together later). So far I have only tested Xgrid by running a single Unix application over the network with different parameters for each node, producing independent data results that can be joined together later (a math simulation), but it looks like Xgrid could also make use of a threaded multi-processor app and distribute each job to a different node. I asked Jandro about this on the web forum, but he said it would take a lot of coding. (By the way, Xgrid doesn't use MPI.)

I hope it's possible. Anyway, I will try to compile yafray with your patch on my G4 and give it a test.

Thanks a lot



Diego Pino N




On Thursday, February 26, 2004, at 07:44 PM, Steve Smith wrote:

On Thu, 2004-02-26 at 19:18, Alejandro Conty Estevez wrote:
> If the patch works ok, or at least compiles ok, I'll merge it, no prob.
> Will check as soon as possible.

Cool, thanks.

> For the MPI version maybe it is better to keep a separate tree, since
> it involves polluting too much code, right? We can put up a CVS
> repository for that, and info on the main site, if you like.

Actually it should only be a case of adding a couple of files and
tweaking the makefile and configure scripts.  The way I've done the
mono/fork/thread stuff is to use inheritance to keep a clear separation
of the functionality.  With the latest patch the inheritance tree looks
like:

              scene_t (mono)
                     |
          -----------------------
          |                     |
     forkscene_t          threadscene_t

With the MPI stuff it will probably look more like:


              scene_t (mono)
                     |
          -----------------------
          |                     |
multiproc_t (abstract)    threadscene_t
          |
     ----------------
     |              |
forkscene_t     mpiscene_t

(Hopefully these diagrams work)

multiproc_t contains the common functionality (the actual child-worker
code), while the descendants override the communication functions (i.e.
what is currently in ipc.cc will go into forkscene_t, and MPI-specific
stuff will go into mpiscene_t).

Anyway, I'll work on the principle that the existing patch will go in
and produce a proof-of-concept of the MPI stuff and see how you feel
about including it.

Cheers,
Steve/Tarka



_______________________________________________
Yafray-devel mailing list
address@hidden
http://mail.nongnu.org/mailman/listinfo/yafray-devel





