Re: [fluid-dev] Multi core support not so great


From: David Henningsson
Subject: Re: [fluid-dev] Multi core support not so great
Date: Mon, 28 Sep 2009 20:31:19 +0200
User-agent: Thunderbird 2.0.0.23 (X11/20090817)

address@hidden wrote:
> I finished implementing a first pass at multi-core support.  

Oh, now I really must order a multi-core computer ;-)

> While it
> was a fun task, it didn't really yield the kind of performance I was
> hoping for.  For those interested here is a description of the current
> logic:
> 
> Added a synth.cpu-cores setting.

Perhaps "synth.workerthreads" would be more clear.

> Additional core threads are created in new_fluid_synth()
> (synth.cpu-cores - 1).
> Primary synthesis thread signals secondary core threads when there is work.
> Primary and secondary synthesis threads process individual voices in
> parallel.
> Primary thread mixes all voices to left/right, reverb and chorus buffers.
> 
> Having multiple cores really just gives you the ability to have more
> voices in the case of live performance (before maxing the CPU) or
> *should* make your -F (fast MIDI render to file) operations go faster.
>  The reason I say *should* is because it really depends on how complex
> the MIDI file is.  If there aren't a lot of voices, it may in fact be
> slightly worse performance.  Best case I have seen so far was about a
> 20% increase in speed (for the -F render case), which is something. 
> Interestingly the 2 cores were still not quite maxed.

Two things come to mind:

1) does the audio buffer size matter for the performance in this case?

2) if you run single-threaded, is one core maxed?
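
For reference, the voice-per-thread scheme described above could look
roughly like the following. This is only a minimal sketch using pthread
barriers; render_voices() and mix_voices_to_output() are placeholders
rather than the real FluidSynth routines, and thread teardown and error
handling are left out:

#include <pthread.h>

#define NUM_THREADS 2                    /* corresponds to synth.cpu-cores */

static pthread_barrier_t start_barrier;  /* "a new block is ready"        */
static pthread_barrier_t end_barrier;    /* "all voices are rendered"     */

/* placeholders for the real per-voice rendering and final mixdown */
extern void render_voices(int thread_index, int num_threads);
extern void mix_voices_to_output(void);

static void *worker_main(void *arg)
{
    int index = (int)(long)arg;
    for (;;) {
        pthread_barrier_wait(&start_barrier);  /* wait for the primary thread */
        render_voices(index, NUM_THREADS);     /* render this thread's share  */
        pthread_barrier_wait(&end_barrier);    /* report back                 */
    }
    return NULL;
}

/* called once, e.g. from new_fluid_synth() */
static void start_workers(void)
{
    pthread_barrier_init(&start_barrier, NULL, NUM_THREADS);
    pthread_barrier_init(&end_barrier, NULL, NUM_THREADS);
    for (long i = 1; i < NUM_THREADS; i++) {
        pthread_t t;
        pthread_create(&t, NULL, worker_main, (void *)i);
    }
}

/* called by the primary synthesis thread for every audio block */
static void synth_one_block(void)
{
    pthread_barrier_wait(&start_barrier);  /* release the workers              */
    render_voices(0, NUM_THREADS);         /* primary thread renders its share */
    pthread_barrier_wait(&end_barrier);    /* wait until all voices are done   */
    mix_voices_to_output();                /* mix to L/R, reverb, chorus       */
}

Barriers are only used here to keep the sketch short; any signal/wait
handshake between the primary and worker threads does the same job.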

> One issue that I have stumbled upon, is in regards to thread
> priorities.  We want the secondary core threads to be running at the
> same priority as the primary synthesis thread, for round robin sort of
> response (though it may not matter that much if they are on separate
> CPUs).  In the case of -F fast rendering you definitely don't want your
> processes running high priority (especially on Linux).  In the live case
> though, the audio driver will be running high priority, so you want the
> secondary core threads also running high priority.  The issue is, that
> currently the secondary core threads are created in new_fluid_synth(),
> while the synthesis "thread" is created by audio drivers or via other
> means.  There needs to be some way to ensure that the secondary threads
> end up having the same priority.  Any ideas?  Perhaps a one time
> creation of the secondary threads within the fluid_synth_one_block
> routine and an attempt to make them identical in priority, would make
> sense.

I can imagine that starting threads has a higher upper bound in time
than, e.g., malloc, so starting them from fluid_synth_one_block will
probably lead to an underrun (only once, but still). I would prefer to
have the audio drivers create the additional threads. After all, they
are the ones who know how to create threads with the right priority,
right?
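
If the secondary threads do end up being created from the synthesis
thread instead, one option (a sketch, assuming POSIX scheduling is
available) is to copy the caller's policy and priority explicitly when
creating them:

#include <pthread.h>
#include <sched.h>

/* Create a worker thread with the same scheduling policy and priority
 * as the calling thread (e.g. the audio/synthesis thread). Error
 * handling is omitted; running at SCHED_FIFO priority still requires
 * the usual privileges. */
static int create_worker_like_caller(pthread_t *thread,
                                     void *(*routine)(void *), void *arg)
{
    pthread_attr_t attr;
    struct sched_param param;
    int policy, err;

    /* read the policy/priority of the thread we are called from */
    pthread_getschedparam(pthread_self(), &policy, &param);

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, policy);
    pthread_attr_setschedparam(&attr, &param);

    err = pthread_create(thread, &attr, routine, arg);
    pthread_attr_destroy(&attr);
    return err;
}

If the audio driver creates the workers, as suggested above, the same
effect falls out of letting them inherit the creator's scheduling
(PTHREAD_INHERIT_SCHED), since the driver thread already runs at the
right priority.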

> In summary:
> I realized through all this, that optimization is probably more
> important than multi-core support.  Enabling multi-core support
> introduces additional overhead, so unless you are trying to get more
> voices in the realtime case or render MIDI files slightly faster, you're
> better off not enabling it.
> 
> So now that I've learned my lesson, should I commit the code? ;)  Does it
> seem worth it?  At the moment there may be some very minimal additional
> overhead in the single core case (compared to before), but that is
> probably so minimal as to be lost in the noise.

If all overhead is passing through some "if (cpucores > 1)" lines, I
would say it's nothing to worry about at all.

I think you should commit it, but call it an experimental feature at
this point. It could be a good ground for better multi-threading in the
future. I'm thinking about pre-rendering a few buffers for every voice,
with some rollback in case the voice should change (i.e. note-off events,
modulators etc.). That would bring more stability, but it is a very
long-term goal and not something I plan to implement in the near future.

Btw, if you're up to implementing features, I would suggest giving
libaudiofile / libsndfile a go. "How do I make a wave file" is a question
that comes up every now and then.
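
For what it's worth, the libsndfile side of that is small. A minimal,
standalone sketch (not tied to the fluidsynth code at all) that writes a
16-bit stereo WAV file from interleaved float samples:

#include <math.h>
#include <stdio.h>
#include <string.h>
#include <sndfile.h>

int main(void)
{
    SF_INFO info;
    memset(&info, 0, sizeof(info));
    info.samplerate = 44100;
    info.channels   = 2;
    info.format     = SF_FORMAT_WAV | SF_FORMAT_PCM_16;

    SNDFILE *file = sf_open("out.wav", SFM_WRITE, &info);
    if (!file) {
        fprintf(stderr, "sf_open failed: %s\n", sf_strerror(NULL));
        return 1;
    }

    /* one second of a 440 Hz sine on both channels, samples in [-1, 1] */
    float frame[2];
    for (int i = 0; i < 44100; i++) {
        float s = 0.5f * sinf(2.0f * 3.14159265f * 440.0f * i / 44100.0f);
        frame[0] = frame[1] = s;
        sf_writef_float(file, frame, 1);  /* one interleaved frame at a time */
    }

    sf_close(file);
    return 0;
}

Build with something like "gcc wavtest.c -lsndfile -lm"; other output
formats are just a matter of changing the SF_FORMAT_* flags, within what
the installed libsndfile supports.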

// David



