fluid-dev

Re: [fluid-dev] Multi core support not so great


From: josh
Subject: Re: [fluid-dev] Multi core support not so great
Date: Mon, 28 Sep 2009 09:37:01 -0700
User-agent: Internet Messaging Program (IMP) H3 (4.1.6)

For the iPhone, I'm considering adding a build-time option that builds a single-threaded FluidSynth without glib support. This would probably also mean no shell, or some API would need to be added so the shell can be used in a non-blocking manner (so it could be called from within the same thread as the synthesis routines; this might already be present).

Could you provide some more details on how you will be using FluidSynth on the iPhone? It doesn't have library support, but it does have thread support, correct? Would you want to use the shell, or are you planning to embed the synthesis core routine in some other application and send MIDI events directly via C functions? Would doing all of this in one thread work?
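
For reference, driving the whole synthesizer from a single thread through the public C API looks roughly like the sketch below (just an illustration; the buffer size, loop count and surrounding structure are assumptions, not a recommended iPhone integration):

#include <fluidsynth.h>

/* Minimal single-thread embedding sketch: no audio driver, no shell.
 * The host application sends MIDI events and pulls rendered audio from
 * the same thread. Paths, buffer size and timing are illustrative only. */
void render_one_note(const char *soundfont_path)
{
    fluid_settings_t *settings = new_fluid_settings();
    fluid_synth_t *synth = new_fluid_synth(settings);
    float left[64], right[64];
    int i;

    fluid_synth_sfload(synth, soundfont_path, 1);
    fluid_synth_noteon(synth, 0, 60, 100);        /* channel 0, middle C */

    for (i = 0; i < 100; i++)                     /* pull ~100 blocks of 64 frames */
        fluid_synth_write_float(synth, 64, left, 0, 1, right, 0, 1);

    fluid_synth_noteoff(synth, 0, 60);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
}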

The current added overhead is an if statement, "if (synth->cores > 1)", which gets executed for each 64 samples of synthesized audio, plus function call overhead for fluid_voice_mix(), which gets called for 64 samples of each voice (it used to be inlined into fluid_voice_write).

While this is probably pretty minimal, it could amount to something. I'll try to figure out how to remove this overhead, so that having multi-core support won't impact anything when it's not enabled.
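
One way to avoid that per-block test might be to choose the render path once, when synth.cpu-cores is read, instead of branching every block. A rough sketch (the function-pointer type and helper names are made up for illustration, not actual FluidSynth symbols):

/* Sketch: select the block-render path once instead of testing
 * synth->cores for every 64-sample block. All names below are
 * hypothetical; only the idea matters. */
typedef void (*render_block_fn)(void *synth);

static void render_block_single(void *synth)
{
    /* single-core path: voices written and mixed inline, as before */
}

static void render_block_parallel(void *synth)
{
    /* multi-core path: signal core threads, mix via fluid_voice_mix() */
}

/* In new_fluid_synth(), after reading synth.cpu-cores:
 *     render_block = (cores > 1) ? render_block_parallel : render_block_single;
 * and in the per-block routine simply:
 *     render_block(synth);
 */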

As a side note, I think some of the poor results I have been getting are likely due to the use of mutexes for locking between the core threads. I'm going to attempt a mostly lockless version and see how much it improves things.
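
The rough idea is for each synthesis thread to claim voices with an atomic counter instead of taking a mutex per hand-off. A sketch with C11 atomics and made-up names (not the actual implementation):

#include <stdatomic.h>

/* Sketch of a "mostly lockless" voice dispatch: every core thread
 * atomically claims the next unprocessed voice index for the current
 * 64-sample block, so no mutex is held while voices are rendered. */
typedef struct {
    atomic_int next_voice;   /* next unclaimed voice index */
    int num_voices;          /* active voices in this block */
} voice_dispatch_t;

/* Run by the primary and each secondary core thread for one block. */
static void process_claimed_voices(voice_dispatch_t *d)
{
    for (;;) {
        int i = atomic_fetch_add(&d->next_voice, 1);
        if (i >= d->num_voices)
            break;                /* nothing left to claim */
        /* render voice i into its private buffer here */
    }
}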

Josh



Quoting "S. Christian Collins" <address@hidden>:
Quick question: would that overhead end up being a factor on something
like the iPhone, where the CPU power is quite limited?

-~Chris

address@hidden wrote:
I finished implementing a first pass at multi-core support. While it was a fun task, it didn't really yield the kind of performance I was hoping for. For those interested, here is a description of the current logic (a rough sketch of the thread hand-off follows the list):

- Added a synth.cpu-cores setting.
- Additional core threads are created in new_fluid_synth() (synth.cpu-cores - 1).
- The primary synthesis thread signals the secondary core threads when there is work.
- Primary and secondary synthesis threads process individual voices in parallel.
- The primary thread mixes all voices to the left/right, reverb and chorus buffers.
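
Roughly, the per-block hand-off between the primary and secondary threads looks like the sketch below (written with plain pthreads and made-up names for readability; the actual code uses glib primitives and different names):

#include <pthread.h>

/* Illustrative sketch of the per-block hand-off. Not the real code. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  work_ready;   /* primary -> secondaries: new block */
    pthread_cond_t  work_done;    /* secondaries -> primary: all finished */
    unsigned int    generation;   /* bumped once per 64-sample block */
    int             pending;      /* secondary threads still rendering */
} core_sync_t;

/* Secondary core thread body: wait for a block, render its share of the
 * voices, then tell the primary thread when everyone is finished. */
static void *core_thread_run(void *data)
{
    core_sync_t *sync = data;
    unsigned int seen = 0;

    for (;;) {
        pthread_mutex_lock(&sync->lock);
        while (sync->generation == seen)            /* no new block yet */
            pthread_cond_wait(&sync->work_ready, &sync->lock);
        seen = sync->generation;
        pthread_mutex_unlock(&sync->lock);

        /* render this thread's share of the active voices here */

        pthread_mutex_lock(&sync->lock);
        if (--sync->pending == 0)
            pthread_cond_signal(&sync->work_done);  /* primary can mix now */
        pthread_mutex_unlock(&sync->lock);
    }
    return NULL;
}

/* Primary thread, per block (also a sketch): set pending to the number of
 * secondary threads, bump generation, broadcast work_ready, render its own
 * share of voices, wait on work_done until pending reaches zero, then mix
 * all voice buffers into the left/right, reverb and chorus buffers. */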

Having multiple cores really just gives you the ability to have more voices in the case of live performance (before maxing out the CPU), or it *should* make your -F (fast MIDI render to file) operations go faster. The reason I say *should* is that it really depends on how complex the MIDI file is. If there aren't a lot of voices, performance may in fact be slightly worse. The best case I have seen so far was about a 20% increase in speed (for the -F render case), which is something. Interestingly, the two cores were still not quite maxed out.

One issue that I have stumbled upon is thread priorities. We want the secondary core threads to run at the same priority as the primary synthesis thread, for a round-robin sort of response (though it may not matter that much if they are on separate CPUs). In the -F fast-rendering case you definitely don't want your processes running at high priority (especially on Linux). In the live case, though, the audio driver will be running at high priority, so you want the secondary core threads also running at high priority. The issue is that currently the secondary core threads are created in new_fluid_synth(), while the synthesis "thread" is created by audio drivers or via other means. There needs to be some way to ensure that the secondary threads end up having the same priority. Any ideas? Perhaps a one-time creation of the secondary threads within the fluid_synth_one_block routine, along with an attempt to make them identical in priority, would make sense.
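
For example, a one-time lazy creation could copy the scheduling attributes of whichever thread calls into the block routine, something like this (plain pthreads, illustrative names, error handling omitted; not the actual code):

#include <pthread.h>
#include <sched.h>

/* Sketch: start the secondary core threads with the same scheduling
 * policy and priority as the calling synthesis thread (the audio driver
 * thread in the live case, a normal-priority thread for -F rendering). */
static void start_core_threads_like_caller(pthread_t *threads, int count,
                                           void *(*body)(void *), void *arg)
{
    int policy;
    struct sched_param param;
    pthread_attr_t attr;

    pthread_getschedparam(pthread_self(), &policy, &param);

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, policy);
    pthread_attr_setschedparam(&attr, &param);

    for (int i = 0; i < count; i++)
        pthread_create(&threads[i], &attr, body, arg);

    pthread_attr_destroy(&attr);
}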

In summary:
I realized through all this that optimization is probably more important than multi-core support. Enabling multi-core support introduces additional overhead, so unless you are trying to get more voices in the realtime case or render MIDI files slightly faster, you're better off not enabling it.

So, now that I've learned my lesson: should I commit the code? ;) Does it seem worth it? At the moment there may be some very minimal additional overhead in the single-core case (compared to before), but that is probably so minimal as to be lost in the noise.

Cheers.
Josh


