Re: [fluid-dev] Thread safety
Thu, 04 Jun 2009 22:49:40 +0200
> Quoting David Henningsson <address@hidden>:
>>> I think ideas like these are good. Having each voice be processed and
>>> then mixed would only require one buffer (64 samples) per voice and
>>> would not require much extra CPU. This could also facilitate moving
>>> to the multi-thread voice processing model.
>> I guess you would need at least an audio buffer's worth of samples to
>> make any difference in practice, though. While this would add
>> stability, it would also add some CPU consumption and increase memory
>> usage (and bandwidth). So perhaps this should be something that can be
>> enabled or disabled.
> There probably wouldn't be any extra data copies, since FluidSynth is
> currently rendering each voice to a temporary buffer, which is then
> summed into the final buffer. Perhaps rather than a buffer per voice,
> it could be a buffer per rendering thread. In the single thread case,
> it would work pretty much like it is now. If someone has multiple CPU
> cores, they could add additional threads (long term goal).
It seems like you're thinking that we pre-render one FluidSynth buffer
(64 samples) ahead and add that to the latency. That's a simpler
solution than the one I had in mind: I was thinking that we should
pre-render several buffers ahead, maybe 200 ms or whatever it takes to
protect us from unexpected CPU spikes in other applications. We would
then have to discard the pre-rendered buffers whenever an incoming MIDI
event changes a voice, since in that case we can no longer pre-mix them.
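To make that idea concrete, here is a minimal sketch; the sizes and
names are purely illustrative, not FluidSynth code. The synthesis side
renders ahead into a ring of 64-sample buffers, and when an incoming
MIDI event changes a voice, everything rendered ahead of it is stale and
gets thrown away, to be re-rendered with the new voice state:

```c
#define FRAMES_PER_BUFFER 64
#define PRERENDER_BUFFERS 150   /* roughly 200 ms at 48 kHz */

typedef struct {
    float data[PRERENDER_BUFFERS][FRAMES_PER_BUFFER];
    int first;   /* next buffer the audio thread will consume */
    int count;   /* how many valid pre-rendered buffers remain */
} prerender_ring_t;

/* Synthesis side: render one buffer ahead if there is room.
 * Returns 1 if a buffer was rendered, 0 if the ring is full. */
int prerender_one(prerender_ring_t *r)
{
    if (r->count == PRERENDER_BUFFERS)
        return 0;
    int slot = (r->first + r->count) % PRERENDER_BUFFERS;
    for (int i = 0; i < FRAMES_PER_BUFFER; i++)
        r->data[slot][i] = 0.0f;    /* voice rendering would go here */
    r->count++;
    return 1;
}

/* MIDI side: an incoming event changed a voice, so the pre-mixed
 * buffers ahead of it can no longer be used. */
void invalidate_prerender(prerender_ring_t *r)
{
    r->count = 0;
}
```

The trade-off mentioned above is visible here: the ring costs memory and
bandwidth, and every invalidation costs extra CPU to re-render.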
>>> I've been thinking about what the ideal FluidSynth thread model would
>>> be for lock-free (or as close to lock free as possible) and of course
>>> crash free ;)
>>> Here are some initial thoughts, though perhaps faulty by design and
>>> lacking completeness.
>> Right, and we should not forget about the embedded / rendering case,
>> which probably still would benefit from being single-threaded. (And
>> also, they must be predictable.)
> True. I've been putting a bit more work into analysis of the FluidSynth
> code base (in particular fluid_voice.c and fluid_synth.c). I'm starting
> to dislike the idea of queuing every event to the synthesis thread,
> since that just adds more overhead. I'm also realizing how complicated
> it is to make FluidSynth truly thread safe without locking in the
> audio thread. One good thing, though, is that the voice processing
> itself is self contained and does not rely on any variables outside of
> the FluidSynth voice instance.
> The public fluid_voice_* functions could be restricted to only being
> usable from the synthesis thread. The current biggest use of this API
> is for SoundFont loaders. In particular, note-on events trigger the
> creation of voices. If the SoundFont loader note-on callback was always
> executed from within the synthesis thread, then that would satisfy the
> restriction.
> For other publicly available functions, which are expected to be thread
> safe, events could be queued as we have been discussing. For the MIDI
> thread, sequencer and player though, we might be able to convert them
> from being individual threads to being MIDI event processing callbacks
> which are executed from within the synthesis thread.
> That is just at the idea stage though and I'm not sure yet what exactly
> this would entail and how it might affect the current FluidSynth API.
> Queuing events for the non-synthesis-thread case, though, is not so
> trivial. The current API exposes certain functions which complicate
> matters too. I think in order to do things right, some API would need
> to be deprecated or its use restricted. Having FluidSynth event-related
> functions queue MIDI events would probably be the easiest solution; the
> events could then be processed exactly like other MIDI sources in the
> synthesis thread.
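As a rough sketch of the callback idea for the MIDI player and
sequencer (all names here are hypothetical, not actual FluidSynth API):
instead of running as their own threads, MIDI sources register a
callback that the synthesis thread invokes once per audio buffer, so
their events are delivered synchronously with rendering:

```c
#define MAX_MIDI_SOURCES 4

typedef void (*midi_source_cb)(void *data, unsigned int frame_time);

static midi_source_cb sources[MAX_MIDI_SOURCES];
static void *source_data[MAX_MIDI_SOURCES];
static int n_sources = 0;

/* Register a MIDI source; returns 0 on success, -1 if full. */
int register_midi_source(midi_source_cb cb, void *data)
{
    if (n_sources == MAX_MIDI_SOURCES)
        return -1;
    sources[n_sources] = cb;
    source_data[n_sources] = data;
    n_sources++;
    return 0;
}

/* Called by the synthesis thread at the start of each audio buffer;
 * frame_time is the timestamp of the buffer about to be rendered. */
void run_midi_sources(unsigned int frame_time)
{
    for (int i = 0; i < n_sources; i++)
        sources[i](source_data[i], frame_time);
}

/* A trivial source used only for demonstration: counts invocations. */
int demo_calls = 0;
void demo_source(void *data, unsigned int frame_time)
{
    (void)data; (void)frame_time;
    demo_calls++;
}
```

Since the callbacks run inside the synthesis thread, they could call the
voice API directly without any queuing or locking.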
Let's list the threads and the cases here, before I get lost.
Case 1: Fast-render, single CPU core. One thread only. No need for any
queues or locking.
Case 2: Fast-render, multiple CPU cores. One could benefit from using
additional threads here, but I'm not sure that we should care about that
case.
Case 3: Live playing, single CPU core. One MIDI thread, one audio
thread. Would we benefit from additional threads? If not, who should do
the rendering work, the MIDI thread or the audio thread?
Case 4: Live playing, multiple CPU cores. One MIDI thread, one audio
thread, several worker threads. Is that what you call the "synthesis
thread"?
I'm somewhat following your discussion about queues and threads, but I'm
a bit unsure which cases the different sections apply to.
>>> * Make active voices and voice pool private to the synthesis thread.
>>> * Parameter updates (MIDI events, etc) go through a lock free FIFO
>>> * Voices are allocated outside of the synthesis process, initialized and
>>> added to the FIFO queue for processing.
>>> * Note-off events are also appended to the event queue.
>> Do I understand you correctly, that you want to treat note-on events
>> differently from the rest of the events, based on the assumption that
>> these events are the only that will take a large amount of time?
> Good question. I like the idea of the note-on processing being done in
> a separate thread from the synthesis thread, since as you say, it can
> entail a lot of CPU consumption. If the synthesis thread ran at a
> higher priority than the MIDI note-on processing, it would tend to
> throttle the note-ons when CPU consumption approaches maximum, which
> is good. I'm not certain, though, whether this in and of itself
> warrants doing the note-on processing in a separate thread.
A problem with separating note-on events from the rest is that you must
avoid reordering. If a note-off immediately follows the note-on, the
note-off must not be processed before the note-on. I guess this is
solvable, though; it is just another thing that complicates matters a bit.
>>> A lock free FIFO
>>> with single producer and consumer is pretty trivial,
>> Will these support resizing if the FIFO gets full? And if they don't, is
>> that feature important?
> No, resizing would not be possible. The size would just be set to a
> compile-time maximum, which equates to the maximum expected number of
> events per audio buffer. I just implemented the lock-free queue code
> yesterday using glib primitives, though it is still untested.
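For reference, the general shape of such a fixed-size single-producer,
single-consumer queue might look like the sketch below. This uses C11
atomics rather than the glib primitives mentioned above, and all names
are illustrative, not the actual implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define EVENT_QUEUE_SIZE 256   /* compile-time maximum; power of two */

typedef struct {
    int type;      /* e.g. note-on, note-off, CC */
    int chan;
    int param1;
    int param2;
} midi_event_t;

typedef struct {
    midi_event_t buf[EVENT_QUEUE_SIZE];
    atomic_uint head;   /* advanced only by the consumer */
    atomic_uint tail;   /* advanced only by the producer */
} event_queue_t;

/* Producer side (MIDI thread): returns false when the queue is full,
 * in which case the event is dropped rather than the queue resized. */
bool queue_push(event_queue_t *q, midi_event_t ev)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == EVENT_QUEUE_SIZE)
        return false;
    q->buf[tail & (EVENT_QUEUE_SIZE - 1)] = ev;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side (synthesis thread): returns false when empty. */
bool queue_pop(event_queue_t *q, midi_event_t *out)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head == tail)
        return false;
    *out = q->buf[head & (EVENT_QUEUE_SIZE - 1)];
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}
```

Note that a single queue like this also sidesteps the reordering concern
raised earlier: a note-on pushed before a note-off is always popped
before it.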
That would apply to case 3 and 4 (live playing), but for case 1 and 2
(rendering) I would prefer not to have that limitation. I'm thinking
that you probably want to do a lot of initialization at time 0. But
perhaps we can avoid the queue altogether in case 1 and 2?
>>> Sure, if it improves things in the short term, go ahead and add it.
>>> Fixing FluidSynth's threading issues, and doing it right, is likely
>>> going to be a bit of a larger task than doing simple fixes. So it
>>> might be good to try and address the more severe issues, while coming
>>> up with a long-term plan.
I've done so now. I did it in two steps: first all the underlying work
that enables the sequencer to work as a buffer for MIDI threads
(revision 193), then enabling that feature for the fluidsynth executable
(revision 194). When the synth has better thread safety on its own, we
only need to revert 194.
I would really like some feedback from the community about these
changes, to ensure they don't change the responsiveness or latency, or
mess anything else up. I've tested it with my MIDI keyboard here and I
didn't notice any difference, but my setup is not optimal.
>> In an open-source project where people can suddenly disappear without
>> notice, my assumption is that taking a lot of small steps (while keeping
>> it stable) is often better than taking one big step, even if those small
>> steps sometimes mean additional work.
> True. I'm not planning on disappearing at any point soon though and I
> hope you aren't ;)
Nope, although the amount of time I can spend on this project varies.
But that is the case for all of us I guess.