Re: [fluid-dev] Thread safety
Thu, 04 Jun 2009 20:10:10 -0400
Quoting David Henningsson <address@hidden>:
It seems like you're thinking that we pre-render one fluidsynth buffer
(64 samples) ahead, and add that to the latency. That's a simpler
solution than the one I had in mind: I was thinking that we should
prerender several buffers ahead, maybe 200 ms or whatever it takes to
protect us from unexpected CPU spikes in other applications. Then we
must discard prerendered buffers if we discover that an incoming MIDI
event changes the voice, so we can't pre-mix them.
Well, pre-render is perhaps not the best term. FluidSynth currently
synthesizes audio at the audio.period-size, which is 64 samples by
default (though it ultimately ends up being rendered at the audio
driver's buffer size at a time). I'm not suggesting we change it from
what it currently is. I don't see the need to add any more buffering
or latency, beyond what the audio driver is already providing.
What I'm proposing isn't to overcome the case of CPU starvation, but
to provide lock-free, thread-safe synthesis. So I'm basically
attempting to fix the synchronization issues that FluidSynth currently
has, with as little additional overhead as practical and in a
lock-free manner (a must for the live low latency use case).
CPU starvation is an important issue, but should be dealt with
separately. Limiting the number of voices dynamically in response to
CPU usage is probably the ideal solution, and is especially important
when the audio thread is running SCHED_FIFO on a Linux system.
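A minimal sketch of what such dynamic voice limiting could look like. The function name and the load-threshold heuristic are hypothetical, not existing FluidSynth API:

```c
#include <assert.h>

/* Hypothetical helper: given a smoothed DSP-load estimate for the audio
 * thread (0.0 = idle, 1.0 = a full period of CPU time) and the configured
 * polyphony, return how many voices the synth should allow before it
 * starts killing the quietest ones. */
static int effective_polyphony(double cpu_load, int max_polyphony)
{
    const double target = 0.75;      /* try to stay below 75% DSP load */

    if (cpu_load <= target)
        return max_polyphony;        /* no pressure: full polyphony */

    /* Scale the voice budget down proportionally to the overshoot. */
    int n = (int)(max_polyphony * target / cpu_load);
    return n > 1 ? n : 1;            /* always keep at least one voice */
}
```

Under SCHED_FIFO this kind of cap matters because an overloaded audio thread cannot be preempted by ordinary processes, so it is better to drop voices than to starve the rest of the system.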
Let's list the threads and the cases here, before I get lost.
Good idea! :)
Case 1: Fast-render, single CPU core. One thread only. No need for any
synchronization.
This could be handled by identifying which "thread" is the synthesis
thread (first call to fluid_synth_one_block). Any function which
might need to synchronize in the multi-thread case, could check if the
calling thread is the synthesis thread or not and process the events
immediately or queue them accordingly. This would automatically take
care of the single-thread and multi-thread cases, without adding much
overhead.
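The thread-identification idea could be sketched as follows. The names are hypothetical (the real check would live inside fluid_synth_one_block() and the API functions), and it assumes the first call really is made by the single synthesis thread, so the store is uncontended:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool synth_thread_known = false;
static pthread_t   synth_thread_id;

/* Called at the top of the synthesis entry point; the first caller
 * becomes the designated synthesis thread. */
static void mark_synthesis_thread(void)
{
    if (!atomic_load_explicit(&synth_thread_known, memory_order_acquire)) {
        synth_thread_id = pthread_self();
        atomic_store_explicit(&synth_thread_known, true, memory_order_release);
    }
}

/* Any API function that mutates synth state asks this question, then
 * either processes the event immediately (synthesis thread) or queues
 * it (any other thread). */
static bool in_synthesis_thread(void)
{
    return atomic_load_explicit(&synth_thread_known, memory_order_acquire)
        && pthread_equal(synth_thread_id, pthread_self());
}
```

In the single-threaded fast-render case every call comes from the synthesis thread, so everything takes the direct path and the queue is never touched.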
Case 2: Fast-render, multiple CPU cores. One could benefit from using
additional threads here, but I'm not sure we should care about that.
I don't think it would be very difficult to add multi-core support at
this point, due to the self-contained nature of FluidSynth voices.
But I agree that it is lower priority.
Case 3: Live playing, single CPU core. One MIDI thread, one audio
thread. Would we benefit from additional threads? If not, who should do
the rendering work, the MIDI or the audio thread?
Agree, you wouldn't benefit from more threads in the single CPU core
case. In fact, if the MIDI processing is guaranteed not to block,
things would be better off if it was all in one thread.
Case 4: Live playing, multiple CPU cores. One MIDI thread, one audio
thread, several worker threads. Is that what you call the "synthesis thread"?
Yes. The main "synthesis" thread, would be the audio thread, since it
ultimately calls fluid_synth_one_block(). The MIDI thread could be
separate, but it could also be just a callback, as long as it is
guaranteed not to block.
Main synthesis thread's job:
1. Process incoming MIDI events (via queues or directly from MIDI
driver callback, i.e., MIDI event provider callbacks).
2. Synthesize active voices.
3. Mix each synthesized voice block into the output buffer.
#2 is where other worker synthesis threads could be used in the
multi-core case, by rendering voices in parallel with the main
synthesis thread. The main thread would additionally be responsible
for mixing the resulting buffers into the output buffer as well as
signaling the worker thread(s) to start processing voices.
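The three steps above could be sketched like this, in a deliberately toy single-threaded form. All of the types and names are illustrative stand-ins for the FluidSynth internals, and the "voice" here is just a constant amplitude rather than a real oscillator:

```c
#include <assert.h>
#include <string.h>

#define MAX_VOICES 4
#define MAX_EVENTS 8

typedef struct { int voice; float amp; } event_t;   /* toy "note-on" */
typedef struct { int active; float amp; } voice_t;

typedef struct {
    voice_t voices[MAX_VOICES];
    event_t events[MAX_EVENTS];
    int     nevents;
} synth_t;

static void synth_one_block(synth_t *s, float *out, int len)
{
    /* 1. Process incoming MIDI events queued for this block. */
    for (int e = 0; e < s->nevents; e++) {
        s->voices[s->events[e].voice].active = 1;
        s->voices[s->events[e].voice].amp    = s->events[e].amp;
    }
    s->nevents = 0;

    /* 2 + 3. Synthesize each active voice and mix it into the output. */
    memset(out, 0, sizeof(float) * len);
    for (int v = 0; v < MAX_VOICES; v++)
        if (s->voices[v].active)
            for (int i = 0; i < len; i++)
                out[i] += s->voices[v].amp;
}
```

In the multi-core variant, step 2's per-voice loop is what would be farmed out to worker threads, with the main thread doing the final mix in step 3.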
I'm somewhat following your discussion about queues and threads but I'm
a bit unsure which cases different sections apply to.
I'm trying to take care of all those cases :) The single core case
would incur slight additional overhead compared to now (to check
the thread origin of an event), but I think that would be very tiny
and it wouldn't suffer from the current synchronization issues when
being used from multiple threads.
A problem with separating note-on events from the rest is that you must
avoid reordering. If a note-off immediately follows the note-on, the
note-off must not be processed before the note-on. I guess this is
solvable though, it is just another thing that complicates matters a bit.
If the note-on and off events are originating from the same thread,
then they are guaranteed to be processed in order, since they would be
queued via a FIFO, or processed immediately if originating from the
synthesis thread.
I changed my mind somewhat from what I said before though, that the
fluid_voice_* related stuff should only be called from within the
synthesis thread. Instead, what I meant was that the fluid_voice_*
functions should only be called from a single thread for voices which
have already been started.
It seems like there are two public uses of the fluid_voice_* functions:
to create and start voices in response to the SoundFont loader's
note-on callback, and to modify a voice's parameters in real time.
I'm still somewhat undecided as to whether there would be any real
advantage to creating voices outside of the synthesis thread. The
note-on callback is potentially external user provided code, which
might not be very well optimized and therefore might be best called
from a lower priority thread (MIDI thread for example) which calls the
note-on callbacks and queues the resulting voices. Perhaps handling
both cases (called from synthesis thread or non-synthesis thread) is
the answer. Voices can be treated as self-contained structures up to
the point when they are started.
The current implementation of being able to modify existing voice
parameters is rather problematic though, when being done from a
separate thread. Changes being performed would need to be
synchronized (queued). In addition, using the voice pointer as the ID
of the voice could be an issue, since there is no guarantee that the
voice is the same, as when it was created (could have been stopped and
re-allocated for another event). I think we should therefore
deprecate any public code which accesses voices directly using
pointers, for the purpose of modifying parameters in realtime. We
could instead add functions which use voice ID numbers, which are
guaranteed to be unique to a particular voice. I'm not sure how many
programs would be affected by this change, but I know that Swami would
be one of them.
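One common way to build such IDs, sketched here as a hypothetical scheme rather than proposed FluidSynth API, is to pack the voice's slot index together with a generation counter that is bumped whenever the slot is re-allocated, so a stale ID from a recycled voice simply fails to resolve:

```c
#include <assert.h>
#include <stdint.h>

#define NVOICES 64

typedef struct { uint32_t generation; int in_use; } voice_slot_t;
static voice_slot_t slots[NVOICES];

/* Pack slot index (low 32 bits) with its generation (high 32 bits). */
static uint64_t voice_id(int slot)
{
    return ((uint64_t)slots[slot].generation << 32) | (uint32_t)slot;
}

/* Re-allocating a slot bumps the generation, invalidating old IDs. */
static uint64_t voice_alloc(int slot)
{
    slots[slot].generation++;
    slots[slot].in_use = 1;
    return voice_id(slot);
}

/* Resolve an ID back to a slot index, or -1 if the voice it named has
 * since been stopped and recycled for another event. */
static int voice_lookup(uint64_t id)
{
    int slot = (int)(uint32_t)id;
    uint32_t gen = (uint32_t)(id >> 32);
    if (slot < 0 || slot >= NVOICES || !slots[slot].in_use
        || slots[slot].generation != gen)
        return -1;
    return slot;
}
```

A caller holding an ID for a voice that was stolen for a new note-on gets a clean lookup failure instead of silently modifying the wrong voice, which is exactly the hazard raw pointers have today.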
No, resizing would not be possible. It would just be set to a
compile-time maximum, which equates to the maximum expected events per audio
buffer. I just implemented the lock-free queue code yesterday, using
glib primitives, though untested.
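For reference, a fixed-size single-producer/single-consumer queue of this kind can be sketched as below, using C11 atomics rather than the GLib primitives mentioned above (the event layout and names are illustrative). Because it is a FIFO, events pushed from one thread, such as a note-on followed by a note-off, can never be reordered:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QUEUE_SIZE 256   /* compile-time maximum events per audio buffer */

typedef struct { int type; int chan; int key; } midi_event_t;

typedef struct {
    midi_event_t buf[QUEUE_SIZE];
    atomic_uint  head;   /* next slot to write; touched only by producer */
    atomic_uint  tail;   /* next slot to read; touched only by consumer  */
} event_queue_t;

/* Producer side (e.g. the MIDI thread). Returns false when full, so the
 * API call can report a failure code instead of blocking. */
static bool queue_push(event_queue_t *q, midi_event_t ev)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QUEUE_SIZE)
        return false;                                  /* queue maxed out */
    q->buf[head % QUEUE_SIZE] = ev;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side (the synthesis thread), drained once per audio block. */
static bool queue_pop(event_queue_t *q, midi_event_t *ev)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (head == tail)
        return false;                                  /* queue empty */
    *ev = q->buf[tail % QUEUE_SIZE];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}
```

The acquire/release pairing makes the event payload written before the `head` store visible to the consumer that observes the new `head`, which is what makes the queue safe without locks.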
That would apply to case 3 and 4 (live playing), but for case 1 and 2
(rendering) I would prefer not to have that limitation. I'm thinking
that you probably want to do a lot of initialization at time 0. But
perhaps we can avoid the queue altogether in case 1 and 2?
Indeed. As I wrote above, functions could detect from which thread
they are being called from and act accordingly (queue or execute
directly). If for some reason a queue is maxed out though, I suppose
the function should return a failure code, though it risks being
ignored by the caller.
Sure, if it improves things in the short term, go ahead and add it. Fixing
FluidSynth's threading issues, and doing it right, is likely going to be
a bit of a larger task than doing simple fixes. So it might be good to
try and address the more severe issues while coming up with a
long-term solution.
I've done so now. I did it in two steps, first all the underlying work
that enables the sequencer to work as a buffer for MIDI threads
(revision 193), and then enabling that feature for the fluidsynth
executable (revision 194). When the synth has better thread safety on
its own, we can revert revision 194 only.
I would really like some feedback from the community about these
changes, to ensure they don't change the responsiveness or latency, or
mess anything else up. I've tested it with my MIDI keyboard here and I
didn't notice any difference, but my setup is not optimal.
Sounds great! It would be nice to put together a test suite for
FluidSynth, for testing rendering, latency and performance. A simple
render to file case with a pre-determined MIDI sequence would be a
nice benchmark and synthesis verification tool. I'll look over your
changes at some point soon and provide some feedback.
In an open-source project where people can suddenly disappear without
notice, my assumption is that taking a lot of small steps (while keeping
it stable) is often better than taking one big step, even if those small
steps sometimes mean additional work.
True. I'm not planning on disappearing at any point soon though and I
hope you aren't ;)
Nope, although the amount of time I can spend on this project varies.
But that is the case for all of us I guess.
Yes, definitely. I've been much more engaged in the project in these
past months than I ever have been, which I think is a good thing ;)
My responses keep getting larger.. I'll be putting my words into code soon.