Re: [fluid-dev] Thread safety
Thu, 21 May 2009 14:40:21 +0200
Thunderbird (X11/20090409)
> Quoting David Henningsson <address@hidden>:
>> However, if you say that Swami expects fluid_synth_t to be somewhat
>> thread-safe (just had a five-minute look at qsynth, and at first glance it
>> seems to expect the same), perhaps it would be bad to enforce a
>> single-threaded synth at this point. Besides, I prefer the current
>> behavior of starving the MIDI threads to underrunning the audio in a
>> heavy-load situation. And that would break.
> Yeah, I'm tending to think that trying to serialize everything into one
> thread could be a bad idea. Especially if we want to try to provide
> multi-CPU voice processing support in the future. Your initial analysis
> has got me excited though on this subject and I would like to put some
> time into this too.
Excitement is a good thing. It would be nice to know your plans in terms
of time and effort, that is, if you can tell in advance. :-)
> I think it would help to identify the areas where mutual exclusion needs
> to occur, between the synthesis thread and MIDI event thread(s).
> Just some initial thoughts, from memory (haven't actually reviewed the
> code at this point):
> - fluid_synth_t parameters (reverb/chorus/etc)
> - Voice pool activate/deactivate of a voice and voice acquisition
> - Voice parameter changes
That's probably correct.
> A lot of FluidSynth assumes that integers can be atomically assigned
> to. I think this assumption does not hold true on multi-CPU systems,
> which could lead to unexpected behavior. Identifying groups of
> parameters which are dependent on each other is also needed.
> I wonder what the effect would be of using mutexes. In real life, how
> bad would the lock contention be and would it lead to audio underruns?
> Identifying the areas where mutexes are needed would help with this.
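The concern about atomic integer assignment can be made concrete. FluidSynth itself would more likely use GLib's atomic primitives, but as a minimal, self-contained sketch (the parameter name is invented, not an actual fluid_synth_t field), C11 atomics make the guarantee explicit instead of relying on plain `int` assignment being atomic:

```c
#include <stdatomic.h>

/* Hypothetical synth parameter shared between the MIDI thread (writer)
 * and the audio/synthesis thread (reader).  A plain "int" store is not
 * guaranteed to be atomic on every multi-CPU platform; atomic_int is. */
static atomic_int reverb_level = 0;

/* MIDI thread side: publish a new parameter value. */
static void set_reverb_level(int level)
{
    atomic_store(&reverb_level, level);
}

/* Audio thread side: read the most recently published value. */
static int get_reverb_level(void)
{
    return atomic_load(&reverb_level);
}
```

This covers single independent parameters only; groups of parameters that must change together would still need a mutex or some other grouping mechanism, which is exactly the "dependent parameters" problem mentioned above.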
Perhaps we could have "big" mutexes (i.e. one for everything), but at the
voice level instead of the synth level. At the synth level we would try
to use the atomic functions; I assume that's doable.
The downside is more mutexes (one per playing voice), but on the other
hand each critical section should finish more quickly. So we win in the
worst case but lose in the average case.
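A rough sketch of the per-voice "big mutex" idea, using POSIX mutexes (the struct fields and function names are illustrative only, not FluidSynth's actual fluid_voice_t layout):

```c
#include <pthread.h>

/* Hypothetical voice with its own lock, so contention is per voice
 * rather than synth-wide.  Not the real fluid_voice_t structure. */
typedef struct {
    pthread_mutex_t lock;   /* serializes MIDI-thread vs. audio-thread access */
    int             active;
    float           gain;
} voice_t;

static void voice_init(voice_t *v)
{
    pthread_mutex_init(&v->lock, NULL);
    v->active = 0;
    v->gain = 1.0f;
}

/* MIDI thread: change a parameter while holding only this voice's
 * mutex, so the audio thread can keep rendering the other voices. */
static void voice_set_gain(voice_t *v, float gain)
{
    pthread_mutex_lock(&v->lock);
    v->gain = gain;
    pthread_mutex_unlock(&v->lock);
}

/* Audio thread: briefly take the same lock while reading. */
static float voice_get_gain(voice_t *v)
{
    pthread_mutex_lock(&v->lock);
    float g = v->gain;
    pthread_mutex_unlock(&v->lock);
    return g;
}
```

The trade-off described above shows up directly: the audio thread takes one short lock per voice per buffer (many small critical sections in the average case), but a MIDI event can never stall it for longer than one voice's worth of work (better worst case).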
Another, wilder idea would be to preprocess the voices so that we are
always a few buffers ahead for every voice. A lower-priority thread (or
several, on multicore CPUs) could handle this. The audio thread would
then just mix the voices together. Care has to be taken that MIDI events
throw the already-rendered buffers away.
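The render-ahead idea could look roughly like this, with a generation counter to let MIDI events invalidate buffers that were rendered with now-stale parameters (a sketch only; the buffer counts, names and layout are all invented):

```c
#include <string.h>

#define AHEAD  4    /* how many buffers we render ahead per voice */
#define FRAMES 64   /* samples per buffer */

/* Hypothetical per-voice pre-render state.  A low-priority worker
 * fills buffers in advance; the audio thread only mixes them. */
typedef struct {
    float buf[AHEAD][FRAMES];
    int   filled;       /* buffers rendered ahead so far */
    int   generation;   /* bumped whenever a MIDI event invalidates them */
} voice_prerender_t;

/* Worker thread: render one more buffer ahead (here just a constant
 * sample, standing in for real synthesis).  Returns the generation
 * the buffer belongs to, so stale output can be detected later. */
static int prerender_one(voice_prerender_t *p, float sample)
{
    if (p->filled < AHEAD) {
        for (int i = 0; i < FRAMES; i++)
            p->buf[p->filled][i] = sample;
        p->filled++;
    }
    return p->generation;
}

/* MIDI thread: a parameter change makes the pre-rendered audio stale,
 * so drop the queued buffers and start a new generation. */
static void prerender_invalidate(voice_prerender_t *p)
{
    p->filled = 0;
    p->generation++;
}
```

In a real implementation the generation check and the filled/empty counters would themselves need atomic or locked access between the worker and the audio thread; the sketch only shows the discard-on-MIDI-event bookkeeping.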
>> My patch - in its current state - does not touch the synth. It changes
>> the sequencer according to the text above, and fluidsynth.c manually
>> inserts a sequencer between the midi router and the synth. So
>> libfluidsynth applications such as Swami and qsynth will neither suffer
>> nor gain from this patch, unless they do the same.
>> What do you say if we leave it at that for the moment, I commit the
>> patch and we can all test it to see if we find any difference in latency
>> or stability when we use fluidsynth from the command line?
>> And at a later point in time we could review the synth threading in
>> more depth, to see if we can improve the situation (with regard to
>> segfaults, parallelization, stalls etc.)?
> Let's hold off on the patch for the moment. I want to have a look over
> the lot myself to get a better idea of how we might proceed.
My idea was that if I commit my patch in its current state, at least we
will have something more stable (a fix for ticket #43). I also think the
possibility of routing MIDI events through the sequencer is a valuable
addition to the sequencer. And once we have improved the synth's thread
safety, we can simply revert the three lines in fluidsynth.c that insert
the sequencer into the chain. What do you think?