Re: [fluid-dev] Thread safety long-term thoughts

From: David Henningsson
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Thu, 26 Nov 2009 23:02:45 +0100
User-agent: Thunderbird (X11/20090817)

address@hidden wrote:
Quoting David Henningsson <address@hidden>:
If someone is playing along with the midi track, they can't have high
latency. On the other hand, I think it is only between songs there is a
problem, the rest should not be very time-consuming.
True, hadn't really thought of that scenario. Something to attend to at a later time.


Yes, it will work. As far as I know, all values which can be queried now work, in the sense that updates appear immediately. For all parameters except presets and polyphony, the value is set and accessed atomically by all threads.

If this includes the audio thread, don't we have a problem? If the
value is being read by the audio thread after being atomically set by
the MIDI thread(s), but before the corresponding event has arrived in
the queue, things will be inconsistent?

(Perhaps this is not such a big issue for the pitch bend, but there
could be other events where this problem could hurt more?)

I don't think there is an issue with this. Since the new value isn't
actually passed through the queue, the queued event is essentially just an update request. The latest value will always be the value assigned to the variable and the update event will ensure that the synthesis thread uses the latest value. Events are processed at whatever interval the fluid_synth_one_block() function is called in relation to the MIDI events. If more than one pitch bend event occurs within a given interval, the latest value will get used.

I can think of issues, but perhaps it is only in theory they can happen.

Imagine that we have a note sounding and a channel volume of 1, so the note is barely audible. Then comes a volume change to 2, then a note-off (with an instant release time), then a volume change to 127. Given a certain timing, it could happen that the volume change to 2 is never read and 127 is read instead, so the note will sound at volume 127 instead of 2 for a very short period of time.

Well, it isn't just for garbage collection now. It's also being used to handle program changes, which should happen ASAP.
Eh? I had a look at that code and it seems to screw up the fast-render and
embedded cases pretty badly, unless I'm missing something...?
Yeah, now that I think about it, it would. In the case of fast render, program changes need to occur synchronously as part of the synthesis process, but in the case of realtime playback they need to happen outside the synthesis thread. This seems to fall under the single- versus multi-threaded enable/disable.

We have so many use cases that it is easy to fix one and break another. I started writing something at http://fluidsynth.resonance.org/trac/wiki/UseCases earlier today, but I'm not sure it will be helpful; at least it is not very complete yet.

Yeah, I think something like that could be good, rather than trying to auto detect it. A simple API function like:
void fluid_synth_multi_thread_enable(fluid_synth_t *synth, int enable);

We have two kinds of multi-threading, a) we have an audio thread or we
don't, and b) we have either one, or more than one, thread accessing
the state machine. I think we should separate those cases if we made
such an API.

I don't understand the difference between the two or what distinction should be made. Can you clarify this a little, and what it might look like as an API? It seems to me that there are really only two cases we care about: single-threaded (audio synthesis and MIDI events occur synchronously, from the same thread) and multi-threaded, where MIDI events may occur in the audio thread or in other threads.

In the wiki page I quoted above, there are three properties and some short explanations, does it clarify things? I'm not sure how (and which of them) we should configure via the API though.

In the case of multi-threads I wonder if there would be any use scenarios where synthesis would be running faster than realtime.

I can't think of one currently.

Sounds good to me. So we can add an API function to set the mode. I guess the default should be multi-threaded? That should keep us as backwards compatible as possible with older software which doesn't yet know about the new API.


// David
