
Re: [fluid-dev] Thread safety long-term thoughts

From: David Henningsson
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Thu, 19 Nov 2009 22:28:35 +0100
User-agent: Thunderbird (X11/20090817)

address@hidden wrote:
Quoting David Henningsson <address@hidden>:
A difference between 1.0.9 and 1.1.0 is that the audio thread got more
of the state machine work to do, which makes it harder to preserve its
real-time properties. I admit I didn't consider this well enough when we
discussed thread safety last time.

Btw, with the sample timers we also have the entire MIDI file player (with its fopen calls etc.) inside the audio thread. I'm guilty of that, and it should be fixed by inserting a sequencer between the player and the audio thread, at least in real-time use cases. Perhaps something for 1.1.2.

I'm sure the biggest time consumer in this regard is the note-on processing.

Assuming the note-on call makes no system calls, and the soundfont isn't paged out by the OS, the note-on event should complete within a good fixed time, or is there something else bugging us?

Here is an example of what I was talking about, in regards to the state machine depending heavily on the existing voices. Say, for example, that the pitch bend controller changes on a channel. All voices are then scanned, and those which are active on the given channel are modulated with respect to the pitch bend controller. The active voices and their parameters are private to the synthesis thread. I don't YET see what additional processing could be moved outside of the synth context for most events, beyond what is done right now. Though I could see perhaps grouping controller changes, so only one update/calculation occurs.
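The scan described above could be sketched roughly like this. All structure and function names here are invented for illustration; FluidSynth's real fluid_voice_t carries far more state than this:

```c
#include <stddef.h>

/* Hypothetical, simplified voice structure (not FluidSynth's actual
 * fluid_voice_t). */
typedef struct {
    int chan;        /* MIDI channel this voice plays on */
    int active;      /* nonzero while the voice is sounding */
    int pitch_bend;  /* cached bend value used by the DSP loop */
} voice_t;

/* On a pitch bend change, walk all voices and update those active on
 * the given channel. This must run in synthesis context, because the
 * voice array is private to the synthesis thread. */
static int apply_pitch_bend(voice_t *voices, size_t n, int chan, int bend)
{
    size_t i;
    int updated = 0;
    for (i = 0; i < n; i++) {
        if (voices[i].active && voices[i].chan == chan) {
            voices[i].pitch_bend = bend;  /* would trigger modulator recalc */
            updated++;
        }
    }
    return updated;  /* number of voices touched */
}
```

The point of the sketch is the dependency: the event cannot complete without reading voice state that only the synthesis thread may touch, which is why the scan is hard to move out of synthesis context.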

Perhaps there is not that much to gain performance-wise from moving things out of audio thread context then. It would just feel less messy, I guess, if we split the synth object into two parts, one with strict real-time requirements and one without.

And instead of creating shadow variables, all variables should belong to the state machine, unless explicitly needed by the voices directly.

In the case above, the pitch bend controller's current value belongs to the state machine. The new value is sent to the audio thread. Btw, if someone tries to read the pitch bend controller value just after having set it, will it work?
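A minimal sketch of that ownership model, with invented names: the state machine keeps the authoritative per-channel value and forwards updates to the audio thread through a queue. A read immediately after a write then sees the new value, because it reads the state machine's copy, never the audio thread's:

```c
#include <string.h>

#define QUEUE_LEN 64

typedef struct { int chan; int bend; } bend_event_t;

typedef struct {
    int bend[16];                   /* authoritative per-channel values */
    bend_event_t queue[QUEUE_LEN];  /* updates pending for the audio thread */
    int queued;
} state_machine_t;

static void sm_init(state_machine_t *sm)
{
    memset(sm, 0, sizeof *sm);
}

static void sm_set_pitch_bend(state_machine_t *sm, int chan, int bend)
{
    sm->bend[chan] = bend;  /* update the state machine's copy first */
    if (sm->queued < QUEUE_LEN) {
        sm->queue[sm->queued].chan = chan;  /* then hand off to audio thread */
        sm->queue[sm->queued].bend = bend;
        sm->queued++;
    }
}

static int sm_get_pitch_bend(const state_machine_t *sm, int chan)
{
    return sm->bend[chan];  /* never consults the audio thread */
}
```

With this arrangement the read-after-write question answers itself: the getter works, because the state machine, not the voice renderer, owns the value.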

The way it is now is much closer to the synth thread being a voice renderer than it was with 1.0.9. It seems like it's about identifying additional stuff to move outside of it. Note-on events are the only thing I can think of at the moment that could use some improvement. Can you think of any others?

You probably have a better overview than I have of the time and locking needed by various events. I'm thinking about presets, tuning, soundfont loading etc., but I guess they have already been moved out of synthesis context.

The midi thread and shell threads will do the state machine work, when
they make calls against the fluid_synth object.

We still need to do something about the Jack MIDI case, like queuing the events back to another thread (a return queue, for example), as is now being done with program changes.

Something like that: let the non-realtime stuff complete whenever it's done, but then we must queue even the simple events whenever non-realtime processing is in progress, to prevent reordering... (Sigh.)
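The ordering rule could look something like this (all names invented, just a sketch of the idea): simple events normally execute directly, but while a slow non-realtime operation is still pending, even simple events must go through the queue behind it so the audio thread sees everything in submission order:

```c
typedef enum { RAN_DIRECT, QUEUED } dispatch_t;

typedef struct {
    int slow_ops_pending;  /* count of non-realtime operations in flight */
    int queue_depth;       /* simple events waiting behind them */
} dispatcher_t;

static dispatch_t dispatch_simple_event(dispatcher_t *d)
{
    if (d->slow_ops_pending > 0) {
        d->queue_depth++;   /* preserve ordering: go through the queue */
        return QUEUED;
    }
    return RAN_DIRECT;      /* nothing pending, safe to run immediately */
}

static void slow_op_begin(dispatcher_t *d)
{
    d->slow_ops_pending++;
}

static void slow_op_finish(dispatcher_t *d)
{
    d->slow_ops_pending--;
    if (d->slow_ops_pending == 0)
        d->queue_depth = 0;  /* drained: queued events ran in order */
}
```

The "sigh" is warranted: the cost of letting slow operations complete asynchronously is that every fast path now has to check for them first.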

The return event queue thread is mainly used for garbage collection,
when the audio thread leaves garbage behind that cannot be collected
within a fixed enough time. Hopefully it can be removed if the audio
thread gets less to do, but I'm not certain about this.

I think it would be hard to remove it, since its purpose is to handle those operations which shouldn't be done in the audio thread. Though I also think it would be nice to be rid of it.

Perhaps an option to let the libfluidsynth user call a function periodically if he wants to skip the additional thread.
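That option might look like the following sketch (names invented, not the libfluidsynth API): instead of the library spawning a return-queue thread, the user polls a function periodically to collect whatever garbage the audio thread left behind:

```c
typedef struct {
    int use_gc_thread;   /* 0: the user calls synth_process_pending() */
    int pending_garbage; /* items the audio thread could not free itself */
} synth_ctx_t;

/* Periodic call from the user's own thread; returns the number of
 * garbage items collected this call (voices, buffers, etc. in a real
 * implementation). */
static int synth_process_pending(synth_ctx_t *s)
{
    int n = s->pending_garbage;
    s->pending_garbage = 0;
    return n;
}
```

The trade-off is that garbage then lingers until the next poll, so the user has to call often enough that the audio thread never runs out of free voices.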

I was thinking more about getting rid of the need to test whether an event is occurring in synthesis context or not (checking thread IDs). Most event functions probably won't be explicitly called from within synthesis context. Therefore we can assume that if FluidSynth is multi-thread enabled, then any call to the public event functions should always be queued (perhaps the act of calling any of these functions could switch on multi-thread queuing at runtime). The exception is when MIDI events are being processed in the audio thread, which occurs with Jack and non-realtime rendering. If we created a non-queuing equivalent of fluid_synth_handle_midi_event, it could be used for that purpose.
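The two entry points and the runtime latch could be sketched like this (names invented; only fluid_synth_handle_midi_event is a real FluidSynth function, and it is not used here):

```c
typedef struct {
    int mt_enabled;  /* latched on the first public API call */
    int queued;      /* events routed through the queue */
    int direct;      /* events executed in synthesis context */
} api_ctx_t;

/* Public entry point: the first call switches on multi-thread queuing
 * for good, after which all public events go through the queue. */
static void public_event(api_ctx_t *c)
{
    c->mt_enabled = 1;
    c->queued++;
}

/* Non-queuing equivalent, for MIDI events already being processed
 * inside the audio thread (Jack MIDI, non-realtime rendering). */
static void internal_event(api_ctx_t *c)
{
    c->direct++;  /* executed immediately, no thread-ID check needed */
}
```

The appeal is that neither path ever inspects thread IDs: the caller's choice of entry point already encodes which context it is in.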

Could we do something in create_audio_driver/delete_audio_driver to register with the synth, and say that it is now operating with an audio driver, and that means real-time operation? And if that is not the case, handle events directly. Likewise, we could make the MIDI drivers (and shell thread) register that we now have additional threads referencing the state machine, so the state machine must be multi-threaded.
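A sketch of that registration idea (all names invented): the audio driver tells the synth it is now rendering in real time, and each extra thread (MIDI driver, shell) registers itself, so the synth can decide at runtime whether events must be queued or may be handled directly:

```c
typedef struct {
    int realtime;       /* set while an audio driver is attached */
    int extra_threads;  /* MIDI drivers, shell thread, ... */
} synth_mode_t;

static void audio_driver_created(synth_mode_t *m) { m->realtime = 1; }
static void audio_driver_deleted(synth_mode_t *m) { m->realtime = 0; }
static void thread_registered(synth_mode_t *m)    { m->extra_threads++; }

/* Events need queuing only when real-time rendering is active, or when
 * more than one thread references the state machine. */
static int must_queue_events(const synth_mode_t *m)
{
    return m->realtime || m->extra_threads > 1;
}
```

The caveat from the note below applies: a libfluidsynth user driving the synth from a hand-rolled audio driver would bypass this registration entirely.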

(Note: potential screwup with libfluidsynth users creating their own audio drivers, although I assume it was screwed up in 1.0.9 the same way as well.)

// David
