Re: [fluid-dev] Thread safety long-term thoughts

From: josh
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Wed, 18 Nov 2009 12:52:26 -0800
User-agent: Internet Messaging Program (IMP) H3 (4.1.6)

Quoting David Henningsson <address@hidden>:
Ebrahim Mayat wrote:
Currently, fluidsynth has five threads.

5 process 6014 thread 0x6f03  0x936a11f8 in mach_msg_trap ()
4 process 6014 thread 0x6703 0x936a1278 in semaphore_timedwait_signal_trap ()
3 process 6014 thread 0x6203  0x936a11f8 in mach_msg_trap ()
2 process 6014 thread 0x1003  0x936a7c0c in __semwait_signal ()
* 1 process 6014 thread 0x10b  0x936ae6b8 in read$UNIX2003 ()

Of these threads, the shell process (fluidsynth.c) is the first one. The second thread begins with fluid_synth_return_event_process_thread, which, together with fluid_synth_one_block, is declared in fluid_synth.c.

The other three threads (please correct me if I am wrong) address the audio, MIDI and I/O procs.

How do the state machine and voice renderer fit into this picture?

Hi Ebrahim and thanks for the thread overview; it's nice to have that
point of view as well.

A difference between 1.0.9 and 1.1.0 is that the audio thread got more
of the state-machine work to do, making it harder to preserve its
real-time properties. I admit I did not consider this well enough when
we discussed thread safety last time.

I wonder how much extra processing this amounts to and which particular events are the biggest consumers. There is also the Jack MIDI case, where audio and MIDI are now processed in the same thread; that case would have suffered just as much in 1.0.9.

I'm sure the biggest time consumer in this regard is note-on processing. I like the fact that all voices for a note-on event start at the same time, but I imagine that could still be guaranteed while moving the note-on callbacks out of the synthesis thread, though I'm not sure of the best way to do this.

I remember we discussed the difficulties a little before, as far as voice allocation goes and how the voices get handed off to the synthesis context. The whole voice-creation process in the note-on callback is fairly independent, which helps. Something I hadn't thought of before is that, for a given note-on callback, the voices could be grouped together (perhaps as a linked list) and sent all at once to the synthesis thread upon return from the callback, rather than queuing each voice individually. The voices would then either need to be copied to the static pool of voices or perhaps used directly. Something like the return queue would be needed to reclaim voices.

Perhaps we should look into doing this soon, if many note-ons are indeed causing excessive CPU usage and resulting in xruns.

So I want to give the audio thread as little to do as possible, to make
it as easy as possible to avoid xruns. I propose that the audio thread
do just the voice-renderer work, not the state-machine work (which it
does in 1.1.0).

Here is an example of what I was talking about regarding the state machine depending heavily on the existing voices. Say, for example, the pitch-bend controller changes on a channel. All voices are then scanned, and those active on the given channel are modulated with respect to the pitch-bend controller. The active voices and their parameters are private to the synthesis thread. I don't yet see what additional processing could be moved outside of the synthesis context for most events, beyond what is done right now. Though I could see perhaps grouping controller changes, so that only one update/calculation occurs.

The way it is now, the synthesis thread is much closer to being a pure voice renderer than it was in 1.0.9. It seems like it's mostly about identifying additional work to move outside of it. Note-on events are the only thing I can think of at the moment that could use some improvement. Can you think of any others?

The MIDI and shell threads will do the state-machine work when they
make calls against the fluid_synth object.

We still need to do something about the Jack MIDI case, for example by queuing the events back to another thread (the return queue, for instance), similar to what is now done for program changes.

The return event queue thread is mainly used for garbage collection,
when the audio thread leaves garbage behind that cannot be collected
within a bounded time. Hopefully it can be removed if the audio
thread gets less to do, but I'm not certain about this.

I think it would be hard to remove it, since its purpose is to handle those operations which shouldn't be done in the audio thread. Though I also think it would be nice to be rid of it.

I'm not sure about the fifth (I/O procs) thread; is that something used
internally by libraries we use? What does it do?

// David

I was thinking more about getting rid of the need to test whether an event is occurring in synthesis context (checking thread IDs). Most event functions probably won't be explicitly called from within the synthesis context. Therefore we can assume that if FluidSynth is multi-thread enabled, the public event functions should always queue when called (perhaps the act of calling any of these functions could switch on multi-thread queuing at runtime). The exception is when MIDI events are processed in the audio thread itself, as happens with Jack MIDI and non-realtime rendering. If we created a non-queuing equivalent of fluid_synth_handle_midi_event, it could be used for that purpose.

At the moment, I can't think of any cases where this would be an issue. But I haven't fully thought it through yet.

