[fluid-dev] Thread safety long-term thoughts

From: David Henningsson
Subject: [fluid-dev] Thread safety long-term thoughts
Date: Mon, 16 Nov 2009 06:47:07 +0100
User-agent: Thunderbird (X11/20090817)

While the recent thread safety improvements are much better than the previous handling (which crashed unpredictably), the recent postings, the shadow variable workaround, and the multi-core support got me thinking. See this as long-term thoughts for discussion, rather than something I plan to implement in the near future.

The fluid_synth is becoming increasingly large and complex. I've started to think of it as two parts: a state machine and a voice renderer. The voice renderer is strictly real-time, and corresponds roughly to fluid_synth_one_block and everything below it.

The state machine is everything else, and a MIDI synthesizer is a state machine: people expect to set a variable in there and be able to read it back correctly afterwards. On the other hand, we can probably ease the real-time requirements on this part.

The state machine is multi-threaded by default, but we must be able to switch that off to avoid overhead for some use cases, such as embedded use and fast-render. The more MIDI events that can be handled within a fixed time, the better. But for the harder ones (e.g. program change) we are allowed to use mutexes.

This also means moving the thread boundary from before fluid_synth to between the state machine and the voice renderer. The voice renderer needs an in-queue of "voice events": events prepared to the point where the voice renderer can meet its real-time requirements.

This would also have the advantage of moving the sfloader callbacks outside the most real-time-sensitive code.

However, nothing new comes without a downside. Since the samples are used by the voice renderer, freeing a preset or soundfont is not easily solved. In outline: first, check whether an audio thread is running; if there isn't one (the embedded and fast-render cases), we can just go ahead and free. Otherwise, send a voice event saying we should kill the active voices referencing the preset or soundfont, and block the call until the audio thread has processed our message (simplest). Optionally we could return asynchronously, but then we still need some kind of garbage queue.

For the multi-core support to make a difference - assuming rendering/interpolating voices is what takes the most time - it would be nice to add a pre-renderer. This pre-renderer would be a function that copies the current state of a voice, assumes nothing happens to that voice, and renders a few buffers ahead, say 100-500 ms. It should run in one or several non-realtime threads, depending on the number of CPU cores. The voice renderer, after having processed its in-queue, then takes these pre-rendered buffers instead of rendering directly, provided nothing happened to the voice and the buffer is available.

// David
