
Re: [fluid-dev] Thread safety long-term thoughts

From: josh
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Mon, 16 Nov 2009 14:36:21 -0800
User-agent: Internet Messaging Program (IMP) H3 (4.1.6)

Quoting David Henningsson <address@hidden>:
address@hidden wrote:
Quoting David Henningsson <address@hidden>:
The fluid_synth is becoming increasingly large and complex. I've
started to think of it as two parts, a state machine, and a voice
renderer. The voice renderer is strict real-time, and corresponds
roughly to fluid_synth_one_block and everything below that.

Just for clarification, are you referring to code organization and/or code changes?

Both. To increase cohesion, let's say that the fluid_synth object could
own a fluid_voice_renderer object, which only deals with the voices and
not the MIDI.

Sounds like that could be a good direction. I don't fully have the picture yet of what that would look like, but I think the more standalone the core synthesis is from the rest of the state machine the better.

As I get back into libInstPatch and Swami development, I'm going to start seriously considering adding libInstPatch support to FluidSynth.

Ah, and then there is the Swami use case, which has its unique
requirements. Keep forgetting about that...

There aren't a lot of extra requirements that Swami has which aren't already satisfied by the SoundFont loader API. The main things I'd like to add are 24-bit and/or floating point audio sample support and the ability to register a per-sample callback for streaming the audio. Being able to change an effect on a group of voices in real time is also a requirement that may need some improvement.

If this goes well, then we may want to just make it the core of the instrument management in the future. Making FluidSynth GObject-oriented isn't much of a step beyond that. With GObject introspection being a hot topic these days, that could automatically lead to bindings for just about any language which supports it. This would be FluidSynth 2.0, though, and would probably change the API significantly.

So yet another U-turn on the glib dependency, dropping thoughts about
iPhone etc?

If we get the synthesis core to be fairly independent of the rest of the state machine, then perhaps we could make it a standalone library with minimal dependencies and have the best of both worlds. I wouldn't call it a U-turn. I'd like to think that FluidSynth can continue to progress without being overly hindered by special use cases.

Such a change would correspond with a new API and major version number (2.0). We could provide a FluidSynth 1.x compatibility library if that makes sense, or continue to develop both in parallel. This is definitely future thinking though and can probably wait to discuss in detail until after this next phase.

I don't think the multi-thread stuff adds too much overhead. If the fluid_synth_* functions get called from the synthesis thread, which is the case for fast render, then no queuing is done; the only real overhead is checking whether the caller is the synthesis thread and assigning the thread ID in fluid_synth_one_block.
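The dispatch described above could be sketched roughly as follows. All names here are illustrative toys, not the actual FluidSynth API; the real implementation queues events into a lock-free FIFO rather than a single slot.

```c
#include <pthread.h>
#include <assert.h>

/* Toy sketch: a setter either runs immediately (caller is the synthesis
 * thread) or is queued for the synthesis thread to apply later. */

typedef struct {
    pthread_t synth_thread;   /* claimed in the render call below */
    int synth_thread_set;
    int gain;                 /* some piece of synth state */
    int queued_gain;          /* stand-in for a real event queue */
    int has_queued;
} toy_synth;

/* Immediate variant, analogous to a _LOCAL function. */
static void toy_synth_set_gain_LOCAL(toy_synth *s, int gain)
{
    s->gain = gain;
}

/* Public entry point: compare the caller's thread ID. */
static void toy_synth_set_gain(toy_synth *s, int gain)
{
    if (s->synth_thread_set && pthread_equal(pthread_self(), s->synth_thread)) {
        toy_synth_set_gain_LOCAL(s, gain);   /* no queuing needed */
    } else {
        s->queued_gain = gain;               /* would be a lock-free queue */
        s->has_queued = 1;
    }
}

/* Analogous to fluid_synth_one_block: claim the thread ID, drain the queue. */
static void toy_synth_one_block(toy_synth *s)
{
    s->synth_thread = pthread_self();
    s->synth_thread_set = 1;
    if (s->has_queued) {
        toy_synth_set_gain_LOCAL(s, s->queued_gain);
        s->has_queued = 0;
    }
    /* ... render voices ... */
}
```

In the fast-render case the same thread calls both functions, so after the first block every setter takes the immediate path.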

In my long-term thoughts, I would like to avoid checking thread IDs as
far as possible. We should instead assume that only one thread calls
fluid_synth_one_block at a time, and that calls to the state machine
are either synchronized or not, depending on the configuration / use
case.

The current assumption IS that fluid_synth_one_block is only called by one thread at a time, but I hadn't initially considered that the actual thread might change.

We could provide another version of the MIDI event handler which calls the _LOCAL variants directly. This could be used in the case of Jack and rendering to disk for example. It could then be assumed that all other calls to fluid_synth functions need queuing if multi-threading is enabled.

It seems difficult, though, to determine FluidSynth's use case automatically with regard to threading, at least with the current API.

That seems like a good idea. A lot of state machine processing, though, relies on the current state of voices.

Hmm? It could be that Swami wants information about the current voices,
but otherwise I would say that information flows from the state machine
to the voice renderer only. What information does it need that comes
from the voices rather than from the current MIDI state?

I probably made that statement without having a more complete understanding of the logical division changes you are proposing.

I think reference counting would help a lot with this. When a voice is using a sample, it holds a reference. The sample references its parent SoundFont, etc. This is how libInstPatch works. If a SoundFont gets removed or changed, different presets get assigned, causing the samples to become unreferenced, ultimately freeing the SoundFont if no more references are held.
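The reference chain Josh describes (voice holds sample, sample holds its parent SoundFont) could be sketched like this. The names are hypothetical; libInstPatch's actual implementation is GObject-based, and as noted below the final free would have to be deferred out of the audio thread.

```c
#include <assert.h>

/* Toy sketch of chained reference counting: releasing the last
 * reference on a sample releases its parent SoundFont. */

typedef struct {
    int refcount;
    int freed;          /* demonstration flag; a real object would be freed */
} toy_sfont;

typedef struct {
    int refcount;
    toy_sfont *parent;  /* a sample holds a ref on its SoundFont */
} toy_sample;

static void sfont_unref(toy_sfont *sf)
{
    if (--sf->refcount == 0)
        sf->freed = 1;  /* would be free(sf), outside the audio thread */
}

static void sample_ref(toy_sample *s)
{
    s->refcount++;
}

static void sample_unref(toy_sample *s)
{
    if (--s->refcount == 0)
        sfont_unref(s->parent);  /* last user gone: release the parent */
}
```

So when a SoundFont is removed, its presets drop their sample references, and the SoundFont itself goes away only once the last playing voice finishes.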

We must make sure freeing a SoundFont never happens in the audio
thread, since it would then lose its real-time guarantees. So I don't
see how this solves the problem.

Right, which is why it's nice to have a lock-free return event queue, as there is now. I'd like to make it block and wake up when work is available, rather than polling on a timer. That would allow program changes to get passed back to lower priority as well. I'm still unsure about using a GCond with a mutex, though, and whether it introduces lock contention issues that muck up the high-priority thread. Perhaps using SCHED_RR instead would make sense? I haven't been able to find much info on this.
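The "wait for work" pattern being considered could look roughly like this (shown with POSIX primitives rather than GCond; all names are illustrative). The contention concern is the short critical section the poster takes; a real design might signal outside the lock or use a semaphore to bound the time the high-priority thread can block.

```c
#include <pthread.h>
#include <assert.h>

/* Toy sketch of a blocking return queue: the synthesis thread posts,
 * a low-priority worker sleeps until work arrives. */

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int pending;              /* stand-in for the event queue depth */
} return_queue;

/* Called by the synthesis thread when it posts a return event. */
static void rq_post(return_queue *q)
{
    pthread_mutex_lock(&q->lock);     /* kept as short as possible */
    q->pending++;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Called by the worker: sleeps until work is available, drains it. */
static int rq_wait(return_queue *q)
{
    int n;
    pthread_mutex_lock(&q->lock);
    while (q->pending == 0)           /* loop guards against spurious wakeups */
        pthread_cond_wait(&q->cond, &q->lock);
    n = q->pending;
    q->pending = 0;
    pthread_mutex_unlock(&q->lock);
    return n;
}
```

Compared with the current timer, the worker here consumes no CPU while idle and reacts as soon as an event is posted.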

For the multi-core support to make a difference - assuming
rendering/interpolating voices is what takes the most time - it would
be nice to add a pre-renderer. This pre-renderer would be a function
that copies the current value of a voice, assumes nothing happens to
that voice, and renders a few buffers ahead, say 100-500 ms. It should
run in one or several non-realtime threads, depending on the number of
CPU cores. Now the voice renderer, after having processed its in-queue,
takes these pre-rendered buffers instead of rendering them directly,
assuming nothing happened to the voice and the renderer has the buffer
ready.

That sounds like a good idea. There could be some change-prediction logic too, to select the voices deemed least likely to change (ones that haven't changed in a while, or that are known not to change for some time when playing back a MIDI file).
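The pre-rendering scheme above could be sketched as follows, with a generation counter standing in for "nothing happened to the voice". Everything here is hypothetical (a trivial ramp instead of real voice DSP) and only illustrates the snapshot-and-validate idea.

```c
#include <string.h>
#include <assert.h>

#define BLOCK 64

/* Toy voice: a generation counter bumped on every parameter change,
 * plus trivially small "render state". */
typedef struct {
    unsigned generation;
    float phase, step;
} toy_voice;

typedef struct {
    unsigned generation;   /* voice state the buffer was rendered from */
    float buf[BLOCK];
    int valid;
} prerender;

static void render_block(toy_voice *v, float *out)
{
    for (int i = 0; i < BLOCK; i++) {
        out[i] = v->phase;
        v->phase += v->step;
    }
}

/* Non-realtime thread: render ahead from a snapshot of the voice. */
static void prerender_block(const toy_voice *v, prerender *p)
{
    toy_voice copy = *v;          /* the live voice is untouched */
    p->generation = v->generation;
    render_block(&copy, p->buf);
    p->valid = 1;
}

/* Audio thread: use the pre-rendered buffer only if nothing changed. */
static int take_block(toy_voice *v, prerender *p, float *out)
{
    if (p->valid && p->generation == v->generation) {
        memcpy(out, p->buf, sizeof p->buf);
        v->phase += v->step * BLOCK;  /* advance past the consumed block */
        p->valid = 0;
        return 1;                     /* hit: rendering was free */
    }
    render_block(v, out);             /* stale: fall back to direct render */
    return 0;
}
```

On a miss the cost is the same as today; on a hit the interpolation work was already paid for on another core.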

Jimmy's post got me thinking that perhaps we should cache the result of
the rendering, especially for drum tracks which are often repetitive.
At least it will speed up rendering of techno music ;-)

Ha ha. Some profiling should really be done to figure out which areas are the most CPU intensive before delving too far into optimization. Caching rendered voices would be interesting, but there would likely still be the need to update voice state information in case something other than what was expected occurs. Such a cache might not get a hit rate that makes it worth it, and it would add more irregularity to CPU consumption, meaning potentially unexpected load spikes. Might be a fun task nonetheless ;)

Nice to hear your thoughts on FluidSynth's future. It would be good to get an idea of the next phase of development. As I mentioned, I'll primarily be focusing on libInstPatch and Swami for the coming months, so the next release should probably focus on bug fixes, optimization, voice stealing improvements, etc., while limiting the amount of new functionality or code overhaul.

I guess that's more reality-based, since we (well, mostly you) just
did a lot of code overhaul, and I won't have time to make this change
at the moment anyway. It's just that, given the recent posts, I can't
help thinking that perhaps we didn't do the thread safety in the best
way possible.

For the most part I'm happy with the thread safety changes. There was some oversight on my part regarding synth state querying/queuing, and I don't really like the fact that the thread ID has to be read on every call to fluid_synth_one_block. Once the return event queue is more responsive (waits for work), I think it will be in a pretty good state. Perhaps not the best architecture, but I think it's fairly efficient, thread safe, and much better organized.

// David


