Re: [fluid-dev] DSP testing

From: Peter Hanappe
Subject: Re: [fluid-dev] DSP testing
Date: Wed, 31 Mar 2004 20:03:42 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031107 Debian/1.5-3

Tim Goetze wrote:
[Peter Hanappe]

Tim Goetze wrote:

[Peter Hanappe]

I overlooked the case where the audio thread can be interrupted, which
can happen if fluidsynth runs without privileges. You are quite
right that this case poses a problem. A complication I see with the
FIFOs, though, is that when the user thread has to kill a voice, it sends
the 'kill' request to the audio thread and then has to wait for the
audio thread to confirm the request. So you have to introduce
synchronization even if you use lock-free FIFOs.

with the FIFO scheme proper, the note <-> voice mapping is done
entirely by the audio thread. imagine the audio thread reading a
complete MIDI stream and acting on all noteon/off, controller etc
events, calling the equivalent of fluid_synth_noteon() itself.

if the public interface (the user thread) wants to start a note, the
respective function simply writes to the FIFO and lets the audio
thread do the rest of the work.

The problem is that the audio thread cannot handle it all. The
noteon function calls upon the soundfont object to initialize the
voice. The soundfont object may do all kinds of non real-time stuff,
in particular loading files. So that has to be done by the user thread.
A solution would be to make the FIFO a stream of initialised voice
objects instead of noteon events. And then there could be a second
stream for events that modify the state of the voices (basically noteoff
and update_param). I'll take a look at the code to see how much change
that would involve.

i was suspecting it would not be all that easy. the initialized voice
stream is OK i guess. i'm usually writing pointers not instances to the
FIFO in such cases.

They could be pointers, yes.
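The pointer-in-FIFO idea could be sketched as a minimal single-producer/single-consumer ring. This is an illustration only: none of these names are fluidsynth's, and a real SMP build would need proper memory barriers (e.g. C11 atomics); plain `volatile` is not a synchronization primitive.

```c
#include <stddef.h>

#define FIFO_SIZE 8  /* must be a power of two; one slot stays empty */

typedef struct {
    void *slot[FIFO_SIZE];
    volatile unsigned rd;   /* advanced only by the audio thread */
    volatile unsigned wr;   /* advanced only by the user thread */
} ptr_fifo_t;

/* user thread: enqueue a pointer; fail if full, never block */
static int fifo_push(ptr_fifo_t *f, void *p)
{
    unsigned next = (f->wr + 1) & (FIFO_SIZE - 1);
    if (next == f->rd)
        return -1;              /* full */
    f->slot[f->wr] = p;
    f->wr = next;               /* publish; needs a release fence on SMP */
    return 0;
}

/* audio thread: dequeue a pointer, or NULL if the FIFO is empty */
static void *fifo_pop(ptr_fifo_t *f)
{
    void *p;
    if (f->rd == f->wr)
        return NULL;            /* empty */
    p = f->slot[f->rd];
    f->rd = (f->rd + 1) & (FIFO_SIZE - 1);
    return p;
}
```

Since each index is only ever advanced by one side, no lock is needed: the user thread pushes pointers to initialized voices and the audio thread pops them in its render callback.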

we'd need another stream with 'killed' voices in this scheme, actually
it doesn't seem so simple to do anymore.

I'd suggest that the synthesizer has an internal set of voices that
it uses for the audio synthesis. When the audio thread picks up a
voice from the FIFO, it copies the voice's data to one of the
internal voices and leaves the voice in the FIFO for reuse.
The FIFO could simply be a round-robin buffer with a read and a
write pointer.
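That reuse scheme could look like this: the ring holds the voice structs themselves, the user thread fills the slot at the write index, and the audio thread copies it out into one of its internal voices, leaving the slot allocated for the next note. All names are illustrative, and `volatile` again stands in for real barriers.

```c
#include <stddef.h>

enum { VOICE_RING = 8 };

typedef struct { int key, vel; /* ... sample data, envelopes ... */ } voice_t;

typedef struct {
    voice_t slot[VOICE_RING];
    volatile unsigned rd;   /* advanced only by the audio thread */
    volatile unsigned wr;   /* advanced only by the user thread */
} voice_ring_t;

/* user thread: next free slot to initialize, or NULL if the ring is full */
static voice_t *ring_begin(voice_ring_t *r)
{
    unsigned next = (r->wr + 1) % VOICE_RING;
    return next == r->rd ? NULL : &r->slot[r->wr];
}

/* user thread: publish the slot filled after ring_begin() */
static void ring_commit(voice_ring_t *r)
{
    r->wr = (r->wr + 1) % VOICE_RING;
}

/* audio thread: copy the next pending voice into an internal voice;
   the slot itself stays in the ring for reuse */
static int ring_take(voice_ring_t *r, voice_t *internal)
{
    if (r->rd == r->wr)
        return 0;               /* nothing pending */
    *internal = r->slot[r->rd];
    r->rd = (r->rd + 1) % VOICE_RING;
    return 1;
}
```

The non-RT initialization happens in the slot on the user-thread side; the audio thread's part is a plain struct copy, which is RT-safe.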

do you think that instead it would be feasible to split the voice
initialization work into non-RT and RT parts?

this way, the user thread could ask the soundfont to prepare the
samples and do whatever else non-RT needs to be done, without actually
touching the voice struct. after this call returns, the user thread
writes the noteon to the stream, and the audio thread then asks the
soundfont to do the rest of the setup, knowing for sure that this call
is RT-compliant.
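A hedged sketch of that split, with every name hypothetical (this is not fluidsynth API): the user-thread half does anything that may block, and the audio-thread half only copies fields, so it is RT-safe by construction.

```c
#include <stdlib.h>

/* result of the non-RT preparation step */
typedef struct { float *samples; long len; } prepared_t;

/* a playing voice, owned by the audio thread */
typedef struct { const float *samples; long len; long pos; } voice_t;

/* user thread: anything that may block (file I/O, allocation) lives here */
static int sf_prepare(prepared_t *p, long len)
{
    p->samples = calloc(len, sizeof *p->samples);  /* non-RT: allocates */
    if (!p->samples)
        return -1;
    p->len = len;
    return 0;
}

/* audio thread: only field copies; no allocation, no I/O, no locks */
static void sf_start_voice_rt(voice_t *v, const prepared_t *p)
{
    v->samples = p->samples;
    v->len = p->len;
    v->pos = 0;
}
```

Note that nothing in this sketch pins the soundfont's state between the two phases, which is exactly the objection raised in the reply that follows.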

I've thought of that. But when the audio thread picks up the voice,
how can you guarantee that the soundfont and its sample cache are
still in the same state? There could have been a MIDI program change
in between that messes things up.



