Re: [fluid-dev] DSP testing

From: Tim Goetze
Subject: Re: [fluid-dev] DSP testing
Date: Wed, 31 Mar 2004 13:52:08 +0200 (CEST)

[Peter Hanappe]

>Tim Goetze wrote:
>> i think that the safe way to handle this is to decouple the synth
>> interface (*_noteon() and friends) from the engine. a possible
>> solution is to queue commands in a lock-free FIFO that the engine
>> thread reads from at the start of an audio cycle, and that
>> fluid_synth_noteon() and friends write to. thus, complete execution
>> of noteons and the like is guaranteed.
>I would like to avoid lock-free FIFOs if possible. FIFOs don't make the
>design of the synthesizer simpler. I think it is possible to do without.
>I'm thinking of using "voice groups":
>1) make noteon() atomic. (currently, the synchronization is up to the
>    calling thread but that should be changed).
>2) all the voices created during the call to noteon are in the same
>    voice group and should be phase locked.
>3) a voice starts playing when the voice is turned on and when
>    the voice group is turned on. Currently, only the voice is checked.
>Changing and checking whether a voice group is on/off can be done using
>atomic operations. No need for locking, no need for FIFOs.

 agreed, FIFOs would not make things simpler, and if we can do without
them, so much the better.

 otoh, i see some trouble with making noteon() atomic. the problem is
that while you can make _individual members_ of the voice struct
atomic, there's no way to make the _whole structure_ atomic without a
lock. below i'll try to outline one case where the user-update scheme
will potentially run into trouble.

 now, given that the audio thread must _never_ acquire a lock (if it
does, we have a potential source of dropouts, no matter how briefly
the lock is held), the only way i see to guarantee the atomicity of
voice updates is to access the voice structs _exclusively_ from the
audio thread.

 in essence, i think your scheme may well work. my fear is that it is
very hard to understand its implications because the voice struct as a
whole is not atomic. in consequence, while the sources may look
simpler, understanding what happens precisely can be almost
impossible: given that both user and audio threads can be preempted at
any point in the code we have literally thousands of possible
interaction combinations between the voice struct reader/writer code.

 the FIFO approach solves the synchronization problems once and for
all. we'll never have to think about "in what order can i access the
members of voice*?" again if it's only ever done in the audio thread.
one dead-simple design for such a FIFO, and the one with the best
code reuse, is probably to read/write a MIDI byte stream a la /dev/midi.

>I don't think there's a need for locking anyway. (BTW, I've commented
>out the use of the synth->busy lock in fluid_synth.c). I'll try to
>explain. The main data structure shared between the user thread and the
>audio thread is the voice structure. When the voice is not
>playing, the user thread can initialize a voice and then toggle the
>playing state. This can be done atomically. Once a voice is playing the
>audio thread will start accessing it.

 but what happens when a voice needs to be killed because a noteon
demands more voices than the polyphony setting allows? your scheme
requires this to happen in the user thread.

 now imagine the audio thread gets preempted after having decided a
voice is valid to play, i.e. in the middle of an audio cycle (we're
not always running privileged, so this can happen). before the audio
thread is woken again, the user thread decides to kill the voice and
reuse it.

 things get very messy now because the audio thread completes the
audio cycle with some initialization done on the previous contents of
the voice structure, which are not valid with respect to the current
contents of *voice. the actual behaviour depends heavily on the
ordering of the audio cycle code; in any case, it is not predictable.

>While playing, the user thread may change some generator
>value. This is just a matter a writing a new float value in the voice
>structure. This can be done atomically, too (no?).

 yes, on most platforms aligned float writes are atomic, just like int
writes. only some embedded devices make an exception here.

>I hope this analysis is correct (and clear). The current implementation
>doesn't use explicit atomic operators to set/get a value. It uses plain
>integer or float assignments and reads but that can be changed and
>doesn't alter the analysis above.

 yes, you don't need to go beyond simple assignments to get atomic
access. (on most platforms, that is; once again, embedded devices are
the headache.)

