
Re: [fluid-dev] New development

From: Josh Green
Subject: Re: [fluid-dev] New development
Date: Mon, 26 Jan 2009 16:49:51 -0800

On Tue, 2009-01-27 at 01:12 +0100, Bernat Arlandis i Mañó wrote:
> > About new development: there is an improvement to fluid that I had
> > worked on two years ago in a private branch, and that I think should
> > be integrated into the main code base. I don't know whether it should
> > go into 1.x or 2.x, because it changes the API a bit.
> > 
> > The bug is with the sequencer: in short, the sequencer uses the
> > computer clock to trigger its sequenced events, whereas the audio
> > buffers are requested and created by the sound card when it needs
> > them, which may be ahead of real time, resulting in events not being
> > triggered at the right moment in the audio stream. In real life,
> > audio cards do request audio buffers ahead of time, sometimes far
> > ahead and in bulk (like 16 buffers at once), leaving the sequencer
> > no time to trigger its events at the right moment in the audio
> > stream.
> > 
> > I have fixed this by:
> > - making the sequencer use the audio stream as its clock
> > - calling the sequencer from the synth audio callback, so that
> >   sequenced events are inserted into the audio buffer right before
> >   the audio is rendered
> > 
> > This implied a small change in the API, because the sequencer now
> > depends on the synth to get its clock (which was not the case
> > before).
> > 
> > How should I proceed to include it in the main code base?
> > 
> > ++ as
> > 
> > 
> Maybe I'm not understanding what he's done, but it sounds to me like
> he's describing simple, well-known sound card latency. I don't see how
> it relates to what you're talking about.
> I don't know why latency would be a problem when playing MIDI files;
> maybe it's another problem. It might be that his system timer has a
> low resolution and MIDI file playback is affected by that; I think
> fluidsynth tries to get a 4 ms period system timer. On Linux you can
> fix this easily by configuring a higher system timer resolution in the
> kernel; I don't know about other systems.

I think the difference is between clocking the MIDI events of a MIDI
file from the system timer versus using the sound card as the timing
source (i.e., how much audio has actually been played back).  It makes
sense to me to schedule the events based on audio playback.  That would
yield identical output between successive renders of a MIDI file, which
is what we want.  I don't see a problem with this change and I think it
would vastly improve things.  There might be a little more overhead in
MIDI event processing, but it would also give more accurate timing.

Does this adequately describe your solution, Antoine?

Best regards,
