Re: [fluid-dev] Callback driven rendering?

From: David Olofson
Subject: Re: [fluid-dev] Callback driven rendering?
Date: Tue, 9 Sep 2003 13:03:45 +0200
User-agent: KMail/1.4.3

On Tuesday 09 September 2003 06.47, Josh Green wrote:

The overview says

        "The function fluid_synth_process is still experimental
        and its use is therefore not recommended but it will
        probably become the generic interface in future versions."

Is there any design document or something on this? Most importantly, 
if you can ask for N buffers, who decides which buffer comes from 
which channel?

I figured, *if* I'm going to hack FluidSynth, I might as well do it 
right. :-)

> > I'm thinking about routing FluidSynth's output, preferably
> > channel by channel, directly into the FX network of Audiality,
> > and having Audiality's MIDI input and/or internal sequencer drive
> > both FluidSynth and Audiality's internal synth. That is, using
> > FluidSynth to add "structurally clean" SF2 support to Audiality.
> Sounds good. I'm not sure if FluidSynth has the channel by channel
> audio routing capability that you mention. Do you mean audio or
> MIDI channel? Like being able to route the individual left/right
> audio channels, or each MIDI channel?

Both, actually. What I want is to plug FluidSynth channels into 
Audiality channels, so it appears as if Audiality is playing 
SoundFonts natively. That is, the Audiality channel gets control 
input from somewhere (internal sequencer, application, external MIDI 
or whatever) and passes it on to a FluidSynth channel. Later, when 
it's time for audio processing, Audiality gets the raw outputs and 
sends from the FluidSynth channel and routes the audio just as if it 
had come from the internal synth.

So, I would need the ability to somehow address channels, each as one 
MIDI channel, one dry stereo output, one stereo reverb send and one 
stereo chorus send, and what have you. (I'd probably turn the 
FluidSynth FX units off, as I can't really use them without an extra 
"roundtrip". And I'm probably going to steal the code anyway. ;-)

> > //David Olofson - Programmer, Composer, Open Source Advocate
> I'm currently the maintainer of FluidSynth, but lack much knowledge
> in its workings, and I don't see myself adding any new
> functionality in the near future, since I'm working a lot on my
> other projects (Swami - http://swami.sourceforge.net).

Just played around a bit with Swami the other day, BTW. Looks great! I 
guess I'll be using it a bit in the future. :-)

> So unless
> someone else steps up to the plate, or Markus and/or Peter get some
> free time, there probably won't be a whole lot of new developments
> for the time being. Feel free to ask questions about the current
> code base and API though. I'm also open to ideas for new features,
> but they will probably be placed on a list for future active
> development. Cheers.

Ok. Well, if you don't mind, I might have a go at implementing the 
stuff I need at some point.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
