
Re: [fluid-dev] API design: fluid_synth_process()


From: Ceresa Jean-Jacques
Subject: Re: [fluid-dev] API design: fluid_synth_process()
Date: Wed, 2 May 2018 19:41:12 +0200 (CEST)

>In fact I believe that no matter how many output buffers the user calls it with, fluid_synth_process should always render all playing voices to those buffers (provided that nout >= 2).

Yes, this is what fluid_synth_process() currently does (apart from the internal "MIDI channel to output buffers" mapping).

 

>Just following those "normal" audio buffers we could place the effects buffers

Yes, it could be done, as the availability of these effect buffers is also important.

 

>If surround audio would ever be implemented, channel layout could look like:
>
> out[ i*5 + 0 ] = left_front_buffer_channel_i
> out[ i*5 + 1 ] = right_front_buffer_channel_i
> out[ i*5 + 2 ] = center_front_buffer_channel_i
> out[ i*5 + 3 ] = left_rear_buffer_channel_i
> out[ i*5 + 4 ] = right_rear_buffer_channel_i

 

You mean 5.1 (i.e. 6 channels)?

Sorry, I don't understand this index of 5. Could the surround buffers also follow any of the other known buffers?

 

>We could directly use this channel layout for rvoice_mixer internally, provided that the requested number of audio frames to synthesize is a multiple of fluid_synth_get_internal_bufsize() so we don't have to hold any temporary audio buffer.

Yes, but we need to keep in mind that the requested number of audio frames and the sample format are often imposed by the host audio API/driver.

Currently, the mixer's internal temporary audio buffer is quite flexible and performant.

 

>Disadvantage: Not very user friendly. There is no clear separation...

Yes, but this shouldn't be an issue as long as there are no unknowns. This is a matter of documentation.

 

>We could (ab)use the "in" parameter for effect buffers to avoid this ambiguity..

Mapping dry and effect audio to output buffers is not easy to solve. I don't think that FluidSynth should impose its own strategy.

-1) For example, when the application wants to use only one stereo audio channel, it is normal that dry and effect audio are mixed into the same output buffers (as FluidSynth currently does).

-2) The opposite of example (1) arises when multiple audio channels are available and the application wants to decide whether some effect should be mixed with some dry audio in the same output buffer.

For now I need time to think about how to solve (2) cleanly using the "in" parameter (or some other way).

-Note) Also, I think that in the future, the current hard-coded internal "MIDI channel to output buffer" mapping should be replaced by an API.

 

jjc

 

 

 

 

> Message du 02/05/18 16:40
> De : "Tom M." <address@hidden>
> A : "Ceresa Jean-Jacques" <address@hidden>
> Copie à : "FluidSynth mailing list" <address@hidden>
> Objet : Re: [fluid-dev] API design: fluid_synth_process()
>
> > fluid_synth_process() behaves like fluid_synth_nwrite_float()
>
> Except for the fact that it doesn't handle fx channels.
>
> > Consequently nout and out array must be set to sufficient size (nout >= 2 x synth->audio_channels).
>
> Not necessarily. I think fluid_synth_process() should also be usable for simple stereo mixing, i.e. nout == 2. In fact I believe that no matter how many output buffers the user calls it with, fluid_synth_process should always render all playing voices to those buffers (provided that nout >= 2).
>
> The channel layout indeed can be used by FluidSynth's internal rendering engine. I'd suggest using / extending the current calling convention. As you already said, "out" currently contains an array of planar buffers for normal, dry, stereo audio (alternating left and right). Like:
>
> out[0] = left_buffer_channel_1
> out[1] = right_buffer_channel_1
> out[2] = left_buffer_channel_2
> out[3] = right_buffer_channel_2
> ...
> out[ i*2 + 0 ] = left_buffer_channel_i
> out[ i*2 + 1 ] = right_buffer_channel_i
>
> where 0 <= i < fluid_synth_count_audio_channels()
>
> Just following those "normal" audio buffers we could place the effects buffers like:
>
> out [ 2 * fluid_synth_count_audio_channels() + 0 ] = left_buffer_fxchannel_1 (currently hardcoded to reverb)
> out [ 2 * fluid_synth_count_audio_channels() + 1 ] = right_buffer_fxchannel_1 (currently reverb)
> out [ 2 * fluid_synth_count_audio_channels() + 2 ] = left_buffer_fxchannel_2 (currently chorus)
> out [ 2 * fluid_synth_count_audio_channels() + 3 ] = right_buffer_fxchannel_2 (currently chorus)

> out [ 2 * fluid_synth_count_audio_channels() + k*2 + 0 ] = left_buffer_fxchannel_k (if this will ever be added)
> out [ 2 * fluid_synth_count_audio_channels() + k*2 + 1 ] = right_buffer_fxchannel_k
>
> where 0 <= k < fluid_synth_count_effects_channels()
>
> If surround audio would ever be implemented, channel layout could look like:
>
> out[ i*5 + 0 ] = left_front_buffer_channel_i
> out[ i*5 + 1 ] = right_front_buffer_channel_i
> out[ i*5 + 2 ] = center_front_buffer_channel_i
> out[ i*5 + 3 ] = left_rear_buffer_channel_i
> out[ i*5 + 4 ] = right_rear_buffer_channel_i
>
> We could directly use this channel layout for rvoice_mixer internally, provided that the requested number of audio frames to synthesize is a multiple of fluid_synth_get_internal_bufsize() so we don't have to hold any temporary audio buffer.
>
> Disadvantage: Not very user friendly. There is no clear separation between dry and effects channels anymore, like done in fluid_synth_nwrite_float(). This will lead to ambiguous situations: if the user calls fluid_synth_process() with 4 output buffers, is he requesting the first two normal stereo audio channels, or one stereo channel and one effects channel? We could (ab)use the "in" parameter for effect buffers to avoid this ambiguity, unless someone sees a better use-case for the "in" param?
>
> Tom
>
>
