On Sat, Nov 3, 2012 at 6:44 AM, David wrote:
Java) but relatively new to MIDI. I am researching a project
that requires both 1. real-time audio synthesis and 2. real-time
audio streaming from server to client, but I wanted to get
insight/direction before moving forward with FluidSynth. Very
simply, this is what I'm trying to build:
- Web app (i.e. runs in most
Mac + Windows desktop browsers) that plays MIDI files
with a high-quality grand piano SoundFont
- Real-time controls for
speed and pitch (along with typical controls for
volume, pause/play, etc.)... so there's no option to
pre-generate audio files since you can't anticipate
what pitch/key combination will be requested in the
middle of playing the song.
My assumption is that it is NOT a good idea to have the
softsynth running in the browser (computationally
intensive, large SoundFont download, installing a fat
client vs. a web app, etc.)
This leads me to believe that the softsynth should
be running in real time on the server,
generating audio that can be streamed to
the browser app, which would be very lightweight
since all it would need to do is play streaming audio.
Controls for speed + pitch would actually go back to
the server and, in real time, cause the softsynth to
generate the corresponding audio, which would be
streamed to the web client.
My questions are:
- Can FluidSynth be installed
on a server and generate real-time audio fast enough to
keep up with playback, i.e. given a typical server CPU and a
single piano instrument, is it reasonable to expect that
FluidSynth can generate audio faster than real time?
- Can the FluidSynth API be
accessed mid-song to change the pitch and velocity,
or does it have to start over from the beginning of the
song?
- Do you know of anyone who has
taken the audio output from FluidSynth and streamed it
to another client?
I greatly appreciate your taking the time to review these
questions and hopefully point me in the right direction. And
for those who are interested, I'm willing to pay for a
short-term development contract to help get this project
started.
I have also been interested in such an application of
FluidSynth for some time now, for use with the online
SoundFont instrument database project that I was working on.
In that case it would be used to preview SoundFonts, with an
interface for playing notes, etc. Seems like a similar
application to yours.
My own thoughts on the architecture of such a system are:
* Would be a server-based solution, with a server application
written in C.
* Server application would handle spawning FluidSynth
rendering threads using libFluidSynth to stream to users.
* Server would provide a FastCGI interface for control; the
web client would use AJAX calls against it to control its
FluidSynth instance.
* Server application would interface to a Shoutcast server to
stream encoded MP3 data.