
Re: [fluid-dev] Real-time Controls + Audio Streaming

From: James L.
Subject: Re: [fluid-dev] Real-time Controls + Audio Streaming
Date: Sun, 4 Nov 2012 01:48:51 +0800

I don't believe you can generate audio in real time from FluidSynth and stream it to a browser directly. Perhaps a Java process could capture the sound output and stream it over something like VLC? Another idea: the Play! Framework with Iteratees might work for your case; it's written in Scala, but the framework supports Java too.

On Nov 3, 2012 1:44 PM, "David Pearah" <address@hidden> wrote:

I am an experienced web programmer (JavaScript, HTML, Flash, Java) but relatively new to MIDI. I am researching a project that requires both (1) real-time audio synthesis and (2) real-time audio streaming from server to client, and I wanted to get insight/direction before moving forward with FluidSynth. Very simply, this is what I'm trying to build:
    • Web app (i.e. runs in most Mac + Windows desktop browsers) that plays MIDI files with high-quality grand piano SoundFont
    • Real-time controls for speed and pitch (along with typical controls for volume, pause/play, etc.)... so there's no option to pre-generate audio files since you can't anticipate what pitch/key combination will be requested in the middle of playing the song.
    • My assumption is that it is NOT a good idea to have the softsynth running in the browser (computationally intense, large SoundFont download, install fat client vs. web app, etc.)
    • So this leads me to believe that the softsynth should run in real time on the server, generating audio that is streamed to the browser app; the client would be very lightweight, since all it needs to do is play streaming audio
    • The controls for speed + pitch would actually go back to the server, and in real-time cause the softsynth to generate the corresponding audio which would be streamed to the web client
So my questions are:
  1. Can FluidSynth be installed on a server and generate real-time audio fast enough to keep up with playback? That is, given a typical server CPU and a single piano instrument, is it reasonable to expect FluidSynth to generate audio faster than real time?
  2. Can the FluidSynth API be accessed in mid-song to change the pitch and velocity, or does it have to start over from the beginning of the song?
  3. Do you know of anyone who has taken the audio output from FluidSynth and streamed it to another client?

I greatly appreciate your taking the time to review these questions and hopefully point me in the right direction. And for those who are interested, I'm willing to pay for a short-term development contract to help get this project started.


-- Dave
