Re: [Gnash-dev] New Gstreamer based soundbackend commited

From: strk
Subject: Re: [Gnash-dev] New Gstreamer based soundbackend commited
Date: Mon, 24 Jul 2006 21:48:01 +0200

Good work Tom.
As you know, I have some problems with the dependency on the latest
release. Do you think it is possible at all to make the code
work against 0.8.9? If not, what has been introduced in later
versions that makes this impossible?



On Mon, Jul 24, 2006 at 04:11:14PM +0200, Tomas Groth wrote:
> Hi all,
> It has finally happened! Gnash can now use GStreamer to decode and play
> audio! (well, mostly anyway...)
> It's not enabled by default, it's a giant walking memory leak, only MP3 sounds
> work, and it still needs some work!
> So I hear you ask: "Is this what we've all been waiting for? Is GStreamer the
> solution to all our problems?" The short answer is: "Not yet", and the longer
> one is: Some parts of GStreamer are still in their early stages, including the
> adder (the element that enables it to play two sounds at the same time), which
> we use a lot! This means that certain things aren't working at the moment... More
> precisely, movies which start sounds in rapid succession will make
> GStreamer choke and die (but Gnash will not crash).
> As far as I know that's the only bug in the current GStreamer stable
> release which causes problems, but you will need GStreamer 0.10.8 (I think),
> since there seem to be other, more severe bugs in earlier versions.
> So what needs to be done? Here's a list:
>  * Make the configure system select GStreamer by default, and disable
>    SDL_mixer
>  * Figure out a nice way to free the GStreamer elements after use
>  * Improve the soundstream implementation (I will hopefully be able to do this
>    within a day or two)
>  * Make other sound formats work: ADPCM, raw (maybe Vorbis too?)
>  * Probably more...
> And now the GStreamer details:
> GStreamer works with pipelines, bins and elements. The pipeline is the main bin,
> in which all other bins and elements are placed. Visually the pipeline looks like
> this:
>  ___
> |Bin|_
> |___| \
>  ___   \ _____       ____________
> |Bin|___|Adder|_____|Audio output|
> |___|   |_____|     |____________|
>  ___   /
> |Bin|_/
> |___|
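
[Editor's note] The topology in the diagram above can be reproduced with a
stock gst-launch pipeline description. This is an illustrative sketch, not
Gnash's actual code: `audiotestsrc` stands in for Gnash's per-sound source
bins, and it assumes the adder and autodetect plugins are installed.

```shell
# Two independent source chains mixed by "adder", then played back
# (GStreamer 0.10 syntax; on modern systems use gst-launch-1.0 and "audiomixer").
gst-launch-0.10 \
    adder name=mix ! audioconvert ! autoaudiosink \
    audiotestsrc freq=440 ! audioconvert ! audioresample ! mix. \
    audiotestsrc freq=660 ! audioconvert ! audioresample ! mix.
```

Each source chain runs through audioconvert and audioresample before the
adder, for the reason given below: the adder requires all of its inputs to
share one format and sample rate.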
> There is one bin for each sound being played. If a sound is played
> more than once at the same time, multiple bins will be made. Each bin contains:
> |source|---|capsfilter|---|decoder|---|aconverter|---|aresampler|---|volume|
> In the source element we place parts of the undecoded sound data; when
> playing, the pipeline pulls the data from the element, and via callbacks it is
> refilled as needed.
> The capsfilter labels the data with its format.
> The decoder (surprise!) decodes the data.
> The audioconverter converts the now-raw sound data into a format accepted by
> the adder; all input to the adder must be in the same format.
> The audioresampler resamples the raw sound data to a sample rate accepted by
> the adder; all input to the adder must be at the same sample rate.
> The volume element makes it possible to control the volume of each sound.
> That's the basics, and I hope it helps :)
> cheers,
> Tomas
> _______________________________________________
> Gnash-dev mailing list
> address@hidden


 /"\    ASCII Ribbon Campaign
 \ /    Respect for low technology.
  X     Keep e-mail messages readable by any computer system.
 / \    Keep it ASCII. 
