gnash-dev

Re: [Gnash-dev] drawVideoFrame


From: Shachar Kaufman
Subject: Re: [Gnash-dev] drawVideoFrame
Date: Mon, 1 Sep 2008 11:40:23 +0300


> It sounds like you're still using an old version of Gnash. If that's the
> case, please update to current sources (in bzr) before spending too much
> time working on improvements, as the codebase changes very quickly.

I believe I'm using the latest version, 0.8.3.
 

> As far as I know, only encoded frames are queued now and decoded on
> demand.

For queued decoded frames see, for instance, NetStreamFfmpeg::decodeVideo and NetStreamFfmpeg::refreshVideoFrame.
 
> There is certainly a VideoDecoder object.

Indeed there is an object named VideoDecoder, but it is not used with streamed media. Why separate the two decoders? It is basically replicated code using the external codec API.

> Whatever approach we take, it will be necessary at some point to implement
> ActionScript colour transform of the video.

What I don't understand is why video playback has to be controlled by Gnash at frame granularity. It would be far better to treat the a/v decoder as a standalone entity with create, destroy, configure, play, pause, stop, query, etc. methods. Configure methods can connect this decoder entity with the NetStream/DiskStream on one side and with the renderer/GUI on the other. Of course, clip and transform changes must be propagated to the video decoder (or, more precisely, to a video compositor entity).

Frame-based control should only be used where it is actually required. I don't know the Flash spec that well, but if videos can be embedded in such a way that they need to be synced with the rest of the graphics to frame accuracy (rather than timestamp accuracy), the current style of sync is necessary there. In any case, a compositor entity could receive frames from the GUI, videos, etc. and do the sync according to the current scheme, scaling, filtering and format-converting as necessary for each input. This kind of design lends itself much better to acceleration (be it multicore CPU/GPU or dedicated codec HW); see the sketch below.
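To make the idea concrete, here is a minimal sketch of the interface such a decoder entity could expose. The names (StandaloneVideoDecoder, MediaSource, VideoSink, VideoTransform) are purely illustrative, not existing Gnash classes, and the details would of course have to follow the actual NetStream/DiskStream and renderer APIs:

// Purely illustrative sketch; none of these types exist in Gnash today.
class MediaSource;   // input side, e.g. an adapter over NetStream/DiskStream
class VideoSink;     // output side, e.g. an adapter over the renderer/GUI

struct VideoTransform
{
    // Clip rectangle, ActionScript colour transform, scaling, ...
};

class StandaloneVideoDecoder
{
public:
    virtual ~StandaloneVideoDecoder() {}

    // Wiring: connect the decoder to its input and its output.
    virtual void setSource(MediaSource* source) = 0;
    virtual void setSink(VideoSink* sink) = 0;

    // Playback control, independent of the SWF frame loop.
    virtual void play() = 0;
    virtual void pause() = 0;
    virtual void stop() = 0;

    // Propagate clip/transform changes (ultimately a compositor concern).
    virtual void configure(const VideoTransform& transform) = 0;

    // Query the current playback position, in seconds.
    virtual double position() const = 0;
};

A compositor entity would then own one such decoder per video stream and take care of sync, scaling, filtering and format conversion when mixing its inputs, which is where the acceleration opportunities (multicore CPU/GPU, codec HW) come in.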

