In any case, the current users expect to find timestamps in the
queue, so your best bet would be to demux and decode in one
step and store the result in the queues, then have the decoder
do nothing (except a memory copy).
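If I understand the suggestion, the flow would look roughly like the sketch below. This is a minimal illustration with hypothetical names (`DecodedFrame`, `parseAndDecodeNext`, `fetchDecodedFrame`); gnash's real MediaParser/decoder interfaces differ:

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical frame type: the decoded image keeps its presentation
// timestamp, so whatever reads the queue still finds timestamps there.
struct DecodedFrame {
    std::uint64_t timestampMs;       // presentation timestamp
    std::vector<std::uint8_t> image; // e.g. planar YCbCr 4:2:0 pixels
};

// Demux-and-decode in one step: take an already-decoded packet's pixels
// and its timestamp, and push the *decoded* frame into the queue.
void parseAndDecodeNext(std::queue<DecodedFrame>& decodedQueue,
                        std::uint64_t pts,
                        const std::vector<std::uint8_t>& decodedPixels)
{
    decodedQueue.push(DecodedFrame{pts, decodedPixels});
}

// The "decoder" stage then does nothing but a memory copy out of the queue.
DecodedFrame fetchDecodedFrame(std::queue<DecodedFrame>& decodedQueue)
{
    DecodedFrame f = decodedQueue.front(); // plain copy
    decodedQueue.pop();
    return f;
}
```

That is, all the real work happens at parse time, and the consumer side becomes trivial.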
1. When you say the current users expect timestamps, do you mean in the encoded-frames queue, the decoded-frames queue, or both? And why should encoded frames need a timestamp anyway?
I already asked this while working with the previous release, but let me give it another try, since you guys are so good at helping me understand how gnash works:
2. Much more painful than passing encoded frames (elementary bitstream data, i.e. minor bandwidth) through a software layer is passing decoded frames (usually YCbCr 4:2:0 images at high resolutions and frame rates, i.e. huge bandwidth) through software. To make something like this run smoothly on a low-end device (rather than a desktop-grade computer), I have to avoid that bottleneck and pass decoded frames to a hardware compositor, which also takes gnash's GUI as another input. That hardware of course has to do its own synchronization (timestamp-based or otherwise) between the composited input sources.

And... you see, I want to use gnash on my (slow) embedded CPU, and I want to play the highest-quality media the web can give me. I don't have the time to modify gnash more than superficially, which is what I've been doing so far. This software front is a bottomless pit, and I'm just one guy.
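To put some numbers on the bandwidth gap: 8-bit YCbCr 4:2:0 costs 1.5 bytes per pixel, so the raw rate is just width x height x 1.5 x fps. A quick sketch (the 2 Mbit/s encoded figure is only an illustrative assumption, not a measured value):

```cpp
// Bytes per second for a raw 8-bit YCbCr 4:2:0 stream:
// full-resolution Y plane plus quarter-resolution Cb and Cr
// planes = 1.5 bytes per pixel.
double rawBandwidthBytesPerSec(int width, int height, double fps)
{
    return width * height * 1.5 * fps;
}
```

At 1280x720 and 30 fps that is 1280 x 720 x 1.5 x 30 = 41,472,000 bytes/s, roughly 41.5 MB/s of decoded pixels to move through software, versus about 0.25 MB/s for a hypothetical 2 Mbit/s encoded elementary stream: well over a hundred times the data.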