Re: [Gnash-dev] NetStream design (episode 1)
From: strk
Subject: Re: [Gnash-dev] NetStream design (episode 1)
Date: Fri, 16 May 2008 19:06:02 +0200
On Fri, May 16, 2008 at 06:48:41PM +0200, Bastiaan Jacques wrote:
>
>
> On Thu, 15 May 2008, strk wrote:
>
> >I've been trying to model the new NetStream architecture.
> >Doing so, questions arise very early in the process, so
> >here they are.
> >
> >Consider the following model:
> >
> >
> > +------------+
> > | input |
> > +------------+
> > | bytes |
> > +------------+
> > |
> > v
> > ( parser )
> > |
> > +----------+---------+
> > | |
> > v v
> > +----------------+ +----------------+ +------------+
> > | video_buffer | | audio_buffer | | playhead |
> > +----------------+ +----------------+ +------------+
> > | encoded_frames | | encoded_frames | | cur_time |
> > +----------------+ +----------------+ | play_state |
> > +------------+
> >
> >In this model, the (parser) process would run in its own thread and
> >fully fill the video/audio buffers.
>
> I like the idea of having a lot of things in one thread: downloading,
> parsing, and decoding.
One problem with this might be memory use.
Buffer length (in seconds) can be specified by the SWF author.
Keeping hundreds of *decoded* frames in the buffer would take far more
memory than keeping the same number of *encoded* frames.
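Back-of-envelope, just to illustrate: a decoded 640x480 frame in
YUV 4:2:0 takes 640*480*1.5 = ~450 KB, so a 10-second buffer at 24 fps
(240 frames) is roughly 110 MB of raw video, while the same 240 frames
still encoded at, say, 500 kbit/s come to well under 1 MB. The exact
figures don't matter; the two orders of magnitude do.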
> In theory that should be enough, so we'd only
> have two threads (including "main").
We could still have only two threads if the main thread decoded
on demand. Consider also frame dropping or seeking forward: in those
cases we wouldn't necessarily need to decode the intermediate frames,
only those from the last keyframe up to the current one.
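A minimal sketch of that decode-on-demand idea, assuming the buffer
holds timestamped encoded frames with a keyframe flag (all type and
function names below are made up for illustration, not actual Gnash
code):

  #include <vector>

  struct EncodedFrame {
      std::vector<unsigned char> data;
      double timestamp;    // presentation time, in seconds
      bool isKeyframe;
  };

  struct DecodedFrame { /* raw pixels would go here */ };

  struct VideoDecoder {
      // Stub: a real implementation would hand the frame to the codec.
      DecodedFrame decode(const EncodedFrame&) { return DecodedFrame(); }
  };

  // Decode just enough to display frame `target' after a seek or a
  // dropped-frame catch-up: walk back to the nearest keyframe, then
  // decode forward, keeping only the last result. Assumes `target'
  // is a valid index into `buf'.
  DecodedFrame decodeForSeek(const std::vector<EncodedFrame>& buf,
                             size_t target, VideoDecoder& dec)
  {
      size_t key = target;
      while (key > 0 && !buf[key].isKeyframe) --key;

      DecodedFrame out;
      for (size_t i = key; i <= target; ++i)
          out = dec.decode(buf[i]);   // intermediate results discarded
      return out;
  }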
> AFAIK, most container formats have some timekeeping mechanism. Usually
> you'll get some extra information like duration through metadata. So by
> the time you've parsed the first kilobyte of the stream you'll usually
> know the size of the stream in bytes, the duration in seconds, the total
> number of frames and perhaps the bitrates of audio and video,
> respectively.
What I was particularly interested in was attaching timestamps
to encoded frames, the idea being that we'd store encoded frames
in the buffer.
I.e., the ability to have our Buffer class expose services like:
  getMostRecentEncodedFrameForTime(timestamp)
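For the record, a rough sketch of how such a Buffer could be laid out,
assuming frames arrive from the parser in increasing timestamp order;
apart from getMostRecentEncodedFrameForTime() every name here is
hypothetical, and locking between the parser and main threads is
omitted:

  #include <algorithm>
  #include <deque>
  #include <vector>

  struct EncodedFrame {
      std::vector<unsigned char> data;
      double timestamp;    // presentation time, in seconds
      bool isKeyframe;
  };

  class Buffer {
  public:
      // Called by the parser thread; frames must be pushed in
      // increasing timestamp order for the lookup below to work.
      void push(const EncodedFrame& f) { _frames.push_back(f); }

      // Latest frame whose timestamp is <= t, or 0 if none buffered.
      const EncodedFrame* getMostRecentEncodedFrameForTime(double t) const
      {
          // upper_bound finds the first frame *after* t, so the one
          // just before it is the most recent frame at or before t.
          std::deque<EncodedFrame>::const_iterator it =
              std::upper_bound(_frames.begin(), _frames.end(), t, after);
          if (it == _frames.begin()) return 0;
          --it;
          return &*it;
      }

  private:
      static bool after(double t, const EncodedFrame& f)
      { return t < f.timestamp; }

      std::deque<EncodedFrame> _frames;
  };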
--strk;