gnash-commit

From: Tomas Groth
Subject: [Gnash-commit] gnash ChangeLog backend/sound_handler_sdl.cpp l...
Date: Wed, 23 May 2007 07:42:17 +0000

CVSROOT:        /sources/gnash
Module name:    gnash
Changes by:     Tomas Groth <tgc>       07/05/23 07:42:16

Modified files:
        .              : ChangeLog 
        backend        : sound_handler_sdl.cpp 
        libbase        : FLVParser.cpp FLVParser.h curl_adapter.cpp 
        server/asobj   : NetStreamFfmpeg.cpp NetStreamFfmpeg.h 

Log message:
                * backend/sound_handler_sdl.cpp: Made it a bit more thread-safe by
                  locking when attaching/detaching an aux_streamer.
                * libbase/FLVParser.{h,cpp}: Added audioFrameDelay(), and a few fixes.
                * libbase/curl_adapter.cpp: Changed the min and max sleep times.
                * server/asobj/NetStreamFfmpeg.{h,cpp}: Cleanup! Split the big
                  decode function (read_frame) into 3 functions. Changed a few names
                  to make more sense. Improved syncing, and made it depend on audio
                  if available, not video. Plus some overall improvements.

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/gnash/ChangeLog?cvsroot=gnash&r1=1.3313&r2=1.3314
http://cvs.savannah.gnu.org/viewcvs/gnash/backend/sound_handler_sdl.cpp?cvsroot=gnash&r1=1.62&r2=1.63
http://cvs.savannah.gnu.org/viewcvs/gnash/libbase/FLVParser.cpp?cvsroot=gnash&r1=1.12&r2=1.13
http://cvs.savannah.gnu.org/viewcvs/gnash/libbase/FLVParser.h?cvsroot=gnash&r1=1.8&r2=1.9
http://cvs.savannah.gnu.org/viewcvs/gnash/libbase/curl_adapter.cpp?cvsroot=gnash&r1=1.33&r2=1.34
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.cpp?cvsroot=gnash&r1=1.54&r2=1.55
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.h?cvsroot=gnash&r1=1.28&r2=1.29

Patches:
Index: ChangeLog
===================================================================
RCS file: /sources/gnash/gnash/ChangeLog,v
retrieving revision 1.3313
retrieving revision 1.3314
diff -u -b -r1.3313 -r1.3314
--- ChangeLog   23 May 2007 06:54:42 -0000      1.3313
+++ ChangeLog   23 May 2007 07:41:45 -0000      1.3314
@@ -1,9 +1,18 @@
+2007-05-23 Tomas Groth Christensen <address@hidden>
+
+       * backend/sound_handler_sdl.cpp: Made it a bit more thread-safe by
+         locking when attaching/detaching an aux_streamer.
+       * libbase/FLVParser.{h,cpp}: Added audioFrameDelay(), and a few fixes.
+       * libbase/curl_adapter.cpp: Changed the min and max sleep times.
+       * server/asobj/NetStreamFfmpeg.{h,cpp}: Cleanup! Split the big
+         decode function (read_frame) into 3 functions. Changed a few names
+         to make more sense. Improved syncing, and made it depend on audio
+         if available, not video. Plus some overall improvements.
+
 2007-05-23 Zou Lunkai <address@hidden>
 
-       * testsuite/misc-ming.all: shape_test.c
-               add tests for shpaes.
-       * testsuite/misc-ming.all/Makefile.am
-               activate some testcases.
+       * testsuite/misc-ming.all/shape_test.c: add tests for shapes.
+       * testsuite/misc-ming.all/Makefile.am: activate some testcases.
                
 2007-05-22 Martin Guy <address@hidden>
 

Index: backend/sound_handler_sdl.cpp
===================================================================
RCS file: /sources/gnash/gnash/backend/sound_handler_sdl.cpp,v
retrieving revision 1.62
retrieving revision 1.63
diff -u -b -r1.62 -r1.63
--- backend/sound_handler_sdl.cpp       21 May 2007 16:23:42 -0000      1.62
+++ backend/sound_handler_sdl.cpp       23 May 2007 07:41:46 -0000      1.63
@@ -18,7 +18,7 @@
 // Based on sound_handler_sdl.cpp by Thatcher Ulrich http://tulrich.com 2003
 // which has been donated to the Public Domain.
 
-// $Id: sound_handler_sdl.cpp,v 1.62 2007/05/21 16:23:42 tgc Exp $
+// $Id: sound_handler_sdl.cpp,v 1.63 2007/05/23 07:41:46 tgc Exp $
 
 #ifdef HAVE_CONFIG_H
 #include "config.h"
@@ -495,6 +495,7 @@
 
 void   SDL_sound_handler::attach_aux_streamer(aux_streamer_ptr ptr, void* owner)
 {
+       mutex::scoped_lock lock(_mutex);
        assert(owner);
        assert(ptr);
 
@@ -521,6 +522,7 @@
 
 void   SDL_sound_handler::detach_aux_streamer(void* owner)
 {
+       mutex::scoped_lock lock(_mutex);
        aux_streamer_ptr p;     
        if (m_aux_streamer.get(owner, &p))
        {
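The change above boils down to taking a scoped lock at the top of both attach_aux_streamer() and detach_aux_streamer(), so the aux-streamer registry cannot be mutated concurrently with the audio thread. A minimal sketch of the pattern, with std::mutex and a hypothetical StreamerRegistry class standing in for the handler's boost mutex and hash table (none of these names are Gnash's):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <mutex>

// Hypothetical stand-in for SDL_sound_handler's aux-streamer table.
// A scoped lock at the top of attach/detach makes each call atomic with
// respect to any other thread touching the map.
using aux_streamer_ptr = bool (*)(void* owner, void* stream, int len);

class StreamerRegistry {
public:
    void attach(void* owner, aux_streamer_ptr cb) {
        std::lock_guard<std::mutex> lock(_mutex); // released on scope exit
        _streamers[owner] = cb;
    }
    void detach(void* owner) {
        std::lock_guard<std::mutex> lock(_mutex);
        _streamers.erase(owner);
    }
    std::size_t size() {
        std::lock_guard<std::mutex> lock(_mutex);
        return _streamers.size();
    }
private:
    std::mutex _mutex;
    std::map<void*, aux_streamer_ptr> _streamers;
};
```

Because the lock is released automatically when the guard goes out of scope, every early-return path stays safe without explicit unlock calls, which is exactly what `mutex::scoped_lock` buys the real handler.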

Index: libbase/FLVParser.cpp
===================================================================
RCS file: /sources/gnash/gnash/libbase/FLVParser.cpp,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -b -r1.12 -r1.13
--- libbase/FLVParser.cpp       16 May 2007 18:22:31 -0000      1.12
+++ libbase/FLVParser.cpp       23 May 2007 07:42:16 -0000      1.13
@@ -17,7 +17,7 @@
 // Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 //
 
-// $Id: FLVParser.cpp,v 1.12 2007/05/16 18:22:31 tgc Exp $
+// $Id: FLVParser.cpp,v 1.13 2007/05/23 07:42:16 tgc Exp $
 
 #include "FLVParser.h"
 #include "amf.h"
@@ -96,6 +96,24 @@
        return _videoFrames[_nextVideoFrame-1]->timestamp - _videoFrames[_nextVideoFrame-2]->timestamp;
 }
 
+uint32_t FLVParser::audioFrameDelay()
+{
+       boost::mutex::scoped_lock lock(_mutex);
+
+       // If there is no audio in this FLV return 0
+       if (!_audio && _lastParsedPosition > 0) return 0;
+
+       // Make sure that some frames have been parsed
+       while(_audioFrames.size() < 2 && !_parsingComplete) {
+               parseNextFrame();
+       }
+
+       // If there is no audio data return 0
+       if (_audioFrames.size() == 0 || !_audio || _nextAudioFrame < 2) return 0;
+
+       return _audioFrames[_nextAudioFrame-1]->timestamp - _audioFrames[_nextAudioFrame-2]->timestamp;
+}
+
 FLVFrame* FLVParser::nextMediaFrame()
 {
        boost::mutex::scoped_lock lock(_mutex);
@@ -183,6 +201,7 @@
        FLVFrame* frame = new FLVFrame;
        frame->dataSize = _audioFrames[_nextAudioFrame]->dataSize;
        frame->timestamp = _audioFrames[_nextAudioFrame]->timestamp;
+       frame->tag = 8;
 
        _lt->seek(_audioFrames[_nextAudioFrame]->dataPosition);
        frame->data = new uint8_t[_audioFrames[_nextAudioFrame]->dataSize];
@@ -220,6 +239,7 @@
        FLVFrame* frame = new FLVFrame;
        frame->dataSize = _videoFrames[_nextVideoFrame]->dataSize;
        frame->timestamp = _videoFrames[_nextVideoFrame]->timestamp;
+       frame->tag = 9;
 
        _lt->seek(_videoFrames[_nextVideoFrame]->dataPosition);
        frame->data = new uint8_t[_videoFrames[_nextVideoFrame]->dataSize];
@@ -437,8 +457,12 @@
        boost::mutex::scoped_lock lock(_mutex);
 
        // Parse frames until the needed time is found, or EOF
-       while (!_parsingComplete && _videoFrames.size() > 0 && _videoFrames.back()->timestamp < time && _audioFrames.size() > 0 && _audioFrames.back()->timestamp < time) {
+       while (!_parsingComplete) {
                if (!parseNextFrame()) break;
+               if ((_videoFrames.size() > 0 && _videoFrames.back()->timestamp >= time)
+                       || (_audioFrames.size() > 0 && _audioFrames.back()->timestamp >= time)) {
+                       return true;
+               }
        }
 
        if (_videoFrames.size() > 0 && _videoFrames.back()->timestamp >= time) {
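The new audioFrameDelay() mirrors the existing videoFrameDelay(): it returns the gap, in milliseconds, between the two most recently consumed frame timestamps, falling back to 0 when fewer than two frames have been handed out. A condensed sketch of just that computation (frameDelayMs is a hypothetical helper, not part of the FLVParser API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the delay computation shared by audioFrameDelay()
// and videoFrameDelay(): the difference between the two most recently
// consumed frame timestamps. Returns 0 when fewer than two frames have
// been handed out (nextFrame < 2), mirroring the guard in the patch.
uint32_t frameDelayMs(const std::vector<uint32_t>& timestamps, std::size_t nextFrame)
{
    if (timestamps.size() < 2 || nextFrame < 2) return 0;
    return timestamps[nextFrame - 1] - timestamps[nextFrame - 2];
}
```

NetStreamFfmpeg uses this delay to advance its audio clock by one frame interval when a packet carries no usable pts of its own.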

Index: libbase/FLVParser.h
===================================================================
RCS file: /sources/gnash/gnash/libbase/FLVParser.h,v
retrieving revision 1.8
retrieving revision 1.9
diff -u -b -r1.8 -r1.9
--- libbase/FLVParser.h 16 May 2007 18:22:32 -0000      1.8
+++ libbase/FLVParser.h 23 May 2007 07:42:16 -0000      1.9
@@ -17,7 +17,7 @@
 // Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 //
 
-// $Id: FLVParser.h,v 1.8 2007/05/16 18:22:32 tgc Exp $
+// $Id: FLVParser.h,v 1.9 2007/05/23 07:42:16 tgc Exp $
 
 // Information about the FLV format can be found at http://osflash.org/flv
 
@@ -196,6 +196,10 @@
        uint32_t seek(uint32_t);
 
        /// Returns the framedelay from the last to the current
+       /// audioframe in milliseconds. This is used for framerate.
+       uint32_t audioFrameDelay();
+
+       /// Returns the framedelay from the last to the current
        /// videoframe in milliseconds. This is used for framerate.
        uint32_t videoFrameDelay();
 

Index: libbase/curl_adapter.cpp
===================================================================
RCS file: /sources/gnash/gnash/libbase/curl_adapter.cpp,v
retrieving revision 1.33
retrieving revision 1.34
diff -u -b -r1.33 -r1.34
--- libbase/curl_adapter.cpp    14 May 2007 14:18:26 -0000      1.33
+++ libbase/curl_adapter.cpp    23 May 2007 07:42:16 -0000      1.34
@@ -17,7 +17,7 @@
 // Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 //
 
-/* $Id: curl_adapter.cpp,v 1.33 2007/05/14 14:18:26 rsavoye Exp $ */
+/* $Id: curl_adapter.cpp,v 1.34 2007/05/23 07:42:16 tgc Exp $ */
 
 #ifdef HAVE_CONFIG_H
 #include "config.h"
@@ -273,7 +273,7 @@
 #endif
 
 // Disable this when you're convinced the sleeping mechanism is satisfactory
-#define VERBOSE_POLLING_LOOP 1
+//#define VERBOSE_POLLING_LOOP 1
 
 #if VERBOSE_POLLING_LOOP
        long unsigned fetchRequested = size-_cached;
@@ -283,8 +283,8 @@
        // to nap between curl_multi_perform calls if the amount
        // of data requested hasn't arrived yet.
        // 
-       const long unsigned minSleep =  500000; // half second
-       const long unsigned maxSleep = 1000000; // one second
+       const long unsigned minSleep =  100000; // 1/10 second
+       const long unsigned maxSleep =  300000; // 3/10 second
 
        CURLMcode mcode;
 #if VERBOSE_POLLING_LOOP
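The commit only tightens the two nap bounds (from 0.5s-1s down to 0.1s-0.3s), but the usual shape of such a polling loop is a nap that grows while no new data arrives and is clamped between the two constants. The sketch below illustrates that clamped-growth idea under those assumptions; it is not the actual curl_adapter logic, and nextNap is a hypothetical name:

```cpp
#include <algorithm>

// Hypothetical sketch of a clamped polling nap: reset to the minimum when
// data is flowing, otherwise back off geometrically but never exceed the
// maximum. The commit tightens exactly these two bounds.
const unsigned long minSleep = 100000; // microseconds, 1/10 second
const unsigned long maxSleep = 300000; // microseconds, 3/10 second

unsigned long nextNap(unsigned long current, bool gotData)
{
    if (gotData) return minSleep;            // data flowing: poll quickly
    return std::min(current * 2, maxSleep);  // back off, but stay responsive
}
```

Lower bounds make the adapter react faster once bytes do arrive, at the cost of a few extra curl_multi_perform calls while waiting.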

Index: server/asobj/NetStreamFfmpeg.cpp
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.cpp,v
retrieving revision 1.54
retrieving revision 1.55
diff -u -b -r1.54 -r1.55
--- server/asobj/NetStreamFfmpeg.cpp    21 May 2007 16:23:42 -0000      1.54
+++ server/asobj/NetStreamFfmpeg.cpp    23 May 2007 07:42:16 -0000      1.55
@@ -17,7 +17,7 @@
 // Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 //
 
-/* $Id: NetStreamFfmpeg.cpp,v 1.54 2007/05/21 16:23:42 tgc Exp $ */
+/* $Id: NetStreamFfmpeg.cpp,v 1.55 2007/05/23 07:42:16 tgc Exp $ */
 
 #ifdef HAVE_CONFIG_H
 #include "config.h"
@@ -63,7 +63,9 @@
 
        _decodeThread(NULL),
 
-       m_video_clock(0),
+       m_last_video_timestamp(0),
+       m_last_audio_timestamp(0),
+       m_current_timestamp(0),
        m_unqueued_data(NULL),
        m_time_of_pause(0)
 {
@@ -83,15 +85,11 @@
        {
                if (m_pause) unpauseDecoding();
                else pauseDecoding();
-               
-//             m_pause = ! m_pause;
        }
        else
        {
                if (mode == 0) pauseDecoding();
                else unpauseDecoding();
-
-//             m_pause = (mode == 0) ? true : false;
        }
        if (!m_pause && !m_go) { 
                setStatus(playStart);
@@ -209,7 +207,7 @@
        // Is it already playing ?
        if (m_go)
        {
-               if (m_pause) unpauseDecoding(); //m_pause = false;
+               if (m_pause) unpauseDecoding();
                return 0;
        }
 
@@ -229,7 +227,7 @@
        }
 
        m_go = true;
-       unpauseDecoding();//m_pause = true;
+       pauseDecoding();
 
        // This starts the decoding thread
        _decodeThread = new boost::thread(boost::bind(NetStreamFfmpeg::av_streamer, this)); 
@@ -361,10 +359,12 @@
        return av_probe_input_format(&probe_data, 1);
 }
 
-void
-NetStreamFfmpeg::startPlayback(NetStreamFfmpeg* ns)
+bool
+NetStreamFfmpeg::startPlayback()
 {
 
+       NetStreamFfmpeg* ns = this; // Remove this and all "ns->" in this function
+
        boost::intrusive_ptr<NetConnection> nc = ns->_netCon;
        assert(nc);
 
@@ -373,7 +373,7 @@
        if ( !nc->openConnection(ns->url) ) {
                log_error(_("Gnash could not open movie: %s"), ns->url.c_str());
                ns->setStatus(streamNotFound);
-               return;
+               return false;
        }
 
        ns->inputPos = 0;
@@ -382,8 +382,9 @@
        char head[4] = {0, 0, 0, 0};
        if (nc->read(head, 3) < 3) {
                ns->setStatus(streamNotFound);
-               return;
+               return false;
        }
+
        nc->seek(0);
        if (std::string(head) == "FLV") {
                ns->m_isFLV = true;
@@ -392,7 +393,7 @@
                        ns->setStatus(streamNotFound);
                        log_error(_("Gnash could not open FLV movie: %s"), ns->url.c_str());
                        delete ns->m_parser;
-                       return;
+                       return false;
                }
 
                // Init the avdecoder-decoder
@@ -402,13 +403,13 @@
                ns->m_VCodecCtx = initFlvVideo(ns->m_parser);
                if (!ns->m_VCodecCtx) {
                        log_msg(_("Failed to initialize FLV video codec"));
-                       return;
+                       return false;
                }
 
                ns->m_ACodecCtx = initFlvAudio(ns->m_parser);
                if (!ns->m_ACodecCtx) {
                        log_msg(_("Failed to initialize FLV audio codec"));
-                       return;
+                       return false;
                }
 
                // We just define the indexes here, they're not really used when
@@ -423,11 +424,9 @@
 
                // Allocate a frame to store the decoded frame in
                ns->m_Frame = avcodec_alloc_frame();
-
-               return;
+               return true;
        }
 
-
        // This registers all available file formats and codecs 
        // with the library so they will be used automatically when
        // a file with the corresponding format/codec is opened
@@ -437,7 +436,7 @@
        AVInputFormat* inputFmt = probeStream(ns);
        if (!inputFmt) {
                log_error(_("Couldn't determine stream input format from URL %s"), ns->url.c_str());
-               return;
+               return false;
        }
 
        // After the format probe, reset to the beginning of the file.
@@ -454,7 +453,7 @@
        if(av_open_input_stream(&ns->m_FormatCtx, &ns->ByteIOCxt, "", inputFmt, NULL) < 0){
                log_error(_("Couldn't open file '%s' for decoding"), ns->url.c_str());
                ns->setStatus(streamNotFound);
-               return;
+               return false;
        }
 
        // Next, we need to retrieve information about the streams contained in the file
@@ -463,7 +462,7 @@
        if (ret < 0)
        {
                log_error(_("Couldn't find stream information from '%s', error code: %d"), ns->url.c_str(), ret);
-               return;
+               return false;
        }
 
 //     m_FormatCtx->pb.eof_reached = 0;
@@ -502,7 +501,7 @@
        if (ns->m_video_index < 0)
        {
                log_error(_("Didn't find a video stream from '%s'"), ns->url.c_str());
-               return;
+               return false;
        }
 
        // Get a pointer to the codec context for the video stream
@@ -515,7 +514,7 @@
                ns->m_VCodecCtx = NULL;
                log_error(_("Video decoder %d not found"), 
                        ns->m_VCodecCtx->codec_id);
-               return;
+               return false;
        }
 
        // Open codec
@@ -547,7 +546,7 @@
                {
                        log_error(_("No available audio decoder %d to process MPEG file: '%s'"), 
                                ns->m_ACodecCtx->codec_id, ns->url.c_str());
-                       return;
+                       return false;
                }
         
                // Open codec
@@ -555,25 +554,26 @@
                {
                        log_error(_("Could not open audio codec %d for %s"),
                                ns->m_ACodecCtx->codec_id, ns->url.c_str());
-                       return;
+                       return false;
                }
 
                s->attach_aux_streamer(audio_streamer, (void*) ns);
 
        }
 
-       ns->unpauseDecoding(); //ns->m_pause = false;
+       ns->unpauseDecoding();
+       return true;
 }
 
 
-/// Copy RGB data from a source raw_videodata_t to a destination image::rgb.
+/// Copy RGB data from a source raw_mediadata_t to a destination image::rgb.
 /// @param dst the destination image::rgb, which must already be initialized
 ///            with a buffer of size of at least src.m_size.
-/// @param src the source raw_videodata_t to copy data from. The m_size member
+/// @param src the source raw_mediadata_t to copy data from. The m_size member
 ///            of this structure must be initialized.
 /// @param width the width, in bytes, of a row of video data.
 static void
-rgbcopy(image::rgb* dst, raw_videodata_t* src, int width)
+rgbcopy(image::rgb* dst, raw_mediadata_t* src, int width)
 {
        assert(src->m_size <= static_cast<uint32_t>(dst->m_width * dst->m_height * 3));
 
@@ -594,7 +594,7 @@
 {
 
        if (!ns->m_parser && !ns->m_FormatCtx) {
-               startPlayback(ns);
+               if (!ns->startPlayback()) return;
        } else {
                // We need to restart the audio
                sound_handler* s = get_sound_handler();
@@ -608,7 +608,9 @@
 
        ns->setStatus(playStart);
 
-       ns->m_video_clock = 0;
+       ns->m_last_video_timestamp = 0;
+       ns->m_last_audio_timestamp = 0;
+       ns->m_current_timestamp = 0;
 
        ns->m_start_clock = tu_timer::ticks_to_seconds(tu_timer::get_ticks());
 
@@ -619,22 +621,35 @@
        // Loop while we're playing
        while (ns->m_go)
        {
+               if (ns->m_isFLV) {
+                       // If the queues are full then don't bother filling them
+                       if (ns->m_qvideo.size() < 20 || ns->m_qaudio.size() < 20) {
+
                // If we have problems with decoding - break
-               if (ns->read_frame() == false && ns->m_start_onbuffer == false && ns->m_qvideo.size() == 0)
+                               if (!ns->decodeFLVFrame() && ns->m_start_onbuffer == false && ns->m_qvideo.size() == 0 && ns->m_qaudio.size() == 0) break;
+                       }
+
+                       if (ns->m_pause || (ns->m_qvideo.size() > 10 && ns->m_qaudio.size() > 10)) { 
+                               ns->decode_wait.wait(lock);
+                       }
+               } else {
+
+                       // If we have problems with decoding - break
+                       if (ns->decodeMediaFrame() == false && ns->m_start_onbuffer == false && ns->m_qvideo.size() == 0 && ns->m_qaudio.size() == 0)
                {
                        break;
                }
 
                // If paused, wait for being unpaused, or
                // if the queue is full we wait until someone notifies us that data is needed.
-               if (ns->m_pause || (ns->m_qvideo.size() > 0 && ns->m_unqueued_data)) { 
+                       if (ns->m_pause || ((ns->m_qvideo.size() > 0 && ns->m_qaudio.size() > 0) && ns->m_unqueued_data)) { 
                        ns->decode_wait.wait(lock);
                }
+               }
 
        }
        ns->m_go = false;
        ns->setStatus(playStop);
-
 }
 
 // audio callback is running in sound handler thread
@@ -648,41 +663,14 @@
 
        while (len > 0 && ns->m_qaudio.size() > 0)
        {
-               raw_videodata_t* samples = NULL; // = ns->m_qaudio.front();
-
-               // Find the best audioframe
-               while(1) {
-                       samples = ns->m_qaudio.front();
-
-                       // If the queue is empty, we tell the decoding thread to wake up,
-                       // and decode some more.
-                       if (!samples) {
-                               ns->decode_wait.notify_one();
-                               return true;
-                       }
+               raw_mediadata_t* samples = ns->m_qaudio.front();
 
-                       if (ns->m_qaudio.size() < 10) {
+               // If fewer than 3 frames are in the queue, notify the decoding thread
+               // so that we don't suddenly run out.
+               if (ns->m_qaudio.size() < 3) {
                                ns->decode_wait.notify_one();
                        }
 
-                       // Caclulate the current time
-                       double current_clock = (tu_timer::ticks_to_seconds(tu_timer::get_ticks()) - ns->m_start_clock)*1000;
-                       double audio_clock = samples->m_pts;
-
-                       // If the timestamp on the videoframe is smaller than the
-                       // current time, we put it in the output image.
-                       if (current_clock >= audio_clock)
-                       {
-                               break;
-                       } else {
-                               ns->m_qaudio.pop();
-                               delete samples;
-                               samples = NULL;
-                       }
-               }
-
-
-               if (samples) {
                        int n = imin(samples->m_size, len);
                        memcpy(stream, samples->m_ptr, n);
                        stream += n;
@@ -690,91 +678,70 @@
                        samples->m_size -= n;
                        len -= n;
 
+               ns->m_current_timestamp = samples->m_pts;
+
                        if (samples->m_size == 0)
                        {
                                ns->m_qaudio.pop();
                                delete samples;
                        }
-               }
+
        }
        return true;
 }
 
-bool NetStreamFfmpeg::read_frame()
+bool NetStreamFfmpeg::decodeFLVFrame()
 {
-       boost::mutex::scoped_lock  lock(decoding_mutex);
+       AVPacket packet;
 
-//     raw_videodata_t* ret = NULL;
-       if (m_unqueued_data)
-       {
-               if (m_unqueued_data->m_stream_index == m_audio_index)
-               {
-                       sound_handler* s = get_sound_handler();
-                       if (s)
-                       {
-                       m_unqueued_data = m_qaudio.push(m_unqueued_data) ? NULL : m_unqueued_data;
-                       }
-               }
-               else
-               if (m_unqueued_data->m_stream_index == m_video_index)
-               {
-                       m_unqueued_data = m_qvideo.push(m_unqueued_data) ? NULL : m_unqueued_data;
-               }
-               else
-               {
-                       log_error(_("read_frame: not audio & video stream"));
-               }
-               return true;
+       FLVFrame* frame;
+       if (m_qvideo.size() < m_qaudio.size()) {
+               frame = m_parser->nextVideoFrame();
+       } else {
+               frame = m_parser->nextAudioFrame();
        }
 
-       AVPacket packet;
-       int rc;
-       if (m_isFLV) {
-               FLVFrame* frame = m_parser->nextMediaFrame();
-
                if (frame == NULL) {
                        if (_netCon->loadCompleted()) {
                                // Stop!
                                m_go = false;
                        } else {
-                               // We pause and load and buffer a second before continuing.
-                               pauseDecoding(); m_pause = true;
-                               m_bufferTime = static_cast<uint32_t>(m_video_clock) * 1000 + 1000;
+                       pauseDecoding();
+                       m_bufferTime = static_cast<uint32_t>(m_current_timestamp) * 1000 + 1000;
                                setStatus(bufferEmpty);
                                m_start_onbuffer = true;
                        }
                        return false;
                }
                
-               if (frame->tag == 9) {
-                       packet.stream_index = 0;
-               } else {
-                       packet.stream_index = 1;
-               }
                packet.destruct = avpacket_destruct;
                packet.size = frame->dataSize;
                packet.data = frame->data;
                // FIXME: is this the right value for packet.dts?
                packet.pts = packet.dts = static_cast<int64_t>(frame->timestamp);
-               rc = 0;
 
+       if (frame->tag == 9) {
+               packet.stream_index = 0;
+               return decodeVideo(&packet);
        } else {
-               rc = av_read_frame(m_FormatCtx, &packet);
+               packet.stream_index = 1;
+               return decodeAudio(&packet);
        }
 
-       if (rc >= 0)
-       {
-               if (packet.stream_index == m_audio_index && get_sound_handler())
-               {
+}
+
+bool NetStreamFfmpeg::decodeAudio(AVPacket* packet)
+{
                        int frame_size;
                        unsigned int bufsize = (AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2;
 
                        uint8_t* ptr = new uint8_t[bufsize];
 #ifdef FFMPEG_AUDIO2
                        frame_size = bufsize;
-                       if (avcodec_decode_audio2(m_ACodecCtx, (int16_t*) ptr, &frame_size, packet.data, packet.size) >= 0)
+       if (avcodec_decode_audio2(m_ACodecCtx, (int16_t*) ptr, &frame_size, packet->data, packet->size) >= 0)
 #else
-                       if (avcodec_decode_audio(m_ACodecCtx, (int16_t*) ptr, &frame_size, packet.data, packet.size) >= 0)
+       if (avcodec_decode_audio(m_ACodecCtx, (int16_t*) ptr, &frame_size, packet->data, packet->size) >= 0)
 #endif
                        {
 
@@ -793,22 +760,47 @@
                                        ptr = reinterpret_cast<uint8_t*>(output);
                                }
                                
-                               raw_videodata_t* raw = new raw_videodata_t;
+               raw_mediadata_t* raw = new raw_mediadata_t();
                                
                                raw->m_data = ptr;
                                raw->m_ptr = raw->m_data;
                                raw->m_size = samples * 2 * 2; // 2 for stereo and 2 for samplesize = 2 bytes
                                raw->m_stream_index = m_audio_index;
 
-                               m_unqueued_data = m_qaudio.push(raw) ? NULL : raw;
+               // set presentation timestamp
+               if (packet->dts != static_cast<signed long>(AV_NOPTS_VALUE))
+               {
+                       if (!m_isFLV) raw->m_pts = as_double(m_audio_stream->time_base) * packet->dts;
+                       else raw->m_pts = as_double(m_ACodecCtx->time_base) * packet->dts;
                        }
+
+               if (raw->m_pts != 0)
+               {       
+                       // update audio clock with pts, if present
+                       m_last_audio_timestamp = raw->m_pts;
                }
                else
-               if (packet.stream_index == m_video_index)
                {
+                       raw->m_pts = m_last_audio_timestamp;
+               }
+
+               // update video clock for next frame
+               double frame_delay;
+               if (!m_isFLV) frame_delay = as_double(m_audio_stream->codec->time_base);
+               else frame_delay = static_cast<double>(m_parser->audioFrameDelay())/1000.0;
+
+               m_last_audio_timestamp += frame_delay;
 
+               if (m_isFLV) m_qaudio.push(raw);
+               else m_unqueued_data = m_qaudio.push(raw) ? NULL : raw;
+       }
+       return true;
+}
+
+bool NetStreamFfmpeg::decodeVideo(AVPacket* packet)
+{
                        int got = 0;
-                       avcodec_decode_video(m_VCodecCtx, m_Frame, &got, packet.data, packet.size);
+       avcodec_decode_video(m_VCodecCtx, m_Frame, &got, packet->data, packet->size);
                        if (got) {
                                boost::scoped_array<uint8_t> buffer;
 
@@ -821,7 +813,6 @@
                                }
 
                                if (m_videoFrameFormat == render::NONE) { // NullGui?
-                                       av_free_packet(&packet);
                                        return false;
 
                                } else if (m_videoFrameFormat == render::YUV && m_VCodecCtx->pix_fmt != PIX_FMT_YUV420P) {
@@ -838,7 +829,7 @@
                                        m_Frame = frameRGB;
                                }
 
-                               raw_videodata_t* video = new raw_videodata_t;
+               raw_mediadata_t* video = new raw_mediadata_t;
                                if (m_videoFrameFormat == render::YUV) {
                                        video->m_data = new uint8_t[static_cast<image::yuv*>(m_imageframe)->size()];
                                } else if (m_videoFrameFormat == render::RGB) {
@@ -851,20 +842,20 @@
                                video->m_pts = 0;
 
                                // set presentation timestamp
-                               if (packet.dts != static_cast<signed long>(AV_NOPTS_VALUE))
+               if (packet->dts != static_cast<signed long>(AV_NOPTS_VALUE))
                                {
-                                       if (!m_isFLV)   video->m_pts = as_double(m_video_stream->time_base) * packet.dts;
-                                       else video->m_pts = as_double(m_VCodecCtx->time_base) * packet.dts;
+                       if (!m_isFLV)   video->m_pts = as_double(m_video_stream->time_base) * packet->dts;
+                       else video->m_pts = as_double(m_VCodecCtx->time_base) * packet->dts;
                                }
 
                                if (video->m_pts != 0)
                                {       
-                                       // update video clock with pts, if present
-                                       m_video_clock = video->m_pts;
+                       m_last_video_timestamp = video->m_pts;
                                }
                                else
                                {
-                                       video->m_pts = m_video_clock;
+                       video->m_pts = m_last_video_timestamp;
                                }
 
                                // update video clock for next frame
@@ -875,7 +866,7 @@
                                // for MPEG2, the frame can be repeated, so we update the clock accordingly
                                frame_delay += m_Frame->repeat_pict * (frame_delay * 0.5);
 
-                               m_video_clock += frame_delay;
+               m_last_video_timestamp += frame_delay;
 
                                if (m_videoFrameFormat == render::YUV) {
                                        image::yuv* yuvframe = static_cast<image::yuv*>(m_imageframe);
@@ -915,7 +906,58 @@
 
                                }
 
-                               m_unqueued_data = m_qvideo.push(video) ? NULL : video;
+               if (m_isFLV) m_qvideo.push(video);
+               else m_unqueued_data = m_qvideo.push(video) ? NULL : video;
+
+               return true;
+       }
+       return false;
+}
+
+bool NetStreamFfmpeg::decodeMediaFrame()
+{
+       boost::mutex::scoped_lock  lock(decoding_mutex);
+
+       if (m_unqueued_data)
+       {
+               if (m_unqueued_data->m_stream_index == m_audio_index)
+               {
+                       sound_handler* s = get_sound_handler();
+                       if (s)
+                       {
+                               m_unqueued_data = m_qaudio.push(m_unqueued_data) ? NULL : m_unqueued_data;
+                       }
+               }
+               else
+               if (m_unqueued_data->m_stream_index == m_video_index)
+               {
+                       m_unqueued_data = m_qvideo.push(m_unqueued_data) ? NULL : m_unqueued_data;
+               }
+               else
+               {
+                       log_error(_("decodeMediaFrame: packet is neither an audio nor a video frame"));
+               }
+               return true;
+       }
+
+       AVPacket packet;
+       int rc = av_read_frame(m_FormatCtx, &packet);
+
+       if (rc >= 0)
+       {
+               if (packet.stream_index == m_audio_index && get_sound_handler())
+               {
+                       if (!decodeAudio(&packet)) {
+                               log_error(_("Problems decoding audio frame"));
+                               return false;
+                       }
+               }
+               else
+               if (packet.stream_index == m_video_index)
+               {
+                       if (!decodeVideo(&packet)) {
+                               log_error(_("Problems decoding video frame"));
+                               return false;
                        }
                }
                av_free_packet(&packet);
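The new `decodeMediaFrame()` first retries any parked frame, then pulls the next packet from the demuxer and routes it by stream index. That dispatch order can be sketched in a self-contained form (plain structs stand in for `AVPacket` and the FFmpeg calls; `Packet` and `Demux` are hypothetical names, not Gnash or FFmpeg API):

```cpp
#include <cassert>
#include <functional>

// Hypothetical stand-in for a demuxed packet (the real code uses AVPacket).
struct Packet { int stream_index; };

// Hypothetical dispatcher mirroring decodeMediaFrame()'s routing logic.
struct Demux {
    int audio_index;
    int video_index;
    std::function<bool(const Packet&)> decode_audio;
    std::function<bool(const Packet&)> decode_video;

    // Route one packet to the matching decoder; false signals a decode
    // problem or a packet from an unknown stream (logged upstream).
    bool dispatch(const Packet& p) {
        if (p.stream_index == audio_index) return decode_audio(p);
        if (p.stream_index == video_index) return decode_video(p);
        return false; // neither audio nor video
    }
};
```

Splitting decode from dispatch this way is what lets the commit share `decodeAudio()`/`decodeVideo()` between the FLV and non-FLV paths.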
@@ -955,15 +997,19 @@
 
        // This is kind of hackish and ugly :-(
        if (newpos == 0) {
-               m_video_clock = 0;
+               m_last_video_timestamp = 0;
+               m_last_audio_timestamp = 0;
+               m_current_timestamp = 0;
+
                m_start_clock = tu_timer::ticks_to_seconds(tu_timer::get_ticks());
 
        } else if (m_isFLV) {
                double newtime = static_cast<double>(newpos) / 1000.0;
-               m_start_clock += (m_video_clock - newtime) / 1000.0;
-
-               m_video_clock = newtime;
+               m_start_clock += (m_last_audio_timestamp - newtime) / 1000.0;
 
+               m_last_audio_timestamp = newtime;
+               m_last_video_timestamp = newtime;
+               m_current_timestamp = newtime;
        } else {
                AVPacket Packet;
                av_init_packet(&Packet);
@@ -981,9 +1027,11 @@
                av_free_packet(&Packet);
                av_seek_frame(m_FormatCtx, m_video_index, newpos, 0);
 
-               m_start_clock += (m_video_clock - newtime) / 1000.0;
+               m_start_clock += (m_last_audio_timestamp - newtime) / 1000.0;
 
-               m_video_clock = newtime;
+               m_last_audio_timestamp = newtime;
+               m_last_video_timestamp = newtime;
+               m_current_timestamp = newtime;
        }
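On seek, the commit now rebases all three timestamps and shifts `m_start_clock` so the wall-clock reference agrees with the new position. A simplified model of that rebasing (names are hypothetical; units are normalized to seconds throughout, whereas the diff itself carries an extra `/ 1000.0` factor):

```cpp
#include <cassert>

// Clock state kept by the player (all values in seconds in this model).
struct PlaybackClock {
    double start_clock;     // wall-clock time playback started
    double last_audio_ts;   // timestamp of last decoded audio frame
    double last_video_ts;   // timestamp of last decoded video frame
    double current_ts;      // timestamp of last played frame
};

// Rebase the clocks after seeking to new_pos_ms, mirroring the diff's
// reset-to-zero branch and its general seek branch.
inline void rebase_after_seek(PlaybackClock& c, long new_pos_ms, double now) {
    if (new_pos_ms == 0) {
        c.last_audio_ts = c.last_video_ts = c.current_ts = 0.0;
        c.start_clock = now;                         // restart the reference
    } else {
        double newtime = new_pos_ms / 1000.0;
        c.start_clock += c.last_audio_ts - newtime;  // shift the reference
        c.last_audio_ts = c.last_video_ts = c.current_ts = newtime;
    }
}
```

Note the shift is computed from the *audio* timestamp, matching the commit's move to an audio-driven master clock.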
        // Flush the queues
        while (m_qvideo.size() > 0)
@@ -1004,22 +1052,28 @@
 NetStreamFfmpeg::refreshVideoFrame()
 {
        // If we're paused or not running, there is no need to do this
-       if (!m_go && m_pause) return;
+       if (!m_go || m_pause) return;
 
        // Loop until a good frame is found
        while(1) {
                // Get video frame from queue, will have the lowest timestamp
-               raw_videodata_t* video = m_qvideo.front();
+               raw_mediadata_t* video = m_qvideo.front();
 
-               // If the queue is empty, we tell the decoding thread to wake up, 1179596,155087 1179596,169546
+               // If the queue is empty, we tell the decoding thread to wake up,
                // and decode some more.
                if (!video) {
                        decode_wait.notify_one();
-                       break;
+                       return;
                }
 
                // Calculate the current time
-               double current_clock = (tu_timer::ticks_to_seconds(tu_timer::get_ticks()) - m_start_clock)*1000;
+               double current_clock;
+               if (m_ACodecCtx && get_sound_handler()) {
+                       current_clock = m_current_timestamp;
+               } else {
+                       current_clock = (tu_timer::ticks_to_seconds(tu_timer::get_ticks()) - m_start_clock)*1000;
+                       m_current_timestamp = current_clock;
+               }
 
                double video_clock = video->m_pts;
 
@@ -1043,15 +1097,16 @@
 
                        // A frame is ready for pickup
                        m_newFrameReady = true;
+
                } else {
                        // The timestamp on the first frame in the queue is greater
                        // than the current time, so no need to do anything.
-                       break;
+                       return;
                }
 
-               // If less than 10 frames in the queue notify the decoding thread
+               // If fewer than 3 frames are in the queue, notify the decoding thread
                // so that we don't suddenly run out.
-               if (m_qvideo.size() < 10) {
+               if (m_qvideo.size() < 3) {
                        decode_wait.notify_one();
                }
        }
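The refresh loop above now slaves video to the audio clock whenever a sound handler is present, falling back to wall-clock time otherwise; a queued frame is shown once the master clock has caught up with its timestamp. A sketch of that decision (hypothetical helper names; timestamps in milliseconds as in the diff):

```cpp
#include <cassert>
#include <deque>

// Pick the master clock: the last audio timestamp if audio is playing,
// otherwise elapsed wall-clock time since playback started (both in ms).
inline double master_clock(bool has_audio, double audio_ts,
                           double now, double start_clock) {
    return has_audio ? audio_ts : (now - start_clock) * 1000.0;
}

// Return the pts of the frame to present and pop it, or -1 if the head
// of the queue is still in the future (the loop's "do nothing" case).
inline double next_frame_to_show(std::deque<double>& video_pts, double clock) {
    if (video_pts.empty() || video_pts.front() > clock) return -1.0;
    double pts = video_pts.front();
    video_pts.pop_front();
    return pts;
}
```

Driving video off the audio timestamp avoids drift between the two streams, since audio playback rate is fixed by the sound card rather than by a timer.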
@@ -1069,7 +1124,7 @@
        //    and we then wait until the buffer contains some data (1 sec) again.
        if (m_go && m_pause && m_start_onbuffer && m_parser && m_parser->isTimeLoaded(m_bufferTime)) {
                setStatus(bufferFull);
-               unpauseDecoding();      m_pause = false;
+               unpauseDecoding();
                m_start_onbuffer = false;
        }
 
@@ -1090,7 +1145,7 @@
                double time = (double)m_FormatCtx->streams[0]->time_base.num / (double)m_FormatCtx->streams[0]->time_base.den * (double)m_FormatCtx->streams[0]->cur_dts;
                return static_cast<int64_t>(time);
        } else if (m_isFLV) {
-               return static_cast<int64_t>(m_video_clock);
+               return static_cast<int64_t>(m_current_timestamp);
        } else {
                return 0;
        }
@@ -1112,9 +1167,13 @@
 
        m_pause = false;        
 
+       if (m_current_timestamp == 0) {
+               m_start_clock = tu_timer::ticks_to_seconds(tu_timer::get_ticks());
+       } else {
        // Add the paused time to the start time so that the playhead doesn't
        // notice that we have been paused
        m_start_clock += tu_timer::ticks_to_seconds(tu_timer::get_ticks()) - m_time_of_pause;
+       }
 
        // Notify the decode thread/loop that we are running again
        decode_wait.notify_one();
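The unpause path above adds the paused interval to `m_start_clock` so the playhead does not jump, except when nothing has been played yet (`m_current_timestamp == 0`), in which case the clock reference is simply restarted. A small model of that compensation (hypothetical function name; times in seconds):

```cpp
#include <cassert>

// Returns the new start_clock after unpausing at wall-clock time `now`,
// given when the pause began; if playback never advanced, the reference
// is restarted rather than shifted.
inline double unpause_start_clock(double start_clock, double current_ts,
                                  double time_of_pause, double now) {
    if (current_ts == 0.0) return now;            // nothing played yet: restart
    return start_clock + (now - time_of_pause);   // absorb the paused interval
}
```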

Index: server/asobj/NetStreamFfmpeg.h
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.h,v
retrieving revision 1.28
retrieving revision 1.29
diff -u -b -r1.28 -r1.29
--- server/asobj/NetStreamFfmpeg.h      19 May 2007 21:18:34 -0000      1.28
+++ server/asobj/NetStreamFfmpeg.h      23 May 2007 07:42:16 -0000      1.29
@@ -14,7 +14,7 @@
 // along with this program; if not, write to the Free Software
 // Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 
-/* $Id: NetStreamFfmpeg.h,v 1.28 2007/05/19 21:18:34 tgc Exp $ */
+/* $Id: NetStreamFfmpeg.h,v 1.29 2007/05/23 07:42:16 tgc Exp $ */
 
 #ifndef __NETSTREAMFFMPEG_H__
 #define __NETSTREAMFFMPEG_H__
@@ -48,21 +48,22 @@
 
 namespace gnash {
   
-struct raw_videodata_t
+class raw_mediadata_t
 {
-       raw_videodata_t():
+public:
+       raw_mediadata_t():
        m_stream_index(-1),
        m_size(0),
        m_data(NULL),
        m_ptr(NULL),
        m_pts(0)
        {
-       };
+       }
 
-       ~raw_videodata_t()
+       ~raw_mediadata_t()
        {
                if (m_data) delete [] m_data;
-       };
+       }
 
        int m_stream_index;
        uint32_t m_size;
@@ -169,6 +170,8 @@
                return audio_resample (_context, output, input, samples);
        }
        
+       // The timestamp of the last decoded video frame
+       volatile double m_last_video_timestamp;
 
 private:
        ReSampleContext* _context;
@@ -189,19 +192,14 @@
        static int readPacket(void* opaque, uint8_t* buf, int buf_size);
        static offset_t seekMedia(void *opaque, offset_t offset, int whence);
 
-       bool read_frame();
-
-       inline double as_double(AVRational time)
-       {
-               return time.num / (double) time.den;
-       }
-
-       static void startPlayback(NetStreamFfmpeg* ns);
        static void av_streamer(NetStreamFfmpeg* ns);
        static bool audio_streamer(void *udata, uint8_t *stream, int len);
 
 private:
 
+       // Sets up the playback
+       bool startPlayback();
+
        // Pauses the decoding - don't directly modify m_pause!!
        void pauseDecoding();
 
@@ -211,6 +209,24 @@
        // Check if we need to update the video frame
        void refreshVideoFrame();
 
+       // Used to decode and push the next available (non-FLV) frame to the audio or video queue
+       bool decodeMediaFrame();
+
+       // Used to decode and push the next available FLV frame to the audio or video queue
+       bool decodeFLVFrame();
+
+       // Used to decode a video frame and push it onto the video queue
+       bool decodeVideo(AVPacket* packet);
+
+       // Used to decode an audio frame and push it onto the audio queue
+       bool decodeAudio(AVPacket* packet);
+
+       // Used to calculate a decimal value from an ffmpeg fraction (AVRational)
+       inline double as_double(AVRational time)
+       {
+               return time.num / (double) time.den;
+       }
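`as_double()` turns an FFmpeg `AVRational` (a `num`/`den` pair) into a decimal, e.g. converting a stream's time base of 1/25 into 0.04 seconds per tick. Illustrated with a local stand-in for `AVRational` so the example compiles without the FFmpeg headers:

```cpp
#include <cassert>

// Local stand-in for FFmpeg's AVRational {num, den} pair.
struct Rational { int num; int den; };

// Same computation as the member as_double(): the rational as a double.
inline double as_double(Rational time) {
    return time.num / (double) time.den;
}
```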
+
        int m_video_index;
        int m_audio_index;
        
@@ -241,25 +257,30 @@
        boost::mutex decode_wait_mutex;
        boost::condition decode_wait;
 
-       // The current time-position of the video in seconds
-       volatile double m_video_clock;
+       // The timestamp of the last decoded video frame
+       volatile double m_last_video_timestamp;
+
+       // The timestamp of the last decoded audio frame
+       volatile double m_last_audio_timestamp;
+
+       // The timestamp of the last played audio (default) or video (if no audio) frame
+       double m_current_timestamp;
 
        // The queues of audio and video data.
-       multithread_queue <raw_videodata_t*> m_qaudio;
-       multithread_queue <raw_videodata_t*> m_qvideo;
+       multithread_queue <raw_mediadata_t*> m_qaudio;
+       multithread_queue <raw_mediadata_t*> m_qvideo;
 
        // The time we started playing
        volatile double m_start_clock;
 
        // When the queues are full, this is where we keep the audio/video frame
        // there wasn't room for on its queue
-       raw_videodata_t* m_unqueued_data;
+       raw_mediadata_t* m_unqueued_data;
 
        ByteIOContext ByteIOCxt;
 
        // Time of when pause started
        double m_time_of_pause;
-
 };
 
 } // gnash namespace



