Hi everyone,
I am writing a player using GStreamer for a video format that contains both
video and audio. Currently the audio and the video can each be rendered
separately, but when they are combined, the audio track does not play
(although no error is generated during playback). The structure of the
pipeline should be typical:
          |- queue -> my own video decoder plugin -> my own video renderer plugin
demuxer --+
          |- queue -> mad -> audioconvert -> alsasink
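
For reference, this is roughly how I assemble the pipeline in code. It is a
minimal sketch: "mydemux", "mydec" and "myrenderer" are placeholder names for
my own demuxer, decoder and renderer plugins, and the source file name is
made up.

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* "mydemux", "mydec" and "myrenderer" stand in for my own plugins */
  pipeline = gst_parse_launch (
      "filesrc location=sample.vid ! mydemux name=d "
      "d. ! queue ! mydec ! myrenderer "
      "d. ! queue ! mad ! audioconvert ! alsasink", &error);
  if (pipeline == NULL) {
    g_printerr ("Could not build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait until EOS or an error is posted on the bus */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}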
I tried to track down the problem and found that the timestamps of the audio
samples delivered to the audio decoder and the timestamps of the video
samples delivered to the video decoder are nowhere close to matching. For
instance, the video is playing at a timestamp of around 10000 ms with a
buffer duration of 40 ms, while the audio is at a timestamp of about 1000 ms
with a duration of 20 ms. Subjectively, the video is playing at the correct
speed (a steady 25 fps). So if I assume the audio track should be playing at
a similar timestamp, then my player is not generating as many samples as it
needs. I wonder whether that assumption is correct.
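
In case it helps, this is more or less how I measured those timestamps: a
buffer probe on each of the demuxer's source pads that prints the timestamp
and duration of every buffer passing through. A sketch against the 0.10 API
(matching mad above); the pad variables are placeholders for my demuxer's
actual pads.

#include <gst/gst.h>

/* Log the timestamp and duration of each buffer leaving a pad;
 * returning TRUE lets the buffer continue downstream. */
static gboolean
log_buffer_probe (GstPad *pad, GstBuffer *buf, gpointer user_data)
{
  g_print ("%s: ts %" GST_TIME_FORMAT " dur %" GST_TIME_FORMAT "\n",
      (const gchar *) user_data,
      GST_TIME_ARGS (GST_BUFFER_TIMESTAMP (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
  return TRUE;
}

/* Attached to both branches, e.g.:
 *   gst_pad_add_buffer_probe (video_srcpad,
 *       G_CALLBACK (log_buffer_probe), "video");
 *   gst_pad_add_buffer_probe (audio_srcpad,
 *       G_CALLBACK (log_buffer_probe), "audio");
 */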
I wonder whether this is the reason the audio track has been silent during
playback. If so, what is the strategy for syncing the two branches up
correctly, or is there some material I should read to make this work? I
sketch below one guess at what the demuxer might need to do.
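
Concretely, is the demuxer supposed to stamp both branches from the same
container timeline, roughly like this? "MyPacket", "pts_ms" and "duration_ms"
are made-up names for whatever per-packet presentation time and duration my
container format carries.

#include <gst/gst.h>

/* Hypothetical container packet; the field names are placeholders for
 * whatever timing information the container actually provides. */
typedef struct {
  guint64 pts_ms;       /* presentation time in milliseconds */
  guint64 duration_ms;  /* packet duration in milliseconds */
} MyPacket;

/* Stamp audio and video buffers from the same container timeline so the
 * sinks can synchronise both branches against the pipeline clock. */
static GstFlowReturn
push_packet (GstPad *srcpad, const MyPacket *pkt, GstBuffer *buf)
{
  GST_BUFFER_TIMESTAMP (buf) = pkt->pts_ms * GST_MSECOND;
  GST_BUFFER_DURATION (buf) = pkt->duration_ms * GST_MSECOND;
  return gst_pad_push (srcpad, buf);
}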
Thank you guys very much for the help!