This problem is a little special. Let me describe it in detail.
I have a producer part and a consumer part. The producer keeps generating video and audio samples from an MP4 file; the consumer keeps playing the samples back as it receives them.

The producer part looks like this:

(1) Pipeline:

    filesrc -> qtdemux -> queue -> h264parse -> appsink (video) -> sent to the consumer sample by sample
                       -> queue -> aacparse  -> appsink (audio) -> sent to the consumer sample by sample

(2) I use g_signal_emit_by_name (appsink, "pull-sample", &sample) to get each video or audio sample.

(3) I also record the sample caps with caps = gst_sample_get_caps (sample); I need this information to set up the consumer.

The consumer part looks like this:

(1) Pipeline:

    received sample by sample -> appsrc (video) -> queue -> avdec_h264 -> autovideosink
    received sample by sample -> appsrc (audio) -> queue -> faad -> autoaudiosink

(2) I use the push mode of appsrc:

    g_signal_connect (appsrc, "need-data",   G_CALLBACK (start_feed), audio);
    g_signal_connect (appsrc, "enough-data", G_CALLBACK (stop_feed),  audio);

(3) Before consuming the video and audio samples, the consumer first sets up its pipeline using the recorded video and audio caps.

==================================================================

Here is my problem. For one MP4 file, this procedure works perfectly. But for another MP4 file, the audio and video are simply out of sync. Actually, I don't quite understand how the consumer part synchronizes the video and audio: although I add all the plugins to one pipeline, I link the video plugins and the audio plugins separately.

(1) Do the samples that appsink produces contain the synchronization information?
(2) What else should I do in the consumer part if I want to synchronize the video samples and audio samples?
(3) What is the relationship between the number of GstSamples and the video framerate / audio samplerate?

==========================================================

Here is more information about the example MP4 files.

(1) The caps of the file that works well (length about 10 min):

    Video streaminfo ----- video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)3, profile=(string)high, codec_data=(buffer)0164001effe1001a6764001eacc8602a0c7e4c0440000003004000000ca3c58b678001000568e9bb2c8b, width=(int)672, height=(int)378, framerate=(fraction)25/1, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
    Audio streaminfo ----- audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)1, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)139056e5a54800, rate=(int)22050, channels=(int)2

(2) The caps of the file that goes out of sync (length about 15 sec):

    Video streaminfo ----- video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)2.1, profile=(string)high, codec_data=(buffer)01640015ffe1001a67640015acd941410fcb8088000003000800000301e4f8a14cb001000468febcb0, width=(int)320, height=(int)240, framerate=(fraction)2000/141, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true
    Audio streaminfo ----- audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)1208, rate=(int)44100, channels=(int)1

My platform is Mac OS, and GStreamer is 1.4.3. I really don't know how to handle this synchronization issue; I'm a GStreamer newbie. I hope I have described my situation clearly. If you have any idea, please tell me. Thank you very much!
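For reference, here is a minimal sketch of the producer side described above: pulling one sample from appsink, recording its caps, and reading the timing metadata the buffer already carries. send_to_consumer() is a hypothetical stand-in for whatever transport connects the two parts; error handling is omitted.

    /* Producer: pull one sample from appsink (a sketch) */
    GstSample *sample = NULL;
    g_signal_emit_by_name (appsink, "pull-sample", &sample);
    if (sample != NULL) {
      GstCaps   *caps = gst_sample_get_caps (sample);   /* recorded to set up the consumer */
      GstBuffer *buf  = gst_sample_get_buffer (sample);

      /* The buffer carries the timing metadata set by the demuxer */
      GstClockTime pts = GST_BUFFER_PTS (buf);
      GstClockTime dur = GST_BUFFER_DURATION (buf);

      send_to_consumer (caps, buf, pts, dur);           /* hypothetical transport */
      gst_sample_unref (sample);
    }

And a corresponding sketch of the consumer-side appsrc setup. The caps come from the producer; setting "format" to GST_FORMAT_TIME is my own assumption to verify (it is not in the post above) and tells appsrc that the pushed buffers are timestamped in time units.

    /* Consumer: configure appsrc before pushing buffers (a sketch) */
    g_object_set (G_OBJECT (appsrc),
        "caps",   video_caps,        /* caps recorded by the producer       */
        "format", GST_FORMAT_TIME,   /* assumption: timestamps are in time  */
        NULL);
    g_signal_connect (appsrc, "need-data",   G_CALLBACK (start_feed), video);
    g_signal_connect (appsrc, "enough-data", G_CALLBACK (stop_feed),  video);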
I am also facing a similar issue. One hint may be to use the clock time on the samples pulled from the sink pads and push the buffers to appsrc with those timestamps. I am using two different methods to timestamp the buffers for audio and video in the "need-data" callback of appsrc.

*For video* (I use the framerate from the video caps to compute each buffer's duration):

    buffer = gst_buffer_copy (gst_sample_get_buffer (sample)); /* copy so the timestamps are writable */
    GST_BUFFER_PTS (buffer) = ctx->timestamp;
    /* one frame lasts GST_SECOND * fps_d / fps_n (framerate taken from the video caps) */
    GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (GST_SECOND, fps_d, fps_n);
    ctx->timestamp += GST_BUFFER_DURATION (buffer);
    g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);

*For audio*:

    buffer = gst_sample_get_buffer (sample);
    GstSegment *seg = gst_sample_get_segment (sample);
    GstClockTime pts, dts;

    /* Convert the PTS/DTS to running time so they start from 0 */
    pts = GST_BUFFER_PTS (buffer);
    if (GST_CLOCK_TIME_IS_VALID (pts))
      pts = gst_segment_to_running_time (seg, GST_FORMAT_TIME, pts);
    dts = GST_BUFFER_DTS (buffer);
    if (GST_CLOCK_TIME_IS_VALID (dts))
      dts = gst_segment_to_running_time (seg, GST_FORMAT_TIME, dts);

    if (buffer) {
      /* Make the buffer writable so we can adjust the timestamps */
      buffer = gst_buffer_copy (buffer);
      GST_BUFFER_PTS (buffer) = pts;
      GST_BUFFER_DTS (buffer) = dts;
      g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
    }

I am not sure how to make this the same for audio and video. My guess is that stamping the buffers of both streams onto the same clock time line can bring the audio and video in sync, but it all depends on the appsink data. If you have succeeded in syncing audio and video, please let me know.
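One possible way to treat both streams uniformly (a sketch under my guess above, not something tested; the push_sample() helper name is mine) would be to apply the same running-time conversion to video as to audio, so both branches share one time line instead of a hand-maintained counter:

    #include <gst/gst.h>

    /* Retimestamp one sample to running time and push it; the same helper
     * can feed both the audio and the video appsrc (hypothetical). */
    static void
    push_sample (GstElement *appsrc, GstSample *sample)
    {
      GstBuffer  *buffer = gst_sample_get_buffer (sample);
      GstSegment *seg    = gst_sample_get_segment (sample);
      GstClockTime pts   = GST_BUFFER_PTS (buffer);
      GstClockTime dts   = GST_BUFFER_DTS (buffer);
      GstFlowReturn ret;

      if (GST_CLOCK_TIME_IS_VALID (pts))
        pts = gst_segment_to_running_time (seg, GST_FORMAT_TIME, pts);
      if (GST_CLOCK_TIME_IS_VALID (dts))
        dts = gst_segment_to_running_time (seg, GST_FORMAT_TIME, dts);

      buffer = gst_buffer_copy (buffer);   /* make the timestamps writable */
      GST_BUFFER_PTS (buffer) = pts;
      GST_BUFFER_DTS (buffer) = dts;

      g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
      gst_buffer_unref (buffer);           /* the signal takes its own ref */
    }

The idea is that once both streams carry PTS values from the same running-time line, the two sinks in the consumer pipeline should be able to render them against the shared pipeline clock.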