Greetings,
I’ve run into an issue I’m hoping someone can shed light on. I’m using two pipelines: one decodes H.264 video with imxvpudec, and the other renders the decoded video with imxeglvivsink. This could be something specific to the i.MX6 GStreamer elements, but I suspect it is not particular to them.
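For context, the two pipelines are roughly equivalent to the following (a simplified sketch; the udpsrc port and caps values here are placeholders for what my application actually sets):

GstElement *decode_pipeline = gst_parse_launch (
    "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
    "clock-rate=90000,encoding-name=H264\" ! rtph264depay ! "
    "h264parse ! imxvpudec ! appsink name=decode_sink", NULL);

GstElement *render_pipeline = gst_parse_launch (
    "appsrc name=render_src ! imxeglvivsink", NULL);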
The decoder pipeline terminates in an appsink and the rendering pipeline starts with an appsrc. I’m using gst_app_sink_pull_sample() to pull samples from the appsink and gst_app_src_push_sample() to push them to the appsrc.
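The hand-off between the two pipelines boils down to this (a minimal sketch; decode_sink, render_src, and bridge_thread are my names, and error handling is trimmed):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

static GstAppSink *decode_sink;  /* appsink at the end of the decoder pipeline */
static GstAppSrc  *render_src;   /* appsrc at the head of the render pipeline */

static gpointer
bridge_thread (gpointer data)
{
  for (;;) {
    /* Blocks until the decoder delivers a frame; NULL means EOS or flush. */
    GstSample *sample = gst_app_sink_pull_sample (decode_sink);
    if (sample == NULL)
      break;

    /* Queues the sample's buffer (and applies its caps) on the appsrc.
       push_sample does not take ownership of the sample, so unref it. */
    gst_app_src_push_sample (render_src, sample);
    gst_sample_unref (sample);
  }
  return NULL;
}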
All works fine if the video being received (via RTP) and decoded is 30 fps. But if the video is 15 fps then it is rendered in slow motion.
If I use a single GStreamer pipeline with the same stream of H.264 video then all is well at 30 and 15 fps.
I have debugged extensively (and learned a lot more about GStreamer in the process) and have confirmed that when the video is 15 fps, GST_BUFFER_DURATION() and GST_BUFFER_PTS() show the correct values for the buffers contained in the samples being pushed to the appsrc.
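Concretely, the check I run on each sample just before pushing it is along these lines; at 15 fps the PTS advances by ~66.7 ms per frame and the duration reads 1/15 s, as expected:

GstBuffer *buf = gst_sample_get_buffer (sample);  /* transfer none, no unref */
g_print ("pts=%" GST_TIME_FORMAT "  duration=%" GST_TIME_FORMAT "\n",
    GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
    GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));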
I have also retrieved the caps from the samples and from the imxeglvivsink sink pad, and both show the correct frame rate.
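That check looks roughly like this (videosink is a placeholder for my imxeglvivsink element, and it assumes negotiation has completed so the pad caps are non-NULL):

GstCaps *sample_caps = gst_sample_get_caps (sample);  /* transfer none */
GstPad  *sink_pad    = gst_element_get_static_pad (videosink, "sink");
GstCaps *pad_caps    = gst_pad_get_current_caps (sink_pad);

gchar *a = gst_caps_to_string (sample_caps);
gchar *b = gst_caps_to_string (pad_caps);
g_print ("sample caps:   %s\nsink pad caps: %s\n", a, b);
/* Both report framerate=(fraction)15/1 for the 15 fps stream. */

g_free (a);
g_free (b);
gst_caps_unref (pad_caps);
gst_object_unref (sink_pad);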
I have also verified that the decoded frames are delivered to the render pipeline at 15 fps (via timestamped log messages from my application).
So clearly something happens under the hood in a single GStreamer pipeline that is missing in this split-pipeline scenario, despite the caps being relayed correctly between the two pipelines via the samples.
Help would be greatly appreciated. I may have to fall back to the single pipeline if I can’t solve this, but it doesn’t fit as nicely into our existing architecture.
Regards,
Mike