Hi all,

During testing of our RTSP implementation, which uses gst-rtsp-server, we noticed that not all RTSP clients have audio/video synchronized, and that making some changes fixes the sync in one client while breaking it in another.

The pipeline used in gst-rtsp-server looks like this (it can be modified based on the requested configuration):

  appsrc is-live=true do-timestamp=true min-latency=0 ! h264parse ! rtph264pay name=pay0 pt=96
  appsrc is-live=true do-timestamp=true min-latency=0 ! audioconvert ! audioresample ! mulawenc ! rtppcmupay name=pay1 pt=0

The video appsrc gets its buffers from another pipeline, to which we dynamically add/remove an RTSP branch as needed. Simplified, the pipeline providing video looks like this:

  v4l2src do-timestamp=true ! tee ! queue ! videorate drop-only=true ! imxvideoconvert_g2d ! vpuenc_h264 ! appsink

The audio appsrc also gets its buffers from another pipeline, to which we dynamically add/remove an RTSP branch as needed:

  appsrc is-live=true min-latency=0 ! tee ! queue ! appsink

The appsrc in this pipeline gets buffers directly from our application. The reason for this audio-only pipeline is that we want to avoid accessing audio from RTSP multiple times; its only purpose is to duplicate buffers via the tee element as many times as RTSP needs them (we have more than one media factory/mount point).
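For context, a minimal sketch of how such a shared factory could be set up. The appsrc names "videosrc"/"audiosrc" and the "/stream" mount point are made up for illustration, not taken from our actual code:

  #include <gst/gst.h>
  #include <gst/rtsp-server/rtsp-server.h>

  static void
  setup_factory (GstRTSPServer *server)
  {
    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

    /* same launch line as above, with names added so the appsrcs can be
     * looked up later in the "media-configure" callback */
    gst_rtsp_media_factory_set_launch (factory,
        "( appsrc name=videosrc is-live=true do-timestamp=true min-latency=0"
        " ! h264parse ! rtph264pay name=pay0 pt=96"
        " appsrc name=audiosrc is-live=true do-timestamp=true min-latency=0"
        " ! audioconvert ! audioresample ! mulawenc ! rtppcmupay name=pay1 pt=0 )");
    gst_rtsp_media_factory_set_shared (factory, TRUE);

    gst_rtsp_mount_points_add_factory (mounts, "/stream", factory);
    g_object_unref (mounts);
  }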
For this audio-only pipeline we manually timestamp the audio buffers as in the example from https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c , i.e.:

  GST_BUFFER_DTS (buffer) = GST_BUFFER_PTS (buffer) =
      gst_util_uint64_scale (audioGenerator->num_samples, GST_SECOND, audioGenerator->samplerate);
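Expanded into a self-contained sketch (the AudioGenerator struct and the push_audio() helper are hypothetical, modeled on the num_samples bookkeeping in that tutorial):

  #include <gst/gst.h>
  #include <gst/app/gstappsrc.h>

  typedef struct {
    guint64 num_samples;  /* samples produced so far */
    gint samplerate;      /* e.g. 8000 for PCMU */
  } AudioGenerator;

  /* Push one block of raw audio with sample-count based timestamps. */
  static GstFlowReturn
  push_audio (GstAppSrc *appsrc, AudioGenerator *gen,
      gconstpointer data, gsize size, guint samples_in_block)
  {
    GstBuffer *buffer = gst_buffer_new_allocate (NULL, size, NULL);

    gst_buffer_fill (buffer, 0, data, size);

    /* PTS in nanoseconds = samples so far * GST_SECOND / samplerate */
    GST_BUFFER_PTS (buffer) =
        gst_util_uint64_scale (gen->num_samples, GST_SECOND, gen->samplerate);
    GST_BUFFER_DTS (buffer) = GST_BUFFER_PTS (buffer);
    GST_BUFFER_DURATION (buffer) =
        gst_util_uint64_scale (samples_in_block, GST_SECOND, gen->samplerate);

    gen->num_samples += samples_in_block;

    /* push_buffer takes ownership of the buffer */
    return gst_app_src_push_buffer (appsrc, buffer);
  }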
The gst-rtsp-server pipeline grabs buffers from the two pipelines more or less the same way as the example https://github.com/GStreamer/gst-rtsp-server/blob/master/examples/test-appsrc2.c , but without manually setting the timestamps, since we let the appsrc do that by setting do-timestamp=true as described in https://gstreamer.freedesktop.org/documentation/application-development/advanced/pipeline-manipulation.html?gi-language=c#inserting-data-with-appsrc . The audio and video branches are added dynamically in the "media-configure" callback according to the configuration linked to the mount point. All the media factories we use have the "shared" property set to true.
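A condensed sketch of that wiring, assuming the appsrc names from the factory sketch above; BranchCtx and the single video path are illustrative (our real code handles both streams and the dynamic branch add/remove):

  #include <gst/gst.h>
  #include <gst/app/gstappsrc.h>
  #include <gst/app/gstappsink.h>
  #include <gst/rtsp-server/rtsp-server.h>

  typedef struct {
    GstElement *video_appsink;  /* appsink in the producer pipeline */
    GstElement *video_appsrc;   /* appsrc inside the RTSP media */
  } BranchCtx;

  /* "new-sample" callback on the producer appsink: forward the buffer into
   * the RTSP appsrc. With do-timestamp=true the appsrc restamps it with the
   * RTSP pipeline's running time, so the producer timestamps are ignored. */
  static GstFlowReturn
  on_new_sample (GstAppSink *appsink, gpointer user_data)
  {
    BranchCtx *ctx = user_data;
    GstSample *sample = gst_app_sink_pull_sample (appsink);
    GstBuffer *buffer = gst_sample_get_buffer (sample);
    GstFlowReturn ret;

    ret = gst_app_src_push_buffer (GST_APP_SRC (ctx->video_appsrc),
        gst_buffer_ref (buffer));
    gst_sample_unref (sample);
    return ret;
  }

  /* "media-configure": look up the appsrc inside the media bin and start
   * forwarding buffers to it from the tee ! queue ! appsink branch. */
  static void
  media_configure_cb (GstRTSPMediaFactory *factory, GstRTSPMedia *media,
      gpointer user_data)
  {
    BranchCtx *ctx = user_data;
    GstElement *bin = gst_rtsp_media_get_element (media);

    ctx->video_appsrc = gst_bin_get_by_name (GST_BIN (bin), "videosrc");

    g_object_set (ctx->video_appsink, "emit-signals", TRUE, NULL);
    g_signal_connect (ctx->video_appsink, "new-sample",
        G_CALLBACK (on_new_sample), ctx);

    gst_object_unref (bin);  /* ctx->video_appsrc is unreffed at teardown */
  }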
One thing worth mentioning is that this specific VMS pushes its own configuration via ONVIF, which results in two different ONVIF media profiles (see https://www.onvif.org/specs/srv/media/ONVIF-Media-Service-Spec.pdf?26d877&26d877 for details if you are interested), where one of them is used for video and the second one for audio.
For the second (audio) one, the timestamping in the pipeline will be:
  // PTS/DTS = absolute (current) time - base (start) time
  GstClockTime pts, dts;
  GstClock *appsrcClock = gst_element_get_clock (appsrc);

  pts = dts = gst_clock_get_time (appsrcClock) - gst_element_get_base_time (appsrc);
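For completeness, the same computation as a self-contained helper; the NULL check and the clock unref are additions not present in the snippet above:

  #include <gst/gst.h>

  /* Stamp a buffer with the pipeline running time by hand (this mirrors
   * what do-timestamp=true does at the moment of capture). */
  static void
  stamp_with_running_time (GstElement *appsrc, GstBuffer *buffer)
  {
    GstClock *clock = gst_element_get_clock (appsrc);

    if (clock == NULL)  /* no clock selected yet (element not PLAYING) */
      return;

    /* PTS/DTS = absolute (current) time - base (start) time */
    GST_BUFFER_PTS (buffer) =
        gst_clock_get_time (clock) - gst_element_get_base_time (appsrc);
    GST_BUFFER_DTS (buffer) = GST_BUFFER_PTS (buffer);

    gst_object_unref (clock);
  }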
Regards,