Audio video sync problem in the multicast mode


Nathan
I have run into an interesting A/V sync (lip sync) problem with GStreamer.
We are building an HDTV-over-IP application in which GStreamer sends audio and video together from the transmitter side to multiple receivers.

With a single receiver, we set udpsink/udpsrc to unicast mode (see below) and lip sync is perfect.
With multiple receivers, we set udpsink/udpsrc to multicast mode, and the delay between audio and video becomes unpredictable: at the beginning there is no delay, but after about half an hour we lose lip sync.

Why does lip sync behave like this?
My understanding is that the RTCP protocol is what GStreamer uses to keep audio and video in sync: when there is only one receiver, that receiver (udpsrc) sends RTCP packets containing timing information back to the transmitter (udpsink), and the transmitter can then adjust the delay between the audio and video pipelines.
With multiple receivers, however (I might be wrong here), there are multiple receivers reporting RTCP packets to the transmitter, and I suspect this is why lip sync is lost in that case.

My question is: should the RTCP-related parts of the pipeline also run in multicast mode? And how does the transmitter pipeline handle RTCP packets coming from different receivers?
Please help me out here; I have been stuck on this problem for a while.


Transmitter side:
The rtcpsrc and rtcpsink elements here are in fact udpsrc and udpsink.
/****************************************************************************************************/
        g_object_set(G_OBJECT(app->video_udpsink), "host", mulcast_address, "port", VIDEO_udpsink_PORT, "sync", FALSE, "async", FALSE, "auto-multicast", FALSE, NULL); // video RTP sink
        g_object_set(G_OBJECT(app->audio_udpsink), "host", mulcast_address, "port", AUDIO_udpsink_PORT, "sync", FALSE, "async", FALSE, "auto-multicast", FALSE, NULL); // audio RTP sink

        g_object_set(G_OBJECT(app->video_rtcpsink), "host", mulcast_address, "port", VIDEO_RTCPSINK_PORT, "sync", FALSE, "async", FALSE, "auto-multicast", FALSE, NULL); // video RTCP sink
        g_object_set(G_OBJECT(app->audio_rtcpsink), "host", mulcast_address, "port", AUDIO_RTCPSINK_PORT, "sync", FALSE, "async", FALSE, "auto-multicast", FALSE, NULL); // audio RTCP sink

        g_object_set(G_OBJECT(app->video_rtcpsrc), "multicast-group", mulcast_address, "port", VIDEO_RTCPSRC_PORT, NULL); // video RTCP src
        g_object_set(G_OBJECT(app->audio_rtcpsrc), "multicast-group", mulcast_address, "port", AUDIO_RTCPSRC_PORT, NULL); // audio RTCP src
/****************************************************************************************************/
        app->loop = g_main_loop_new(NULL, FALSE);
        bus = gst_pipeline_get_bus(GST_PIPELINE(app->pipeline));
        gst_bus_add_watch(bus, bus_call, app->loop);
        gst_object_unref(bus);

        gst_bin_add_many(GST_BIN(app->pipeline), app->v4l2src, app->omxbufferalloc, app->omx_h264enc, app->gstperf, app->queue1, /*app->h264parse,*/ app->rtph264pay, app->video_udpsink, NULL);
        gst_bin_add_many(GST_BIN(app->pipeline), app->alsasrc, app->audioconvert, app->ffenc_ac3, app->queue2, app->rtpac3pay, app->audio_udpsink, NULL);

        gst_bin_add_many(GST_BIN(app->pipeline), app->video_rtcpsrc, app->video_rtcpsink, NULL);
        gst_bin_add_many(GST_BIN(app->pipeline), app->audio_rtcpsrc, app->audio_rtcpsink, NULL);
//=====================================================================================================


Receiver side:
The rtcpsrc and rtcpsink elements here are in fact udpsrc and udpsink.
        g_object_set(G_OBJECT(app->video_udpsrc), "port", VIDEO_RTP_SRC_PORT, "caps", video_caps, "multicast-group", mulcast_address, "auto-multicast", TRUE, NULL); // video RTP src
        g_object_set(G_OBJECT(app->audio_udpsrc), "port", AUDIO_RTP_SRC_PORT, "caps", audio_caps, "multicast-group", mulcast_address, "auto-multicast", TRUE, NULL); // audio RTP src
        g_object_set(G_OBJECT(app->video_rtcpsrc), "port", VIDEO_RTCP_SRC_PORT, "multicast-group", mulcast_address, "auto-multicast", TRUE, NULL); // video RTCP src
        g_object_set(G_OBJECT(app->audio_rtcpsrc), "port", AUDIO_RTCP_SRC_PORT, "multicast-group", mulcast_address, "auto-multicast", TRUE, NULL); // audio RTCP src

        g_object_set(G_OBJECT(app->video_rtcpsink), "host", mulcast_address, "port", VIDEO_RTCP_SINK_PORT, "auto-multicast", TRUE, "sync", FALSE, "async", FALSE, NULL);
        g_object_set(G_OBJECT(app->audio_rtcpsink), "host", mulcast_address, "port", AUDIO_RTCP_SINK_PORT, "auto-multicast", TRUE, "sync", FALSE, "async", FALSE, NULL);




Re: Audio video sync problem in the multicast mode

Nicolas Dufresne
On Tuesday, 20 November 2012 at 13:14 -0800, Nathan wrote:
> Receiver side:

Synchronisation is done on the receiver side. To work properly, you need
your media sinks (e.g. xvimagesink and pulsesink) to be configured with
sync=TRUE and async=TRUE, and a jitterbuffer with an appropriate latency
configured. You also need the latency to be propagated correctly, which
means you need to handle the latency message by calling
gst_bin_recalculate_latency () on your pipeline when this message is
received.
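
To make that concrete, here is a minimal sketch of what this could look
like on the receiver side. The App struct name, the sink and jitterbuffer
field names (video_sink, audio_sink, video_jitterbuffer,
audio_jitterbuffer) and the 200 ms latency value are illustrative
assumptions, not taken from Nathan's code, and the pipeline (rather than
the GMainLoop, as in the posted transmitter snippet) is assumed to be the
user data passed to the bus watch:

#include <gst/gst.h>

/* Bus handler: redistribute latency when an element reports a change.
   Assumes the receiver pipeline is passed as user data. */
static gboolean
bus_call (GstBus *bus, GstMessage *msg, gpointer data)
{
        GstBin *pipeline = GST_BIN (data);

        if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY)
                gst_bin_recalculate_latency (pipeline);

        return TRUE; /* keep the watch installed */
}

/* 'App' stands in for the poster's application struct; the members
   used below are hypothetical. */
static void
configure_receiver_sync (App *app)
{
        /* Let the media sinks render against the pipeline clock. */
        g_object_set(G_OBJECT(app->video_sink), "sync", TRUE, "async", TRUE, NULL);
        g_object_set(G_OBJECT(app->audio_sink), "sync", TRUE, "async", TRUE, NULL);

        /* Same latency budget (in ms) on both rtpjitterbuffer instances;
           200 is only a placeholder, pick a value that covers your
           network jitter. */
        g_object_set(G_OBJECT(app->video_jitterbuffer), "latency", 200, NULL);
        g_object_set(G_OBJECT(app->audio_jitterbuffer), "latency", 200, NULL);
}

The handler would be installed with gst_bus_add_watch(bus, bus_call,
app->pipeline), mirroring the existing bus_call setup in the transmitter
snippet.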

Other than that, I don't see any valid reason for using queues on either
the receiver or the sending side; I would suggest removing them in order
to get rid of the extra latency they introduce.

Nicolas
