Hi,
I am using the NetClock mechanism to synchronize several receivers that play streaming audio (over RTP) from one sender. The sender also acts as a time provider. I am now facing an issue where the sender reboots and subsequently creates a new time provider.

The receivers notice that the sender has rebooted and create a new netclientclock and a new pipeline slaved to this clock. Here, however, a problem arises: I cannot get the new netclientclock to understand that it must resynchronize against the new time provider on the sender. This causes the pipeline to never enter the PLAYING state, all the "received timestamps" in the jitter buffer report 0, and I get differing base times on my receivers. The new time provider starts on the same IP address and port as before.

How should I correctly handle the scenario where the sender reboots and creates a new time provider, so that the receivers get synchronized again?

Best Regards,
Danny
What I do is check the SSRC values in the RTP packets. I do that in a pad probe. If I notice a change in the SSRC, I set the pipeline back to READY, unref the existing netclientclock, create a new netclientclock, and switch back to PLAYING. This has worked well for me.

It seems you have most of this in place already. I suppose you also switch back to READY/NULL before creating the new netclientclock? And what do you do with the old one?

On 2017-03-30 11:56, danny.smith wrote:
> [...]
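For reference, the SSRC is the 32-bit big-endian field at byte offset 8 of the fixed RTP header (RFC 3550, section 5.1), so the check inside the pad probe boils down to something like the following. This is a standalone sketch of the detection logic only; the names `rtp_ssrc` and `SsrcWatcher` are mine, and in an actual GStreamer pad probe you would map the `GstBuffer` (or use the GstRtp buffer API) to read those bytes rather than receiving a raw packet:

```python
import struct

def rtp_ssrc(packet: bytes) -> int:
    """Extract the SSRC: a 32-bit big-endian field at byte offset 8
    of the fixed RTP header (RFC 3550, section 5.1)."""
    return struct.unpack_from(">I", packet, 8)[0]

class SsrcWatcher:
    """Remembers the last SSRC seen and reports when it changes,
    i.e. when the sender has restarted its RTP session."""

    def __init__(self):
        self.last_ssrc = None

    def ssrc_changed(self, packet: bytes) -> bool:
        ssrc = rtp_ssrc(packet)
        changed = self.last_ssrc is not None and ssrc != self.last_ssrc
        self.last_ssrc = ssrc
        return changed
```

When `ssrc_changed` returns true, that is the point at which the pipeline would be taken back to READY and the netclientclock replaced.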
Thanks for replying!
I put the pipeline into the NULL state and then unref the pipeline and the netclientclock. After that I create a new pipeline and a new netclientclock. The new pipeline does not start if the sender providing the time provider has rebooted and started a new time provider (on the same address and port). I only get the 'synced' signal on the first netclientclock I created. All netclientclocks created after that are already in the "synced" state, and I cannot get them to resync to the new time provider.

Regards,
Danny
Managed to solve the issue by lowering the timeout used for the internal clock caching in gstnetclientclock.c. It is currently set to 60 s before the internal clock is unreffed. By lowering it to a very short value, the sync mechanism works as expected for our use case: if the sender/time provider reboots and starts up a new time provider, the newly spawned receiver netclientclock starts from a clean slate and synchronizes as expected, because it disposes of its internal clock immediately instead of after 60 s. With this fix our receiver pipeline works again.
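The caching behavior behind this can be illustrated with a toy model (this is not the actual gstnetclientclock.c code, and `ClockCache` is a name I made up): internal clocks are shared per remote address and kept around for a TTL, so a "new" netclientclock created within that window silently reuses the old, already-synced internal clock instead of syncing against the rebooted time provider.

```python
class ClockCache:
    """Toy model of an internal-clock cache: entries keyed by
    (address, port) are reused if requested again within `ttl`
    seconds; after that a fresh clock object is created."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.entries = {}  # (addr, port) -> (clock_obj, created_at)

    def get_clock(self, addr: str, port: int, now: float):
        key = (addr, port)
        entry = self.entries.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]      # cache hit: reuse the stale, already-synced clock
        clock = object()         # stand-in for a freshly created internal clock
        self.entries[key] = (clock, now)
        return clock
```

With a 60 s TTL, recreating the clock right after a sender reboot hits the cache and reuses the stale clock; with a very short TTL, the lookup misses and a fresh clock is created and synced from scratch.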
Regards, Danny |