Hello,
I am trying to simulate delay in UDP streaming using the Linux tool netem <https://wiki.linuxfoundation.org/networking/netem>. I use the following command to induce a latency of 1000 ms (on the iMX6 side):

    tc qdisc add dev eth0 root netem delay 1000ms

This delay is observed when I ping between the two devices, which are connected via a LAN cable. My devices are an iMX6 (server) and an Ubuntu PC (client). I am transmitting video packets from the iMX6 to the PC, with the 1000 ms delay configured on the eth0 port of the iMX6:

    gst-launch-1.0 -v imxv4l2videosrc device=/dev/video1 ! imxvpuenc_h264 bitrate=500 ! h264parse ! rtph264pay ! udpsink host=192.168.1.11 port=xxxx

At the Ubuntu PC, I have configured a timeout of 10 ms (10000000 ns) on the UDP source:

    udpsrc port=xxxx timeout=10000000 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink

I install a message callback on the bus for the udpsrc element, which is invoked when the timeout message is detected. Pseudocode (Python/GStreamer) as follows:

    bus = self.pipeline.get_bus()
    bus.add_signal_watch()  # needed for 'message::...' signals to fire
    bus.connect('message::element', self.on_timeout)

    def on_timeout(self, bus, msg):
        strct = msg.get_structure()
        if strct.has_name("GstUDPSrcTimeout"):
            print("udp source timeout detected")

However, when I transmit video data over UDP from the iMX6 to the PC, I observe a delay of 1000 ms in the video rendered at the PC. When I completely stop sending packets from the iMX6, I can see the timeout occurring and the callback being called. So the timeout fires when there are no packets at all, but not when packets arrive with a delay. I was expecting to receive a notification every 100 ms when the packets arrive late, i.e. while the iMX6 board transmits video packets only every 1000 ms. I understand that right now there is a continuous stream of packets at the UDP socket, which is perhaps why the timeout never triggers. Is there a way to make the UDP source report a timeout whenever no packet is received within 100 ms?

To sum up the observations:

1. The timeout is called when the iMX6 completely stops sending UDP packets, i.e. when I exit the pipeline using ctrl+c on the iMX6 side.
2. The timeout is not called when the iMX6 sends delayed UDP packets.

My further tests will include packet losses, delays, and jitter combined, and in such cases, whenever no packet is received within 100 ms (or some X ms), I want a callback to be invoked. Does anyone have any pointers on this issue?

Regards.
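For reference, a runnable version of the receiver-side timeout handler sketched above (a minimal sketch: the port and caps are illustrative assumptions, and caps are set on udpsrc so that rtph264depay can negotiate):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Port and caps are placeholders; timeout is in nanoseconds (10 ms here).
    pipeline = Gst.parse_launch(
        'udpsrc port=5000 timeout=10000000 '
        'caps="application/x-rtp,media=video,encoding-name=H264,'
        'clock-rate=90000,payload=96" '
        '! rtph264depay ! h264parse ! avdec_h264 ! autovideosink')

    def on_timeout(bus, msg):
        # udpsrc posts an element message named GstUDPSrcTimeout on the bus
        # whenever no packet arrives within the configured timeout.
        strct = msg.get_structure()
        if strct is not None and strct.has_name('GstUDPSrcTimeout'):
            print('udp source timeout detected')

    bus = pipeline.get_bus()
    bus.add_signal_watch()              # without this, no bus signals fire
    bus.connect('message::element', on_timeout)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()

For the later combined tests, netem can add delay, jitter, and loss in a single qdisc; the jitter and loss values below are illustrative:

    tc qdisc change dev eth0 root netem delay 1000ms 200ms loss 5%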
On Fri, Oct 5, 2018 at 07:11, vk_gst <[hidden email]> wrote:

Hello,

The kernel driver will keep a small backlog of packets, even if your app is not running. So if your sender was started more than a second before the receiver, this behaviour makes sense. During streaming, only a gap of more than 10 ms between two packets can cause a timeout. Note that the timeout is programmed on the recv call, so it excludes the time spent pushing downstream (delay, parse, decode). This can easily double your configured timeout.
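As an aside on the startup backlog (an assumption about the cause, not a confirmed fix): udpsrc exposes a buffer-size property that sets the kernel receive buffer (SO_RCVBUF) in bytes, so the amount of queued data can be bounded without touching the driver, e.g.:

    udpsrc port=xxxx buffer-size=65536 timeout=10000000 ! ...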
Hello,

Exactly, the behavior was observed when the sender was started before the receiver. Also, when the order was reversed (receiver started before the sender), the timeout message was seen. So in the case where the connection between sender and receiver is strong in the initial stage, and then the distance increases (considering a wireless link), which might lead to packet drops, won't the udpsrc element detect this timeout?

Also, I don't think it's wise to tune the kernel driver to clear the backlog of packets. I was thinking of taking the udpsrc element and writing a new plugin that could solve this particular issue. What could be the possible ways to detect such a network link breakdown?

Regards
On Tue, Oct 9, 2018 at 07:57, vk_gst <[hidden email]> wrote:
In the RTP world, there is a second protocol called RTCP, which provides a bidirectional side channel. It's used mostly for feedback and retransmission. In libwebrtc, there is a custom feedback protocol developed by Google that uses round-trip time variation to predict changes in network conditions. This is not yet implemented in GStreamer, but it seems like a good research direction for your use case.
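For reference, a minimal receiver-side sketch of an RTP session with an RTCP back-channel using rtpbin (the ports, caps, and the sender address 192.168.1.10 are illustrative assumptions; the sender needs matching send_rtp/send_rtcp wiring):

    gst-launch-1.0 rtpbin name=rtpbin \
        udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtpbin.recv_rtp_sink_0 \
        rtpbin. ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink \
        udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
        rtpbin.send_rtcp_src_0 ! udpsink host=192.168.1.10 port=5005 sync=false async=false

With a session like this, statistics derived from SR/RR reports (jitter, packets lost, round-trip time) can also be read programmatically; for instance, the rtpsession element inside rtpbin exposes a read-only "stats" property.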
Hello,
I have been reading into the Google Congestion Control (GCC) algorithm and it seems to be a viable option for this use case. Given that GCC implements both sender and receiver logic to tune and adapt to bandwidth constraints, it looks like a rather complicated, full-fledged mechanism for a device like the iMX6 on the sender side. My main goal is to detect a video freeze (failure to receive the current video frame), since the target application is live video footage from a drone. I am therefore considering implementing a subset of GCC only on the receiver side (Ubuntu PC) that notifies the application in case of a video freeze, i.e. when no new video frames are received.

However, I have a few questions that I could not find relevant information on:

1. If the sender iMX6 transmits at 30 fps and I expect to receive a frame every 33 ms, can I consider each buffer pushed by udpsrc to be one frame, so that I receive 30 buffers per second at the receiver? Is this understanding correct? (Each pushed buffer could be sensed by a probe callback, which could be vital information.)
2. How can I obtain the RTP timestamps of each received buffer, to build an estimation algorithm in Python? Or should I stick to writing a GStreamer plugin for the algorithm? (See the probe sketch below.)
3. Also, with regard to RTCP in GStreamer, are there any APIs that can be used to read the metrics of the feedback?

Regards.
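On question 2, a minimal sketch of reading per-packet RTP sequence numbers and timestamps from a Python pad probe (the pipeline string, port, and caps are illustrative assumptions; note also that one RTP packet is not necessarily one video frame, since a large encoded frame may be split across several packets):

    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GstRtp', '1.0')
    from gi.repository import Gst, GstRtp, GLib

    Gst.init(None)

    pipeline = Gst.parse_launch(
        'udpsrc name=src port=5000 '
        'caps="application/x-rtp,media=video,encoding-name=H264,'
        'clock-rate=90000,payload=96" '
        '! rtph264depay ! h264parse ! avdec_h264 ! autovideosink')

    def rtp_probe(pad, info):
        # Runs once per buffer pushed by udpsrc (one RTP packet per buffer).
        buf = info.get_buffer()
        ok, rtp = GstRtp.RTPBuffer.map(buf, Gst.MapFlags.READ)
        if ok:
            # 16-bit sequence number and 32-bit RTP timestamp (90 kHz clock)
            print('seq=%d rtptime=%u' % (rtp.get_seq(), rtp.get_timestamp()))
            rtp.unmap()
        return Gst.PadProbeReturn.OK

    pipeline.get_by_name('src').get_static_pad('src') \
            .add_probe(Gst.PadProbeType.BUFFER, rtp_probe)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()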
Hi Nicolas and others,
Please bear with the long post.

With regard to your suggestion to look into RTCP for this use case: I found that RTCP sends statistics packets only for a fraction of the total data transferred. Maybe this is intended for detecting congestion on the network over a period of time. The following approach may sound goofy or impractical, but I was wondering whether:

1. I can get the timestamps of the buffers sent from the Tx, and
2. use these timestamps, in addition to the RTP sequence numbers of the buffers, at the Rx to predict network congestion or a delay in receiving the next buffers.

This would mean a timestamp for each video frame (or for each RTP packet).

Why I want to do this:

1. I want to insert an IMU frame (custom visualization) in between the live video frames whenever the network link breaks, congestion is detected, or a delay is predicted.
2. The worst case would be to insert an IMU frame between alternate video frames, e.g. for a 30 fps video with a 33 ms interval: V|I|V|I|V|I|V|I|V|I|V|I|V|I|, where V is a video frame and I is an IMU frame. To be practical, though, I am hoping to insert an IMU frame every 100 ms, in case the link breakdown can be detected at that precision.

Now I have no idea if it makes sense to take the timestamps of each frame/buffer, calculate the delay at the Rx, predict the congestion, and then insert an IMU frame. Has this been done by anyone in any application so far? How is the synchronisation between IMU and video achieved in mobile phones?

1. Can anyone point me to some direction, or to current applications that use some of these features?
2. I tried accessing the PTS and DTS of the frames, but these timestamps do not give me the time at which the packets were sent from the Tx, so I cannot calculate the delay in receiving them (timestamp Rx - timestamp Tx). Which timestamps should I be looking at on the Rx side to measure the delay, given that I need a common clock reference at both Rx and Tx? (A receiver-only estimate that sidesteps the common-clock problem is sketched below.)
3. RTCP does provide an absolute clock, but the RR/SR packets are only sent for a fraction of the total packets, so that does not cover individual frames either.
4. Please feel free to add any other suggestions or approaches for handling such an application.
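On question 2, one receiver-only possibility (a sketch in the spirit of GCC's arrival-time filter, not an existing GStreamer API): no common absolute clock is needed if only differences are compared, because a constant clock offset between Tx and Rx cancels out. For packet i with local arrival time t[i] and send timestamp T[i] (the RTP timestamp divided by the clock rate), the delay gradient d(i) = (t[i] - t[i-1]) - (T[i] - T[i-1]) grows when queueing delay builds up, independent of the offset between the two clocks.

    # Sketch under the assumptions above; arrival times come from the
    # receiver's monotonic clock, rtptime from the RTP header (90 kHz video).
    CLOCK_RATE = 90000

    class DelayGradient:
        def __init__(self):
            self.prev_arrival = None
            self.prev_rtptime = None

        def update(self, arrival_ns, rtptime):
            """Return inter-arrival delay variation in seconds, or None."""
            d = None
            if self.prev_arrival is not None:
                dt_arrival = (arrival_ns - self.prev_arrival) / 1e9
                # mask to 32 bits so RTP timestamp wraparound is handled
                dt_send = ((rtptime - self.prev_rtptime) & 0xFFFFFFFF) / CLOCK_RATE
                d = dt_arrival - dt_send  # > 0 means delay is growing
            self.prev_arrival = arrival_ns
            self.prev_rtptime = rtptime
            return d

Fed from a pad probe like the one shown earlier in the thread, a threshold on a smoothed d(i), or on the time since the last decodable frame, could then trigger the IMU-frame insertion.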