Hi guys,
When comparing the rtph264pay and rtpmp4vpay plugins, I noticed a difference in the way RTP packets are filled. With H264, one RTP packet seems to contain at most one entire NAL unit (which can be split across packets if the MTU is too small). With rtpmp4vpay, the payloader keeps adding VOP frames to the RTP packet until the MTU would be exceeded or a non-VOP frame is received.
Therefore, with rtph264pay one RTP packet holds at most one frame, while with rtpmp4vpay one RTP packet can hold up to a whole GOP. Why this difference in behaviour? I'm asking because, in the MPEG-4 case, gathering several frames into one RTP packet can lead to delayed and/or laggy output, especially for live streaming. Of course, we can still play with the max-ptime property of the payloader to limit the maximum duration of the packet data and thus the number of frames per RTP packet, but by default the behaviours differ.
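For reference, both payloaders inherit max-ptime and mtu from the common RTP base payloader class; max-ptime is given in nanoseconds and defaults to -1 (no limit). A quick way to confirm this on your own build (just a sketch, the output formatting may differ between GStreamer versions):

# Both properties come from the RTP base payloader, so both elements expose them;
# the max-ptime blurb shows the unit (ns) and the default (-1 = unlimited).
gst-inspect-1.0 rtpmp4vpay | grep -E 'max-ptime|mtu'
gst-inspect-1.0 rtph264pay | grep -E 'max-ptime|mtu'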
To illustrate what I'm saying, here are two examples that show the delayed and laggy output in the MPEG-4 case (in the H264 case both output streams might be delayed by the encoding time, but they should stay synchronized):

- Case of MPEG-4:

gst-launch-1.0 udpsrc port=2222 ! application/x-rtp, media=video, payload=96, clock-rate=90000, encoding-name=MP4V-ES ! decodebin ! xvimagesink sync=false

gst-launch-1.0 v4l2src ! video/x-raw, format=YUY2, width=640, height=480, interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 ! videoconvert ! videorate ! avenc_mpeg4 ! tee name=t t. ! queue ! rtpmp4vpay config-interval=1 mtu=65507 ! udpsink host=127.0.0.1 port=2222 t. ! queue ! decodebin ! autovideosink

- Case of H264:

gst-launch-1.0 udpsrc port=2222 ! application/x-rtp, media=video, payload=96, clock-rate=90000, encoding-name=H264 ! decodebin ! xvimagesink sync=false

gst-launch-1.0 v4l2src ! video/x-raw, format=YUY2, width=640, height=480, interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 ! videoconvert ! videorate ! x264enc ! tee name=t t. ! queue ! rtph264pay config-interval=1 mtu=65507 ! udpsink host=127.0.0.1 port=2222 t. ! queue ! decodebin ! autovideosink

Note: For each case, play the two pipelines at the same time. I put a high MTU to demonstrate my point, but the same thing can happen even with the default MTU if the frames are small enough.
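As a minimal sketch of the max-ptime workaround (assuming the 30 fps rate set above; max-ptime is in nanoseconds, so one frame lasts roughly 33333333 ns), the MPEG-4 sender could become:

gst-launch-1.0 v4l2src ! video/x-raw, format=YUY2, width=640, height=480, interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 ! videoconvert ! videorate ! avenc_mpeg4 ! tee name=t t. ! queue ! rtpmp4vpay config-interval=1 mtu=65507 max-ptime=33333333 ! udpsink host=127.0.0.1 port=2222 t. ! queue ! decodebin ! autovideosink

With the cap at one frame duration the payloader should flush a packet after every VOP instead of accumulating a whole GOP, even with a huge MTU.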
Cheers, Paul HENRYS
Hi,
The only reason the H.264 payloader doesn't merge NAL units into a single packet is that no one has done the work to implement it intelligently.

Olivier
Hi Olivier,
Thanks for your answer, that clarifies the difference in behaviour. So, given that right now the H.264 payloader does not merge NAL units into a single RTP packet, it is important for live streaming to pay attention to the max-ptime parameter so that only one frame ends up in each RTP packet; as I was saying, if the MTU is big or the frames are small, the payloader can otherwise produce laggy and/or delayed output.
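One way to check what the payloader actually sends (a rough sketch) is to slip an identity element between the payloader and udpsink and run gst-launch-1.0 with -v: with silent=false the identity prints one "chain" message per buffer, so you can count the RTP packets per frame and read their sizes and timestamps:

gst-launch-1.0 -v v4l2src ! video/x-raw, format=YUY2, width=640, height=480, interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 ! videoconvert ! videorate ! avenc_mpeg4 ! rtpmp4vpay config-interval=1 mtu=65507 ! identity silent=false ! udpsink host=127.0.0.1 port=2222

If several frames end up in one packet you should see fewer, larger buffers per second than the capture framerate; with max-ptime capped to one frame duration the buffer rate should match the framerate again.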
Cheers, Paul