Debugging video jerking problem


Wolfgang Grandegger
Hello,

I'm currently debugging a jerking problem. When I move from 50 to 60
FPS, the displayed video starts jerking. Just a little, but it's
clearly visible when the camera pans. Here is my pipeline:

         gst-launch-1.0 -v \
          udpsrc port=50004 buffer-size=180000000 do-timestamp=1 \
            caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
            encoding-name=(string)JPEG, payload=(int)26, framerate=(fraction)60/1" \
          ! rtpjitterbuffer latency=20 \
          ! rtpjpegdepay \
          ! vaapijpegdec \
          ! timeoverlay text="A Text ..." \
          ! tee name=t \
          t. ! queue ! vaapisink \
          t. ! queue ! vaapih264enc ! mp4mux ! filesink location=/tmp/test.mp4

I have connected signal handlers for underruns and overruns to both the
display and the record queue. What I see when the jerking starts is that
I get underruns on the display (vaapisink) queue. Does that mean that the
packets do not arrive in time due to the pre-processing time? The jerking
goes away if I remove the tee for recording or run the video at just 50 FPS.
Any ideas what could cause the jerking and how it can be debugged or even
avoided?

Any help is appreciated.

Thanks,

Wolfgang.

Re: Debugging video jerking problem

Nicolas Dufresne
On Thursday, 6 December 2018 at 14:59 +0100, Wolfgang Grandegger wrote:

> [...]
> Any ideas what could cause the jerking and how it can be debugged or
> even avoided?
My guess would be that when going from 60 to 50 FPS you increase the
latency. This is because H264 decoding latency is measured in frames.
So the longer the duration of a frame, the longer the latency. Ideally,
you would, through your RTP signalling, make the caps change when the
FPS changes so that the latency can be recalculated. You can also use
pipeline latency, or processing-deadline on sinks.


Re: Debugging video jerking problem

Wolfgang Grandegger
Hello,

On 06.12.2018 at 19:58, Nicolas Dufresne wrote:

> [...]
> My guess would be that when going from 60 to 50 FPS you increase the
> latency. This is because H264 decoding latency is measured in frames.
Not sure if I understand you correctly. I do not change the FPS on the
fly. I convert my MJPEG video from 50 to 60 FPS on the remote side with

  $ ffmpeg ... -filter:v "setpts=0.83333333*PTS" -r 60 ...

and then send it over the network. The video with 50 FPS is recorded
and displayed fine. The one with 60 FPS causes jerking on the display.
The jerking is also visible in the recorded video... hm, can I assume
that the "rtpjitterbuffer" calculated jerky timestamps then?
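One way to check might be to log the jitterbuffer's own timestamp
handling and look at the PTS values it produces (I'm assuming its
debug category matches the element name):

  GST_DEBUG=rtpjitterbuffer:6 gst-launch-1.0 ... 2> /tmp/rtpjb.log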

And I am using H264 encoding to record the video, not decoding.

> So the longer the duration of a frame, the longer the latency. Ideally,
> you would, through your RTP signalling, make the caps change when the
> FPS changes so that the latency can be recalculated. You can also use
> pipeline latency, or processing-deadline on sinks.

OK, will look into that.

Wolfgang.

Re: Debugging video jerking problem

Ian Davidson
You start with a video recorded at 50 fps and convert it to 60 fps. So,
for every 5 frames of the original video, something has to 'invent' 1
extra frame. It is never going to be 'perfect'.


On 06/12/18 19:50, Wolfgang Grandegger wrote:

> Not sure if I understand you correctly. I do not change the FPS on the
> fly. I convert my MJPEG video from 50 to 60 FPS on the remote side with
>
>    $ ffmpeg ... -filter:v "setpts=0.83333333*PTS" -r 60 ...
>
> [...]


Re: Debugging video jerking problem

Wolfgang Grandegger
Hello,

Hmm, the long story: my camera records AVCHD video at 1920x1080 pixels
and 50 frames/s. I then convert it to a 50 FPS raw-video AVI using:

  $ ffmpeg -i input.mts -vcodec rawvideo -y output-50fps.avi

To get the same video running at 60 FPS I do:

  $ ffmpeg -i input.mts -filter:v "setpts=0.83333333*PTS" \
         -r 60 -vcodec rawvideo -y output-60fps.avi

I assume that this video uses the same raw frames, just with different
timestamps. What am I missing?
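To convince myself, I could count the decoded frames in both files; if
the counts match, nothing had to be invented (a sketch, using the file
names from above):

  $ ffprobe -v error -select_streams v:0 -count_frames \
        -show_entries stream=nb_read_frames -of csv output-50fps.avi
  $ ffprobe -v error -select_streams v:0 -count_frames \
        -show_entries stream=nb_read_frames -of csv output-60fps.avi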

Thanks,

Wolfgang.

On 07.12.2018 at 10:13, Ian Davidson wrote:

> You start with a video recorded at 50 fps and convert it to 60 fps. So,
> for every 5 frames of the original video, something has to 'invent' 1
> extra frame. It is never going to be 'perfect'.
> [...]
>

Re: Debugging video jerking problem

Nicolas Dufresne


On Fri, 7 Dec 2018 at 05:11, Wolfgang Grandegger <[hidden email]> wrote:
> [...]
> I assume that this video uses the same raw frames, just with different
> timestamps. What am I missing?

This could also be the limit of what your receiving socket can carry with its current settings. It costs nothing to try increasing it; see the buffer-size property on udpsrc and the net.core.rmem_max sysctl of your Linux kernel (if Linux is your host OS).
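For example (the size is just an illustration; the sysctl needs root):

  # raise the kernel limit on socket receive buffers
  sysctl -w net.core.rmem_max=33554432
  # and request a matching buffer on the receiving element
  gst-launch-1.0 udpsrc port=50004 buffer-size=33554432 ...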



Re: Debugging video jerking problem

Wolfgang Grandegger
Hello,

On 07.12.2018 at 14:43, Nicolas Dufresne wrote:

> [...]
> This could also be the limit of what your receiving socket can carry
> with its current settings. It costs nothing to try increasing it; see
> the buffer-size property on udpsrc and the net.core.rmem_max sysctl of
> your Linux kernel (if Linux is your host OS).

I already doubled both values. It does not help.

I debugged the sink with "GST_DEBUG=basesink:6". If I look at the
timestamps of lines like:

  0:00:03.215045610 basesink gstbasesink.c:2921:gst_base_sink_is_too_late:<vaapisink0> object was scheduled in time

and calculate the time difference to the previous line in milliseconds, I get:

  16.631028
  33.438903
  233.507839
  16.668671
  ...
  16.941117
  33.009663
  183.428079
  16.470837
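These deltas come from a small awk sketch along these lines (assuming
the log timestamp is the first field of each matching line):

  gst-launch-1.0 ... 2>&1 | awk '/object was scheduled in time/ {
      split($1, a, "[:.]")                # 0:00:03.215045610 -> h, m, s, ns
      t = (a[1]*3600 + a[2]*60 + a[3]) * 1000 + a[4] / 1e6   # milliseconds
      if (prev) printf "%f\n", t - prev
      prev = t
  }'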

This is the jerking I see, I guess. With 50 FPS, it's always 20 ms.
What I also notice at 60 FPS is that not all frames are recorded,
just 42.96 fps:

  # ffprobe /tmp/rec0.mp4
 
      encoder         : VA-API H264 encoder
    Duration: 00:00:43.20, start: 0.000000, bitrate: 33813 kb/s
      Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 33809 kb/s, 42.96 fps, 60 tbr, 10k tbn, 20k tbc (default)
      Metadata:
        creation_time   : 2018-12-07T17:18:53.000000Z
        handler_name    : VideoHandler
  [h264 @ 0x62dae0] time_scale/num_units_in_tick invalid or unsupported (0/1)

It looks like there is a resource limit somewhere :(.
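To see whether a single thread maxes out a core, something like this
might help (assuming one gst-launch-1.0 process is running):

  top -H -p $(pidof gst-launch-1.0)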

Wolfgang.

Re: Debugging video jerking problem

Wolfgang Grandegger
On 07.12.2018 at 18:25, Wolfgang Grandegger wrote:

> [...]
> It looks like there is a resource limit somewhere :(.
I now tried the pipeline latency tracer with GST_DEBUG="GST_TRACER:7"
GST_TRACERS=latency. It shows me:

0:00:10.824079787 ... latency, src=(string)udpsrc0_src, sink=(string)display sink_sink, time=(guint64)78779611, ts=(guint64)10824035191;
0:00:10.837392036 ... latency, src=(string)udpsrc0_src, sink=(string)filesink0_sink, time=(guint64)52504281, ts=(guint64)10837353922;
0:00:10.843501996 ... latency, src=(string)udpsrc0_src, sink=(string)display sink_sink, time=(guint64)78006259, ts=(guint64)10843436324;
0:00:10.857081850 ... latency, src=(string)udpsrc0_src, sink=(string)filesink0_sink, time=(guint64)51733514, ts=(guint64)10857042971;

So it takes approx. 80 ms to display and 50 ms to record a frame. This
is really cool stuff! I can now measure the effect of recording and the
text overlay, but it does not tell me the resource usage.

For that I tried "GST_TRACERS=stats;rusage". I have attached the output
of "gst-stats-1.0". Does it tell me the CPU usage? Are these numbers
documented somewhere?
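For reference, the invocation looks roughly like this (the log path is
just an example):

  GST_DEBUG="GST_TRACER:7" GST_TRACERS="stats;rusage" \
      GST_DEBUG_FILE=/tmp/trace.log gst-launch-1.0 ...
  gst-stats-1.0 /tmp/trace.log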

Wolfgang.


Re: Debugging video jerking problem

Wolfgang Grandegger
Hello,

I made some progress on this topic...

On 08.12.2018 at 19:34, Wolfgang Grandegger wrote:

> [...]
> For that I tried "GST_TRACERS=stats;rusage". I have attached the output
> of "gst-stats-1.0". Does it tell me the CPU usage? Are these numbers
> documented somewhere?

The tracing output of "rusage" analysed with "gst-plot-traces.sh" didn't
give meaningful results. It seems that it cannot deal properly with
multiple CPUs. Is this a known issue?

Anyway, "GST_TRACERS=latency" and "GST_DEBUG=basesink:5" helped me
understand a bit better what's going on. With 50 FPS, the latencies
reported by the latency tracer are rather constant at around 80 ms,
while with 60 FPS they fluctuate much more. Also, if I look at the
"rendering" lines of the "GST_DEBUG=basesink:5" log:

00:03.665643643  ... basesink gstbasesink.c:3538:gst_base_sink_chain_unlocked:<display sink> rendering object 0x7f6c8c0752a0

I see that for 50 FPS, the difference between the timestamps is always
close to 20 ms:

  19.986890
  19.950369
  20.245006
  19.663664
  20.054239
  19.919339

while at 60 FPS the time difference is often well above 16.7 ms:

  16.663874
  16.182956
  17.571549
  27.771846
  16.442217
  28.842019

How does the default GStreamer pipeline compensate for the processing latency?

Does it adapt dynamically?

Wolfgang.

Re: Debugging video jerking problem

Nicolas Dufresne
On Tuesday, 11 December 2018 at 09:41 +0100, Wolfgang Grandegger wrote:

> [...]
> How does the default GStreamer pipeline compensate for the processing
> latency? Does it adapt dynamically?
It's not adaptive. You can increase it in your application using the
processing-deadline property on the sink (it's a recent property).
Increasing that value will make the queues a bit more efficient at
hiding the spikes and the processing load.
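A quick way to check whether a given build exposes it (a sketch; the
property would show up on the sink element):

  gst-inspect-1.0 vaapisink | grep -i processing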


Re: Debugging video jerking problem

Wolfgang Grandegger
Hello,

On 11.12.2018 at 16:35, Nicolas Dufresne wrote:

> [...]
> It's not adaptive. You can increase it in your application using the
> processing-deadline property on the sink (it's a recent property).
> Increasing that value will make the queues a bit more efficient at
> hiding the spikes and the processing load.
Should I see that property with "gst-inspect-1.0"? I'm using "vaapisink"
and GStreamer version 1.14.4.

I just ran the pipeline on a more powerful x86 system... and the jerking
is gone! It's an Atom E3950 @ 1.6 GHz vs. an i7-6700TE @ 2.4 GHz.

Wolfgang.