How does gstreamer interface with H264 hardware encoders and create videos?


How does gstreamer interface with H264 hardware encoders and create videos?

simo-zz
Hello,

I am using an embedded board which has a hardware H264 encoder, and I am testing video generation both with gst-launch and with C++ code I wrote myself.

Comparing my code's results to the gst-launch results, it is clear that gstreamer applies additional processing compared to what I get from the hardware encoder buffer.
The first obvious difference is that it generates an mp4 video, while I can only generate an h264 stream, even though I am not using an additional mp4 muxer in my code.

For example, the image quality of the gst-launch video is noticeably better, the video has the correct framerate whereas the video I obtain appears slightly "accelerated", and in addition the timestamp (minutes - seconds) is present, while it is missing in the video I obtain from my C++ code.

So I suspect that gstreamer doesn't use the hardware encoder.
How can I be sure that gstreamer uses the hardware encoder instead of an h264 software library, and how can I know in real time what V4L2 settings gstreamer applies to the encoder?

Thanks.
Regards,
Simon

 


RE: How does gstreamer interface with H264 hardware encoders and create videos?

Rand Graham-2

Hello,

 

It might help if you mention which embedded board you are using.

 

In order to use custom hardware from a vendor such as nVidia, you would compile gstreamer plugins provided by the vendor and then specify them in your pipeline.
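
One way to check which h264 elements are actually available, and what a given element exposes (a generic check, not specific to any board; the element name below assumes the v4l2 plugin registered an encoder for your device), is gst-inspect:

gst-inspect-1.0 | grep -i 264
gst-inspect-1.0 v4l2h264enc

You can also run the pipeline with a debug level set, for example GST_DEBUG=v4l2*:5 gst-launch-1.0 ..., to watch what the v4l2 elements negotiate and configure at runtime.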

 

Regards,

Rand

 


Re: RE: How does gstreamer interface with H264 hardware encoders and create videos?

simo-zz
Hello Rand,

You are right. The board is a Dragonboard 410c by 96boards:

https://developer.qualcomm.com/hardware/snapdragon-410/tools

96boards, in their release notes

http://releases.linaro.org/96boards/dragonboard410c/linaro/debian/latest/

write that the gstreamer pipeline uses the video encoder.
But as I said before, I noticed notable differences in the video results, which make me doubt that gstreamer really uses the encoder.

The C/C++ code I am using is based on this one:

stanimir.varbanov/v4l2-decode.git - Unnamed repository

I basically changed Varbanov's code to capture the camera frames and feed the encoder. My code works in the sense that I can record an h264 video (not mp4 as gstreamer does), but I noticed the issues I described in my previous mail.

So what additional processing does gstreamer apply on top of the hardware video encoding?

Regards,
Simon


Re: RE: How does gstreamer interface with H264 hardware encoders and create videos?

simo-zz

Sorry,
 
The code reference I gave is wrong.
The code I am using is based on this one:




Simon

RE: RE: How does gstreamer interface with H264 hardware encoders and create videos?

Rand Graham-2
In reply to this post by simo-zz

Hello,

 

Did you recompile according to the release notes? Are you using the pipeline shown in the release notes?

 

To know what is being done by gstreamer, you should copy and paste the exact pipeline you are using.

 

The release notes show this pipeline:

 

gst-launch-1.0 -e v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1280,height=960 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4

 

It looks like this pipeline is using v4l2h264enc to do the h.264 encoding.

 

It is then using a parser and a muxer and a filesink to create an mp4 file. What this does is use an mp4 container that contains an h264 video track.
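
To make the difference concrete (an untested sketch based on the release-notes pipeline above), only the tail of the pipeline changes between a bare h264 stream and an mp4 file:

gst-launch-1.0 -e v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1280,height=960 ! v4l2h264enc ! h264parse ! filesink location=video.h264

gst-launch-1.0 -e v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1280,height=960 ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4

The first writes the raw h264 elementary stream (what your C++ code produces); the second wraps the same stream in an mp4 container, which is also where per-frame timestamps and the nominal framerate live. That is likely why your raw stream plays back "accelerated" and without a time display.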

 

It looks like the h264 encoder takes some parameters. You may be able to get better video quality by adjusting the parameters of the h264 encoder. For example, there is typically a “high” setting that can be used for h264 quality. You might also try increasing the bitrate to see if that improves quality. (The height and width dimensions seem odd to me. I would expect something like 1280x720 or 1920x1080)
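
For example, a higher-bitrate variant of the same pipeline might look like this (an untested sketch; the control names come from the release notes and the accepted values depend on the driver):

gst-launch-1.0 -e v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1280,height=960 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=8000000;" ! h264parse ! mp4mux ! filesink location=video.mp4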

 

Regards,

Rand

 

 

 


Re: RE: RE: How does gstreamer interface with H264 hardware encoders and create videos?

simo-zz
Hello Rand,

Yes I recompiled the libs as 96boards suggests.
The pipeline I use is:

gst-launch-1.0 -v -e v4l2src device=/dev/video3 ! videoconvert! video/x-raw,width=1920,height=1080 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4

and this pipeline creates a quite good video.
In this case the output is:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstTee:t.GstTeePad:src_0: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/v4l2h264enc:v4l2h264enc0.GstPad:src: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)1, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)1, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstTee:t.GstTeePad:src_1: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
Redistribute latency...
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstTee:t.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstTee:t.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/v4l2h264enc:v4l2h264enc0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)1, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, parsed=(boolean)true, codec_data=(buffer)0164000affe100176764000aacd201e0089a100fe502b3b9aca008da1426a001000568ce06e2c0
/GstPipeline:pipeline0/GstMP4Mux:mp4mux0.GstPad:video_0: caps = video/x-h264, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)1, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1, interlace-mode=(string)progressive, colorimetry=(string)2:4:7:1, parsed=(boolean)true, codec_data=(buffer)0164000affe100176764000aacd201e0089a100fe502b3b9aca008da1426a001000568ce06e2c0
/GstPipeline:pipeline0/GstMP4Mux:mp4mux0.GstPad:src: caps = video/quicktime, variant=(string)iso
/GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/quicktime, variant=(string)iso


Otherwise, if I use the following pipeline to extract raw frames while I am recording the video

sudo gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert! video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! multifilesink location=file%1d.raw

the resulting video is quite bad: not all frames are recorded, and the sequence remains locked on a single frame for a few seconds. Clearly extracting frames while recording does not work well here. If gstreamer really is using the h264 encoder, why does it give these problems in this case?
I doubt it does, because my C/C++ code doesn't generate a bad video like this one.

>> It is then using a parser and a muxer and a filesink to create an mp4 file.

This is an interesting point. So in this case gstreamer would use an mp4 library on the h264-encoded data?
This makes sense to me.

Simon

Re: How does gstreamer interface with H264 hardware encoders and create videos?

Nicolas Dufresne-5
In reply to this post by Rand Graham-2


On Wed., Apr. 25, 2018, 17:42, Rand Graham <[hidden email]> wrote:

> [earlier messages quoted in full; snipped]
>
> How can I be sure that gstreamer uses the hardware encoder instead of an h264 software library and how can I know in real time what are the V4L2 settings that gstreamer applies to the encoder ?

As you mention V4L2 settings, I suppose you have hardware with V4L2 M2M codecs, or are you referring to your camera settings?
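
If it is a V4L2 M2M encoder exposed as a /dev/video node, one way to watch the controls from outside GStreamer is v4l2-ctl from v4l-utils (a generic sketch; /dev/video3 is the device from your pipeline, though the encoder's M2M node may be a different /dev/videoN):

v4l2-ctl --device=/dev/video3 --list-ctrls

Run while the pipeline is live, it prints the current values of the device's controls (bitrate, profile, and so on), which covers the "what V4L2 settings does gstreamer apply" part of your question.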

 


RE: RE: RE: How does gstreamer interface with H264 hardware encoders and create videos?

Rand Graham-2
In reply to this post by simo-zz

Hello,

 

I just wanted to comment on this pipeline:

 

sudo gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert! video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! multifilesink location=file%1d.raw

 

This is a rather complicated pipeline with a couple of potential bottlenecks. The main bottleneck I would worry about is the file system.

 

Beyond potential bottleneck issues, I would say the following.

 

1) I don't know exactly what you are expecting as far as what you call raw video. The pipeline has a videoconvert element in it, so to me you would not be capturing raw camera output but rather the output of videoconvert.

2) If I understand the pipeline, the video saved by the multifilesink is not using the h264 encoder, because the tee comes before the h264 encoder; the filesink branch is the one getting the encoder's output. Your original email asked whether gstreamer was using the hardware encoder. Based on the pipeline above, it appears to me that one branch uses the hardware encoder and the other does not (see the sketch below).
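
If the goal is to record the encoded video and also tap the stream without bypassing the encoder, one option is to move the tee after the encoder so both branches carry its output (an untested sketch based on the pipeline above):

gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! tee name=t ! queue ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! filesink location=video.h264

Here one branch muxes into mp4 and the other writes the raw h264 byte-stream, so both come from the hardware encoder.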

 

 

Regards,

Rand

 


Re: RE: RE: How does gstreamer interface with H264 hardware encoders and create videos?

Nicolas Dufresne-5
In reply to this post by simo-zz
On Wednesday, April 25, 2018 at 22:10 +0000, [hidden email] wrote:
> Hello Rand,
>
> Yes I recompiled the libs as 96boards suggests.
> The pipeline I use is:
>
> gst-launch-1.0 -v -e v4l2src device=/dev/video3 ! videoconvert! video/x-raw,width=1920,height=1080 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4

I strongly discourage using extra-controls to select the profile.
Please report this to 96boards so they can fix their wiki. Instead, the
profile should be selected using a caps filter downstream of the
encoder; this applies to both profile and level. Note that neither the
Venus driver nor GStreamer validates the profile/level combination, so
at the moment you can easily produce an invalid stream.
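
A minimal sketch of the caps-filter approach (untested; the bitrate stays in extra-controls, which was not the part being discouraged):

gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! v4l2h264enc extra-controls="controls,video_bitrate=2000000;" ! video/x-h264,profile=high ! h264parse ! mp4mux ! filesink location=video.mp4

The video/x-h264,profile=high capsfilter downstream of the encoder is what selects the profile; a level can be requested the same way (e.g. level=(string)4).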

>
> and this pipeline creates a quite good video.
> In this case the output is:

That's good news.

>
> Setting pipeline to PAUSED ...
> Pipeline is live and does not need PREROLL ...
> Setting pipeline to PLAYING ...
> New clock: GstSystemClock
> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps =
> video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps
> = video/x-raw, width=(int)1920, height=(int)1080,
> format=(string)NV12, framerate=(fraction)30/1, interlace-
> mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-
> ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =
> video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstTee:t.GstTeePad:src_0: caps = video/x-raw,
> width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-
> raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/v4l2h264enc:v4l2h264enc0.GstPad:src: caps =
> video/x-h264, stream-format=(string)byte-stream,
> alignment=(string)au, profile=(string)high, level=(string)1,
> width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1
> /GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps =
> video/x-h264, stream-format=(string)byte-stream,
> alignment=(string)au, profile=(string)high, level=(string)1,
> width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1
> /GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-
> raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstTee:t.GstTeePad:src_1: caps = video/x-raw,
> width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> Redistribute latency...
> /GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-
> raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-
> raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstTee:t.GstPad:sink: caps = video/x-raw,
> width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstTee:t.GstPad:sink: caps = video/x-raw,
> width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps =
> video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink:
> caps = video/x-raw, width=(int)1920, height=(int)1080,
> format=(string)NV12, framerate=(fraction)30/1, interlace-
> mode=(string)progressive, colorimetry=(string)2:4:7:1, pixel-aspect-
> ratio=(fraction)1/1
> /GstPipeline:pipeline0/v4l2h264enc:v4l2h264enc0.GstPad:sink: caps =
> video/x-raw, width=(int)1920, height=(int)1080, format=(string)NV12,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, pixel-aspect-ratio=(fraction)1/1
> /GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps =
> video/x-h264, stream-format=(string)avc, alignment=(string)au,
> profile=(string)high, level=(string)1, width=(int)1920,
> height=(int)1080, pixel-aspect-ratio=(fraction)1/1,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, parsed=(boolean)true,
> codec_data=(buffer)0164000affe100176764000aacd201e0089a100fe502b3b9ac
> a008da1426a001000568ce06e2c0
> /GstPipeline:pipeline0/GstMP4Mux:mp4mux0.GstPad:video_0: caps =
> video/x-h264, stream-format=(string)avc, alignment=(string)au,
> profile=(string)high, level=(string)1, width=(int)1920,
> height=(int)1080, pixel-aspect-ratio=(fraction)1/1,
> framerate=(fraction)30/1, interlace-mode=(string)progressive,
> colorimetry=(string)2:4:7:1, parsed=(boolean)true,
> codec_data=(buffer)0164000affe100176764000aacd201e0089a100fe502b3b9ac
> a008da1426a001000568ce06e2c0
> /GstPipeline:pipeline0/GstMP4Mux:mp4mux0.GstPad:src: caps =
> video/quicktime, variant=(string)iso
> /GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps =
> video/quicktime, variant=(string)iso
>
> Otherwise, if I use the following pipeline to extract raw frames while I am recording the video
>
> sudo gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert!
There is a syntax error here: a missing space after videoconvert. Copy-paste error?

> video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! multifilesink location=file%1d.raw

multifilesink does not support GstVideoMeta, so it's very likely that v4l2src is forced to copy the frames into "standard" stride/offset buffers. This is very CPU intensive, which can lead to frames being dropped. I've recently added fakevideosink to work around similar issues; it's a bit annoying, but maybe we need something similar for filesink/multifilesink.
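
To check whether the raw branch is what hurts the recording, one quick test (an untested sketch; fakevideosink requires a GStreamer build that includes it, from gst-plugins-bad) is to replace multifilesink with fakevideosink, which accepts GstVideoMeta and avoids the copy:

gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! fakevideosink

If the mp4 comes out clean with this pipeline, the frame copies for multifilesink were the bottleneck.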

>
> the resulting video is quite bad: not all frames are recorded, and the sequence remains locked on a single frame for a few seconds. Clearly extracting frames while recording does not work well here. If gstreamer really is using the h264 encoder, why does it give these problems in this case?
> I doubt it does, because my C/C++ code doesn't generate a bad video like this one.

The encoder's bitrate adapter is based on the provided framerate. By dropping frames we decrease the rate and end up confusing the firmware, which results in bad quality. We also break the motion, which makes the encoding less efficient.



Re: RE: RE: How does gstreamer interface with H264 hardware encoders and create videos?

simo-zz
Hello Nicolas,
Thank you for your suggestions.
I will forward these to 96boards staff.
Regards,
Simon

