Opening a webcam which is already in use by a gst-launch


simo-zz
Hello,

I am using gst-launch to grab an H.264 MP4 video from my USB camera through a hardware video encoder, using the following pipeline:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4

The problem is that I need to process the captured frames with OpenCV while the video is being recorded.
The possible solutions I have thought of are:

1 - Open the webcam in parallel with the GStreamer pipeline: doesn't work, since the webcam is busy.
2 - Open the video output through the OpenCV API and retrieve the last frame to process whenever I need it: doesn't work, because while GStreamer is recording the video the file is not closed. I can only open the video once the pipeline (and consequently the file) has been closed.
3 - Every N frames read, grab 1 frame (to be read and processed by OpenCV) as a JPG/PNG image separate from the video: I don't know if this is possible, nor how to do it in the same pipeline. This option would be the best choice.

Learning the GStreamer API and modifying the OpenCV program I already have seems like the most laborious option, and perhaps it is not necessary.
Is there a way to make option 1, 2 or 3 work?

Thank you in advance.
Regards,
Simon






Re: Opening a webcam which is already in use by a gst-launch

Arjen Veenhuizen
What one typically would do is write a little app and use a combination of tee and appsink:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse ! tee name=t ! mp4mux ! filesink location=video.mp4 t. ! decodebin ! videoconvert ! "video/x-raw,format=I420" ! appsink

and wait for the appropriate callback from appsink to get the raw (in this case yuv420p) frame and feed the data into OpenCV.
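
A minimal sketch of that approach in C, assuming the pipeline above with a queue added on each tee branch and the appsink named "sink" (the element name, appsink properties and build line are illustrative, not from the original post):

/* gcc app.c $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0) */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (sample == NULL)
    return GST_FLOW_EOS;

  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data holds one raw I420 frame (width*height*3/2 bytes);
     * copy or wrap it here and hand it to OpenCV. */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Same pipeline as above, with queues on the tee branches and a named appsink. */
  GstElement *pipeline = gst_parse_launch (
      "v4l2src device=/dev/webcam ! videoconvert ! "
      "video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse ! "
      "tee name=t ! queue ! mp4mux ! filesink location=video.mp4 "
      "t. ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! "
      "appsink name=sink emit-signals=true max-buffers=2 drop=true", NULL);

  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}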

Alternatively, to get things running quick and dirty from the command line, have it write the decoded output to a file or named pipe by replacing the appsink element with a filesink element.
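
A rough command-line sketch of that variant (the FIFO path /tmp/frames.yuv is only an example): create a named pipe and point the second branch's filesink at it.

mkfifo /tmp/frames.yuv
gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse ! tee name=t ! queue ! mp4mux ! filesink location=video.mp4 t. ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! filesink location=/tmp/frames.yuv

Note that the pipeline will not start producing data until something (e.g. the OpenCV program) opens the FIFO for reading.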


Re: Opening a webcam which is already in use by a gst-launch

Yasushi SHOJI-2
Hi,

On Tue, Aug 29, 2017 at 11:07 PM, Arjen Veenhuizen
<[hidden email]> wrote:
> What one typically would do is write a little app and use a combination of
> tee and appsink:
>
> gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert !
> video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse !
> tee name=t ! mp4mux ! filesink location=video.mp4 t. ! decodebin !
> videoconvert ! "video/x-raw,format=I420" ! appsink

Wouldn't it be better to tee right before v4l2h264enc, so that you don't have to
decode the frames you just encoded?

Unfortunately, the current tee[1] implementation does a memcpy(), so if you don't
need every single frame, I'd suggest using the "handoff" signal from the
identity element or pad probes[2] to get the frame you want and copy it
only when you need it.
If you have plenty of CPU time and memory bandwidth, don't
worry about the above.

[1]: http://gstreamer-devel.966125.n4.nabble.com/Multiple-sink-usage-in-im6x-td4683174.html
[2]: https://gstreamer.freedesktop.org/documentation/design/probes.html
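
A minimal sketch of the pad-probe variant in C, assuming the converter in the pipeline is given a name (e.g. videoconvert name=conv) so its src pad can be looked up; the element name and the want_frame flag are illustrative:

/* Grab an occasional raw frame via a buffer probe, without a tee branch. */
#include <gst/gst.h>

static gint want_frame = 1;   /* set to 1 whenever OpenCV needs the next frame */

static GstPadProbeReturn
probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  if (g_atomic_int_compare_and_exchange (&want_frame, 1, 0)) {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    GstMapInfo map;
    if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
      /* Raw frame on its way to the encoder: copy it here and hand the
       * copy to OpenCV on another thread. */
      gst_buffer_unmap (buf, &map);
    }
  }
  return GST_PAD_PROBE_OK;   /* always pass the buffer downstream untouched */
}

static void
attach_probe (GstElement *pipeline)
{
  GstElement *conv = gst_bin_get_by_name (GST_BIN (pipeline), "conv");
  GstPad *srcpad = gst_element_get_static_pad (conv, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, probe_cb, NULL, NULL);
  gst_object_unref (srcpad);
  gst_object_unref (conv);
}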
--
               yashi

Re: Opening a webcam which is already in use by a gst-launch

Arjen Veenhuizen
Yasushi SHOJI-2 wrote
> Wouldn't it be better to tee right before v4l2h264enc, so that you don't
> have to decode the frames you just encoded?
> [...]
> I'd suggest using the "handoff" signal from the identity element or pad
> probes to get the frame you want and copy it only when you need it.

Ah, yes, of course: put the tee before the encoder. Your suggestion to use the
hand-off signal is also a very good one.




Re: Opening a webcam which is already in use by a gst-launch

simo-zz
Hello @Arjen and @Yasushi,

Thank you for your suggestions.
I successfully implemented the initial pipeline in C, and now I need to use pads to monitor the data stream.
Question:
Do I need tee and queue even if I want to be able to monitor when each frame has been read?

@Arjen wrote:
"Alternatively, to get things running quick and dirty from the command line, have it write the decoded output to a file or named pipe by replacing the appsink element with a filesink element."

I tried to execute the following pipeline from the command line:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! tee name=t ! h264parse ! mp4mux ! filesink location=video.mp4  ! decodebin ! videoconvert ! video/x-raw,format=I420 ! filesink location="file.png"

but gst-launch complains that it cannot link filesink0 to decodebin0. It seems to me that the tee element is not being taken into account, but I am a noob with gst so I can't say much.

However, I am trying to implement option #3, as it seems to best fit my needs.

Thank you.
Regards,
Simon



Re: Opening a webcam which is already in use by a gst-launch

Yasushi SHOJI-2
Hi,

On Thu, Aug 31, 2017 at 10:26 PM, simo zz <[hidden email]> wrote:
> Hello @Arjen and @Yasushi,
>
> Thank you for your suggestions.
> I successfully implemented the initial pipeline in C, and now I need to use
> pads to monitor the data stream.
> Question:
> Do I need tee and queue even if I want to be able to monitor when each frame
> has been read?

It depends.

Yes, if you want to stream the processed images back down a GStreamer pipeline.

No, if all you need to do is analyze your video data with OpenCV and your program
doesn't need a branched GStreamer pipeline, alongside the main saving pipeline,
to process it further.
--
          yashi

Re: Opening a webcam which is already in use by a gst-launch

Arjen Veenhuizen
For the record, your pipeline was incorrect. Change it to:
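
[The corrected pipelines did not survive in the archive; a plausible reconstruction, attaching the second branch to the tee via t. and putting a queue on each branch:]

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! tee name=t ! queue ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! filesink location=file.png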


or even better (to skip the encoding -> decoding step):
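
[Again a plausible reconstruction: tee the raw video before v4l2h264enc, so the second branch needs no decodebin:]

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! tee name=t ! queue ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! videoconvert ! video/x-raw,format=I420 ! filesink location=file.png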




As a rule of thumb, always put a queue after each tee src-pad to move
processing of that branch onto another thread. Note that the output of the
second branch in your pipeline will be a sequence of raw yuv420p frames, not
PNG.
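
If actual PNG snapshots are wanted (option #3 from the first mail), one possible sketch, not from the original thread, is to decimate the raw branch with videorate and encode each remaining frame with pngenc into numbered files:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! tee name=t ! queue ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! videorate ! video/x-raw,framerate=1/2 ! videoconvert ! pngenc ! multifilesink location=frame-%05d.png

Here framerate=1/2 keeps one frame every two seconds; adjust the fraction as needed.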


