gst-rtsp-server share v4l2src across several mounting points

gst-rtsp-server share v4l2src across several mounting points

jean-philippe
Hi,

I'm trying to use gst-rtsp-server to implement an RTSP server for an IP
camera.

Here is what I need to do but have not found a way to implement:
- I am looking for a way to have several video streams running at the same
time, streaming video captured from my one v4l2 camera, but with different
encodings.
- I need to be able to add new stream/mounting points without interrupting
the streaming that may already be ongoing.
- I need to be able to capture snapshots (still images) from the camera when
0 or more RTP stream(s) are live, without interrupting the streams that may
already be live.

The v4l2src cannot be reopened, so creating individual pipelines, one for
each RTSP mounting point, is not an option. So the following, based on
GstRTSPMediaFactory, does not work:

for (int i = 0; i < 2; ++i) {
   std::string path = "/test_" + std::to_string(i);
   factory = gst_rtsp_media_factory_new();
   gst_rtsp_media_factory_set_launch(factory,
       "( v4l2src ! vpuenc_h264 ! rtph264pay name=pay0 pt=96 )");
   gst_rtsp_media_factory_set_shared(factory, TRUE);
   gst_rtsp_mount_points_add_factory(mounts, path.c_str(), factory);
}

With this sort of code, I can stream from /test_0 or /test_1, but not both,
because the v4l2src cannot be reopened.

The following pipeline would satisfy my requirements, but according to
this link
<http://gstreamer-devel.966125.n4.nabble.com/RTSP-Server-Can-t-get-multiple-streams-per-URI-to-work-td4672194.html>
all streams would be sent as soon as a client connects to the aggregate URI,
since each stream does not have its own URI and must instead be filtered on
the client side based on the payload type.
This would be too inefficient for my use case, and having to filter on the
payload type on the client side does not fit my requirements.

                  --> q --> fakesink (used for snapshots via last-sample)
v4l2src --> t --> q --> vpuenc_h264 --> rtph264pay name=pay0 pt=96
                  --> q --> vpuenc_h264 --> rtph264pay name=pay1 pt=97
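
For the snapshot branch, the idea is to read the fakesink's "last-sample"
property and convert the frame to JPEG. A rough, untested sketch (the
"snapsink" handle and the helper name are mine):

// Untested sketch: grab the most recent frame from the fakesink in the
// snapshot branch and encode it to JPEG. "snapsink" is whatever handle the
// app keeps on that fakesink; enable-last-sample is TRUE by default.
#include <gst/gst.h>
#include <gst/video/video.h>

static gboolean save_snapshot(GstElement *snapsink, const gchar *filename) {
    GstSample *sample = NULL;
    g_object_get(snapsink, "last-sample", &sample, NULL);
    if (!sample)
        return FALSE;  // no frame seen yet

    // gst_video_convert_sample() can convert a raw frame to image formats.
    GstCaps *jpeg_caps = gst_caps_new_empty_simple("image/jpeg");
    GError *error = NULL;
    GstSample *converted =
        gst_video_convert_sample(sample, jpeg_caps, GST_SECOND, &error);
    gst_caps_unref(jpeg_caps);
    gst_sample_unref(sample);
    if (!converted) {
        g_clear_error(&error);
        return FALSE;
    }

    GstBuffer *buf = gst_sample_get_buffer(converted);
    GstMapInfo map;
    gboolean ok = FALSE;
    if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
        ok = g_file_set_contents(filename, (const gchar *) map.data,
                                 map.size, NULL);
        gst_buffer_unmap(buf, &map);
    }
    gst_sample_unref(converted);
    return ok;
}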

I'm currently contemplating removing the v4l2src from the pipeline used by
the RTSP server and placing it in a separate pipeline. I would then need to
somehow forward samples from that v4l2src pipeline to the pipelines used by
the RTSP server.

So something like the following:

Pipelines in my app:
   v4l2src --> appsink
   appsrc --> jpegenc --> filesink (for snapshot)

RTSP mount / pipelines:
   mount_1: appsrc --> vpuenc_h264 --> rtph264pay
   mount_2: appsrc --> vpuenc_h264 --> rtph264pay
   etc...

Then, in my app, manually push samples into the appsrc element of each
mounting point.
This approach seems too hacky to be the right way to proceed, and I am
concerned it will introduce inefficiencies (will the data be memcpy-ed when
pushed into the appsrc?).
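
For what it's worth, the forwarding glue would look roughly like the sketch
below; as far as I can tell, gst_app_src_push_sample() only refs the
underlying buffer rather than deep-copying the data (the appsink/appsrc
handles are placeholders):

// Untested sketch: pull samples from the capture pipeline's appsink and push
// them into one RTSP pipeline's appsrc. "appsink" / "appsrc" are placeholders
// for whatever handles the app keeps around.
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

static GstFlowReturn forward_one_sample(GstElement *appsink, GstElement *appsrc) {
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(appsink));
    if (!sample)
        return GST_FLOW_EOS;  // appsink reached EOS

    // push_sample refs the sample's buffer; the video data itself is not copied
    GstFlowReturn ret = gst_app_src_push_sample(GST_APP_SRC(appsrc), sample);
    gst_sample_unref(sample);
    return ret;
}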

So am I missing something here? Is there a simpler way to implement this?

Thanks
JP





Re: gst-rtsp-server share v4l2src across several mounting points

Nicolas Dufresne


On Thu, Oct 10, 2019 at 06:41, jean-philippe <[hidden email]> wrote:
> I'm currently contemplating removing the v4l2src from the pipeline used by
> the RTSP server and placing it in a separate pipeline. I would then need to
> somehow forward samples from that v4l2src pipeline to the pipelines used by
> the RTSP server.
> [...]
> This approach seems too hacky to be the right way to proceed, and I am
> concerned it will introduce inefficiencies (will the data be memcpy-ed when
> pushed into the appsrc?).

Splitting it into its own pipeline is quite common, though for raw video it's also a bit challenging: you have to advertise support for GstVideoMeta to prevent random memcpys (done for realignment purposes).


> So am I missing something here? Is there a simpler way to implement this?

They might not all be super efficient; it depends on the context. But intervideosink/intervideosrc is extremely simple. There are also some other proxy elements whose names I forget, was it pipesrc/sink? There is also something called proxysrc/proxysink; they all differ slightly. I'm under the impression the last one does not do multiplexing, so you'd need to combine it with a tee, which means more programming.
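
With intervideosink/intervideosrc, the wiring would look roughly like the
untested sketch below; the channel name "cam", the mount path and the
"mounts" handle are just examples:

// Rough, untested sketch of the intervideo approach: one producer pipeline
// owns the camera, and each shared RTSP factory attaches to the channel.
GstElement *producer = gst_parse_launch(
    "v4l2src ! intervideosink channel=cam sync=false", NULL);
gst_element_set_state(producer, GST_STATE_PLAYING);

GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
gst_rtsp_media_factory_set_launch(factory,
    "( intervideosrc channel=cam ! vpuenc_h264 ! rtph264pay name=pay0 pt=96 )");
gst_rtsp_media_factory_set_shared(factory, TRUE);
gst_rtsp_mount_points_add_factory(mounts, "/test_0", factory);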



Re: gst-rtsp-server share v4l2src accross several mounting points

jean-philippe
Hi Nicolas,
Thanks for the reply, intervideosink/src did the trick.
JP



Re: gst-rtsp-server share v4l2src across several mounting points

jean-philippe
I'm returning to this topic because the pipeline using intervideosink/src
drops frames when running on my SoC (i.MX8M Mini).

The app implements an RTSP server and I'm sharing the v4l2src with several
pipelines.

*Pipeline #1* is OK: "v4l2src ! vpuenc_h264 ! rtph264pay"

*Pipeline #2* drops frames:
"v4l2src ! intervideosink channel=c sync=false"
"intervideosrc channel=c ! vpuenc_h264 ! rtph264pay"

About 1/3 of the frames are dropped, intermittently: the pipeline may run
well for 10 minutes, then drop frames, then recover after a few minutes.

I am wondering if the problem is related to the buffer pool being used.

I have noticed that when pipeline #2 starts, it prints
"gstv4l2bufferpool.c:794: … Uncertain or not enough buffers, enabling copy
threshold". This does not happen with pipeline #1.

I have also noticed the differences below in the debug output, which make me
think a different buffer pool is used.

Pipeline #1
v4l2bufferpool
gstv4l2bufferpool.c:509:gst_v4l2_buffer_pool_set_config:<v4l2src0:pool:src>
config GstBufferPoolConfig, caps=(GstCaps)"video/x-raw\,\
format\=\(string\)YUY2\,\ width\=\(int\)1920\,\ height\=\(int\)1080\,\
framerate\=\(fraction\)30/1\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\
colorimetry\=\(string\)bt709\,\ interlace-mode\=\(string\)progressive",
size=(uint)4147200, min-buffers=(uint)5, max-buffers=(uint)5,
allocator=(GstAllocator)"\(GstVpuAllocator\)\ vpuallocator0",
params=(GstAllocationParams)NULL, options=(string)<
GstBufferPoolOptionVideoMeta >;

Pipeline #2
v4l2bufferpool
gstv4l2bufferpool.c:509:gst_v4l2_buffer_pool_set_config:<v4l2src0:pool:src>
config GstBufferPoolConfig, caps=(GstCaps)"video/x-raw\,\
format\=\(string\)YUY2\,\ width\=\(int\)1920\,\ height\=\(int\)1080\,\
framerate\=\(fraction\)30/1\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\
colorimetry\=\(string\)bt709\,\ interlace-mode\=\(string\)progressive",
size=(uint)4147200, min-buffers=(uint)4, max-buffers=(uint)4,
allocator=(GstAllocator)"NULL", params=(GstAllocationParams)NULL;

Could this explain the lost frames? If so, is there any way I can specify
the buffer pool that should be used? Reading the docs on GstBufferPool, I
cannot see how to set a specific pool on a pad before negotiation.

Is using intervideosink/src a 'bad idea'? Are there better elements to use
to share a v4l2src?



Re: gst-rtsp-server share v4l2src across several mounting points

Nicolas Dufresne
On Monday, November 25, 2019 at 12:22 -0600, jean-philippe wrote:

> I'm returning to this topic because the pipeline using intervideosink/src
> drops frames when running on my SoC (i.MX8M Mini).
>
> The app implements an RTSP server and I'm sharing the v4l2src with several
> pipelines.
>
> *Pipeline #1* is OK: "v4l2src ! vpuenc_h264 ! rtph264pay"
>
> *Pipeline #2* drops frames:
> "v4l2src ! intervideosink channel=c sync=false"
> "intervideosrc channel=c ! vpuenc_h264 ! rtph264pay"
>
> About 1/3 of the frames are dropped, intermittently: the pipeline may run
> well for 10 minutes, then drop frames, then recover after a few minutes.
>
> I am wondering if the problem is related to the buffer pool being used.
>
> I have noticed that when pipeline #2 starts, it prints
> "gstv4l2bufferpool.c:794: … Uncertain or not enough buffers, enabling copy
> threshold". This does not happen with pipeline #1.
For your information, this means that intervideosink does not reply to
the allocation query. This can have all sorts of implications; I think
for your use case intervideosink should be fixed. It cannot offer a
pool, of course, but it can request more buffers to compensate for the
fact that some buffers will be held in the downstream pipeline. Note that
it will never be ideal, as the intervideosink protocol is stateless. Maybe
the long-term fix is to add a property.

Enabling GstVideoMeta support in the allocation query (and handling its
absence on the intervideosrc side) seems like a better idea; otherwise you
may end up in a situation where you always copy, and, to make things worse,
you may not be copying into the right type of memory.
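
Untested, but as a workaround from the application side you could probably
answer the allocation query yourself with a pad probe on intervideosink's
sink pad, something along these lines:

// Untested sketch: intercept the allocation query before intervideosink
// ignores it, advertise GstVideoMeta support, and ask the source for a few
// extra buffers. The buffer count (8) is a guess you would have to tune.
#include <gst/gst.h>
#include <gst/video/video.h>

static GstPadProbeReturn
on_query(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
    GstQuery *query = GST_PAD_PROBE_INFO_QUERY(info);
    if (GST_QUERY_TYPE(query) != GST_QUERY_ALLOCATION)
        return GST_PAD_PROBE_OK;

    gst_query_add_allocation_meta(query, GST_VIDEO_META_API_TYPE, NULL);
    gst_query_add_allocation_pool(query, NULL, 0, 8, 0);  // no pool, min 8 buffers
    return GST_PAD_PROBE_HANDLED;  // report the query as answered
}

/* installed with:
   GstPad *pad = gst_element_get_static_pad(intervideosink, "sink");
   gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM,
                     on_query, NULL, NULL);
   gst_object_unref(pad);
*/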
