GStreamer multi pipeline time sync


Peter Biro
Hi all,

I'm not sure if I'm in the right place to ask this; feel free to redirect me to a more appropriate forum (I just started familiarizing myself with GStreamer, so any feedback is appreciated).

I'm working on an application which consists of multiple pipelines:
  • Video source: captures video, does some simple operations on it (encoding and overlaying) and passes the output to an appsink
  • Audio source: captures audio and forwards it to an appsink
  • Streamer: sends the video (and maybe later audio as well) out as a UDP stream or over WebRTC (live video)
  • Buffer: a plain C++ class which stores the buffers for a configured time (the time can be changed dynamically, and the audio and video buffer times can differ, so it should be possible for the video from the buffer to start a few minutes before the audio)
  • Filewriter: a simple filesink pipeline; when it is triggered, I push the buffers to its audio and video appsrc with the gst_app_src_push_buffer method (at the moment I encode the video with VP8 into a WebM file using webmmux); see the sketch after this list
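
Roughly, a push into the filewriter looks like this (a simplified sketch with placeholder names, not my exact code):

#include <gst/app/gstappsrc.h>

/* Simplified sketch (placeholder names): push one cached buffer into the
 * filewriter's video appsrc, rebasing its PTS so the recording timeline
 * starts at zero. 'rec_start' would be the PTS of the first buffer pushed
 * after the record trigger. */
static void
push_cached_buffer (GstAppSrc * video_appsrc, GstBuffer * cached_buf,
    GstClockTime rec_start)
{
  GstBuffer *buf = gst_buffer_copy (cached_buf);  /* keep the cached copy intact */

  GST_BUFFER_PTS (buf) = GST_BUFFER_PTS (buf) - rec_start;
  GST_BUFFER_DTS (buf) = GST_CLOCK_TIME_NONE;     /* let the muxer work from PTS */

  gst_app_src_push_buffer (video_appsrc, buf);    /* takes ownership of 'buf' */
}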

The pipelines are started at the same time (sequentially, one after another), and pushing the buffers to the filewriter starts when a record request arrives.

VideoSource ----+----------------------------> LiveStream
                |
                +--> Buffer --(record trigger)--> FileWriter
                       ^
AudioSource -----------+


I have two kinds of issues:
  • I cannot get the audio and video to sync in the filewriter
  • During playback the playtime counter does not start from 0 (I think it somehow inherits the timestamps from the video source, so when I open the file in VLC it starts counting from e.g. 30 seconds, which is the amount of time I waited before triggering the record), and I am not able to seek in the video file.

I tried to play around with the following approaches:
  • the 'do-timestamp' and 'is-live' properties on the filewriter's receiving appsrc
  • resetting the timestamps on the buffers (audio and video) via 'GST_BUFFER_PTS' to sequential values starting from zero and increasing by the duration
  • adjusting the timestamps on the buffers (audio and video) via 'GST_BUFFER_PTS' by the time difference between the start times of the source and filewriter timelines
  • syncing all pipelines by setting the same clock on them and using the same base time (see the sketch below)
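
For the last approach, the code was along these lines (a simplified sketch, not my exact code):

#include <gst/gst.h>

/* Simplified sketch: make all pipelines share one clock and base time. */
static void
sync_pipelines (GstElement ** pipelines, guint n_pipelines)
{
  GstClock *clock = gst_system_clock_obtain ();
  GstClockTime base_time = gst_clock_get_time (clock);

  for (guint i = 0; i < n_pipelines; i++) {
    gst_pipeline_use_clock (GST_PIPELINE (pipelines[i]), clock);
    /* keep the base time fixed across state changes */
    gst_element_set_start_time (pipelines[i], GST_CLOCK_TIME_NONE);
    gst_element_set_base_time (pipelines[i], base_time);
  }
  gst_object_unref (clock);
}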

I think I am missing something fundamental here. Can you give me some hints on how to do this time synchronisation properly?

Thanks a lot!

Best,
Peter


Re: GStreamer multi pipeline time sync

gotsring
Can you describe what you are trying to achieve? It sounds like you want to
grab video and audio (e.g. webcam and mic), combine/mux those streams, then
be able to view the stream and optionally record it. Something like:

View/Record pipeline (gst-launch-1.0)
videotestsrc ! timeoverlay ! tee name=videotee ! queue ! autovideosink \
audiotestsrc wave=8 ! tee name=audiotee ! queue ! autoaudiosink \
videotee. ! queue ! x264enc ! matroskamux name=muxer ! filesink location=save_location.mkv async=false \
audiotee. ! queue ! muxer.


Playback (just to test)
gst-play-1.0 save_location.mkv

This should probably not be used exactly as-is, but that's the gist. You can
just google how to mux streams using GStreamer. I also think that queue has a
property (min-threshold-time) that lets you effectively add a delay to a
stream; is this what you wanted?
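
For example (untested sketch), a branch that holds back 30 seconds might look like:

... ! queue min-threshold-time=30000000000 max-size-time=0 max-size-buffers=0 max-size-bytes=0 ! ...

(min-threshold-time is in nanoseconds; the max-size-* limits are lifted so the
queue can actually hold that much data.)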




Re: GStreamer multi pipeline time sync

Peter Biro
Sure, I can describe it more in detail.

You are right that the streaming/displaying is mandatory and the recording is optional (it can be triggered).

Unfortunately the actual use case is a bit more complex: when the user hits record, the recording should also include the video (and maybe audio too) from the previous 30 seconds (the actual time is configurable). So there should be a live stream plus this 'buffered' recording. Also, I cannot 'pre-mux' the audio with the video, since audio is completely optional and can be configured to be recorded only from the moment recording starts (so there would be video for ~30 seconds from the buffer, and audio would join in later). The hard requirement is to have X seconds of video from before the user hits record, with optional audio attached to it; but I wanted to implement it in a general way, so that the rest of the application does not need to know whether the 'buffered' data is audio or video.

This is why I started with separate pipelines, transferring the data between them with appsinks and appsrcs and doing the buffering and other logic on the application side. I guess this is a naive way of implementing it, so any suggestion is more than welcome. :D

Thanks!


AW: GStreamer multi pipeline time sync

Thornton, Keith

Hi,

the buffering can be done with a standard GStreamer queue by dynamically manipulating the threshold and queue-size parameters. Have you considered recording silence when no audio is present?
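
For instance (an untested sketch; the queue name is illustrative):

/* Hold back ~30 s in a named queue; the values can be changed at runtime. */
GstElement *q = gst_bin_get_by_name (GST_BIN (pipeline), "backlog_queue");

g_object_set (q,
    "max-size-time", (guint64) (35 * GST_SECOND),       /* room for the backlog */
    "max-size-buffers", (guint) 0,                      /* unlimited */
    "max-size-bytes", (guint) 0,                        /* unlimited */
    "min-threshold-time", (guint64) (30 * GST_SECOND),  /* the delay itself */
    NULL);
gst_object_unref (q);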

Regards

 


Re: GStreamer multi pipeline time sync

Peter Biro
Thanks! That can work!

Is there a way I can 'send silence' from an audio source (or anything in the middle)?

Currently this is my audio pipeline:
alsasrc ! queue ! audioconvert ! appsink name=app_sink

Also, I tried playing around with the 'max-size-time' property on a queue, but I dropped that solution since I had issues with disabling only the filesink output. The way I tried it was to redirect the filesink output to /dev/null and reconfigure it to a proper location when a recording event arrives, but for that I had to set the filesink state to GST_STATE_NULL (or GST_STATE_READY, it was a while ago), which caused issues with the other parts of the pipeline. It would be great if this part could also be covered by a plain pipeline on the GStreamer side.


Re: GStreamer multi pipeline time sync

jim nualart
In reply to this post by Peter Biro

You could try using audiotestsrc (there's a silence wave pattern) and mix it with your alsasrc; this way "something" is always going down that path.
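
A rough, untested sketch of that idea:

gst-launch-1.0 audiomixer name=mix ! audioconvert ! appsink name=app_sink \
  alsasrc ! audioconvert ! audioresample ! mix. \
  audiotestsrc wave=silence is-live=true ! audioconvert ! audioresample ! mix.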



Re: GStreamer multi pipeline time sync

Tim Müller
In reply to this post by Peter Biro
Hi Peter,

I have an example for save-to-file-with-backlog here:

https://people.freedesktop.org/~tpm/code/test-backlog-recording-h264.c

or (rtp variant):

https://people.freedesktop.org/~tpm/code/test-backlog-recording-h264-rtp.c

for what it's worth.

The audiomixer element can produce silence samples if it operates in live
mode, which will happen if the upstream source is a live/capture source or
if you force it into live mode with a dummy audiotestsrc is-live=true branch.

  Alternatively:  ... ! interaudiosink   interaudiosrc ! ...
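
For example, split across two pipelines that could look roughly like this
(untested; the channel name is illustrative):

capture pipeline:  alsasrc ! audioconvert ! interaudiosink channel=mic
writer pipeline:   interaudiosrc channel=mic ! audioconvert ! vorbisenc ! mux.

interaudiosrc outputs silence when nothing arrives on the channel, so the
downstream pipeline keeps running.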

Cheers
 Tim

--
Tim Müller, Centricular Ltd - http://www.centricular.com


Re: GStreamer multi pipeline time sync

Nicolas Dufresne-5
In reply to this post by Peter Biro


On Tue, Dec 1, 2020 at 16:45, Peter Biro <[hidden email]> wrote:

Is there a way I can 'send silence' from an audio source (or anything in the middle)?

audiotestsrc wave=silence ! ...
... ! volume volume=0.0 ! ...



AW: GStreamer multi pipeline time sync

Thornton, Keith

Hi,

You could use an output-selector which has one pad going to a filesink and one pad going to a fakesink.
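
Roughly (an untested sketch):

... ! output-selector name=sel sel.src_0 ! queue ! filesink location=/tmp/video_out.webm sel.src_1 ! queue ! fakesink

/* Switching branches from application code ('pipeline' is illustrative): */
GstElement *sel = gst_bin_get_by_name (GST_BIN (pipeline), "sel");
GstPad *fake_pad = gst_element_get_static_pad (sel, "src_1");  /* fakesink branch */
g_object_set (sel, "active-pad", fake_pad, NULL);
gst_object_unref (fake_pad);
gst_object_unref (sel);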

Regards

Re: GStreamer multi pipeline time sync

Peter Biro
In reply to this post by Tim Müller
Hi Tim.

Thanks for the examples!

I think I tried something similar before, but as I remember, when I set the state of the filesink to GST_STATE_NULL (to be able to change the location), the change propagated through the pipeline and caused glitches in the streaming.

As I see it, you do kind of the same here with the 'autovideosink', and I guess you don't experience such a thing because you block the pad on 'vrecq'?

Best,
Peter


Re: GStreamer multi pipeline time sync

Peter Biro
In reply to this post by Nicolas Dufresne-5
Ah, yes, that is a way to go! Thanks!


Re: GStreamer multi pipeline time sync

Peter Biro
In reply to this post by Thornton, Keith
Yes, that could work as well; I will try this too. Thanks!


Re: GStreamer multi pipeline time sync

Peter Biro
In reply to this post by Peter Biro
Hi,


I refactored my project based on Tim's examples, which works perfectly when the pipeline contains only a video source, although I found one strange thing in the behaviour: blocking a pad works only when there is a "tee" between the source and the blocked element; otherwise it blocks the other parts of the pipeline (so I added an extra (unnecessary?) tee for the audio). Is this expected, or am I doing something improperly?

But when I add an audio source, things get a bit more complicated.

So I added the audio and moved the caching/buffering queue after the muxer. But this way it generates an invalid file on the output (VLC shows issues like: "ps warning: this does not look like an MPEG PS stream, continuing anyway; ps warning: garbage at input from 509, trying to resync...")

nvarguscamerasrc sensor-id=0 sensor-mode=0
! video/x-raw(memory:NVMM), width=(int)1980, height=(int)1080, format=(string)NV12, framerate=(fraction)20/1
! nvvidconv ! textoverlay name=text_overlay ! video/x-raw,format=I420 ! nvvidconv ! nvv4l2vp8enc ! tee name=video_stream_spilt
audiotestsrc ! vorbisenc ! queue ! file_sink_video_mux.
video_stream_spilt. ! queue ! webmmux name=file_sink_video_mux ! tee name=muxed_video_stream_spilt
muxed_video_stream_spilt. ! queue name=file_sink_queue ! filesink name=file_sink location=/tmp/video_out.webm
muxed_video_stream_spilt. ! fakesink
video_stream_spilt. ! rtpvp8pay mtu=1400 ! appsink name=app_sink

I tried to google around, but I have not found any similar examples, so I guess I cannot "buffer" muxed packets with a queue. Is this approach fundamentally wrong?

After this I tried to add queues before the mux, but this way the pipeline stops when I unblock the pads; see the sketch after the pipeline below.

nvarguscamerasrc sensor-id=0 sensor-mode=0
! video/x-raw(memory:NVMM), width=(int)1980, height=(int)1080, format=(string)NV12, framerate=(fraction)20/1
! nvvidconv ! textoverlay name=text_overlay ! video/x-raw,format=I420 ! nvvidconv ! nvv4l2vp8enc ! tee name=video_stream_spilt
audiotestsrc ! vorbisenc ! tee name=audio_stream_spilt
audio_stream_spilt. ! queue name=file_sink_audio_queue ! file_sink_video_mux.
audio_stream_spilt. ! fakesink
video_stream_spilt. ! queue name=file_sink_video_queue ! webmmux name=file_sink_video_mux ! filesink name=file_sink location=/tmp/video_out.webm
video_stream_spilt. ! rtpvp8pay mtu=1400 ! appsink name=app_sink
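
For reference, this is roughly how I block and unblock the two queues (a simplified sketch, not my exact code; the queue names match the pipeline above):

/* Block/unblock the src pads of both branch queues with blocking probes,
 * as in Tim's example. */
static gulong video_probe_id, audio_probe_id;

static GstPadProbeReturn
block_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  return GST_PAD_PROBE_OK;      /* stay blocked until the probe is removed */
}

static void
set_recording_branches_blocked (GstElement * pipeline, gboolean blocked)
{
  GstElement *vq = gst_bin_get_by_name (GST_BIN (pipeline), "file_sink_video_queue");
  GstElement *aq = gst_bin_get_by_name (GST_BIN (pipeline), "file_sink_audio_queue");
  GstPad *vpad = gst_element_get_static_pad (vq, "src");
  GstPad *apad = gst_element_get_static_pad (aq, "src");

  if (blocked) {
    video_probe_id = gst_pad_add_probe (vpad,
        GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, NULL, NULL);
    audio_probe_id = gst_pad_add_probe (apad,
        GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, NULL, NULL);
  } else {
    gst_pad_remove_probe (vpad, video_probe_id);
    gst_pad_remove_probe (apad, audio_probe_id);
  }

  gst_object_unref (vpad);
  gst_object_unref (apad);
  gst_object_unref (vq);
  gst_object_unref (aq);
}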

I will attach the debug output; the corresponding part comes after the "startRecording: dcd - startRecording" log message. I'm sure I'm doing something wrong, since I think this should work.

Do you have any idea what I am doing wrong?

Thank you!




Re: GStreamer multi pipeline time sync

Peter Biro
Hi

I opened a separate thread about queueing the muxed data, so the question that remains here is what I am doing wrong when I try to block and unblock the two separate queues before the mux.

Is there any material I should read through in more detail? Or do you have any advice on where I should look in the logs? Right now I am a bit lost.

I tried to execute the blocking and unblocking of the pads serially (blocking the second queue from the first one's callback) and in parallel, but that did not help. Or is what I am trying to do conceptually wrong?

Thanks!

Peter
