Grabbing snapshots out of live stream

Grabbing snapshots out of live stream

ob.lutz
Hey all

I'm trying to write a pipeline that will take video in from a network source, do some transcoding, and send it out to a network sink. No problem there. But I need to be able to get preview snapshots of the stream as it's running, either on demand, or just have it continuously dump out frames to a known file periodically. I'm doing some experiments using a v4lsrc and xvimagesink as my fake network source and sink for now. I've found the 'tee' block that looks promising to split my stream into 2, so one can go on its way to the network sink, and the other could maybe write to a file, but all my experiments have only shown me a single frame in my xvimagesink. Am I going down the right path trying to use 'tee' or is there some other set of blocks I should use?

Thanks

Re: Grabbing snapshots out of live stream

Arnout Vandecappelle
On Thursday 08 January 2009 22:48:04 OB Lutz wrote:

> I'm trying to write a pipeline that will take video in from a network
> source, do some transcoding, and send it out to a network sink. No problem
> there. But I need to be able to get preview snapshots of the stream as it's
> running, either on demand, or just have it continuously dump out frames to
> a known file periodically. I'm doing some experiments using a v4lsrc and
> xvimagesink as my fake network source and sink for now. I've found the
> 'tee' block that looks promising to split my stream into 2, so one can go
> on its way to the network sink, and the other could maybe write to a file,
> but all my experiments have only shown me a single frame in my xvimagesink.
> Am I going down the right path trying to use 'tee' or is there some other
> set of blocks I should use?

 You probably need to insert a queue behind the tee.  Otherwise you can get
deadlocks because all sinks try to synchronize from the same thread.
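
 For example, with the v4lsrc/xvimagesink setup you describe, something along
these lines should keep both branches running.  It's an untested sketch; the
ffmpegcolorspace and multifilesink parts are only a guess at what you want,
and as far as I know pngenc only accepts RGB input, hence the colorspace
conversion on the snapshot branch:

 gst-launch v4lsrc ! tee name=splitter \
     splitter. ! queue ! xvimagesink \
     splitter. ! queue ! ffmpegcolorspace ! pngenc snapshot=false ! \
         multifilesink location=/tmp/frame-%05d.png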

 Failing that, can you post the gst-launch command you're using?

 Regards,
 Arnout

--
Arnout Vandecappelle                               arnout at mind be
Senior Embedded Software Architect                 +32-16-286540
Essensium/Mind                                     http://www.mind.be
G.Geenslaan 9, 3001 Leuven, Belgium                BE 872 984 063 RPR Leuven
LinkedIn profile: http://www.linkedin.com/in/arnoutvandecappelle
GPG fingerprint:  D206 D44B 5155 DF98 550D  3F2A 2213 88AA A1C7 C933

Re: Grabbing snapshots out of live stream

ob.lutz
I've managed to get past the single-frame issue, but am getting pipeline errors when actually running something closer to my final pipeline.

I've reduced my problem to the following command. It takes in raw YUY2 data (eventually to be replaced by some sort of MJPEG stream, but I have access to YUY2 currently), shoves it into a tee, and writes frames out to a file.

gst-launch -vvv gstrtpbin name=rtpbin \
              tcpserversrc host=127.0.0.1 port=1235 ! videoparse format=YUY2 width=404 height=424 framerate=20/1 bpp=16 ! \
              tee name=splitter splitter.  ! queue ! typefind ! pngenc snapshot=false ! multifilesink sync=false preroll-queue-len=20 location=/tmp/output.png -vvv splitter.

This starts up fine, with:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...

but when I start sending data to it, it errors out with:

/pipeline0/videoparse0.src: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/splitter.sink: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/splitter.src0: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/splitter.sink: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/queue0.sink: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/queue0.src: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/typefindelement0.src: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
/pipeline0/typefindelement0.sink: caps = video/x-raw-yuv, width=(int)404, height=(int)424, format=(fourcc)YUY2, framerate=(fraction)20/1, pixel_aspect_ratio=(fraction)1/1
ERROR: from element /pipeline0/tcpserversrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2193): gst_base_src_loop (): /pipeline0/tcpserversrc0:
streaming task paused, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/pipeline0/typefindelement0.src: caps = NULL
/pipeline0/typefindelement0.sink: caps = NULL
/pipeline0/queue0.src: caps = NULL
/pipeline0/queue0.sink: caps = NULL
/pipeline0/splitter.src0: caps = NULL
/pipeline0/splitter.sink: caps = NULL
/pipeline0/videoparse0.src: caps = NULL
FREEING pipeline ...

Once this works, there will be another queue off the 'splitter' tee that goes through some processing before heading out to a gstrtpbin/udpsink (hence the gstrtpbin at the start of the pipeline; removing that part yields the same results).
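
The second branch will look roughly like the fragment below once it's added; the jpegenc/rtpjpegpay elements and the udpsink destination are just placeholders for the real processing and network leg:

              splitter. ! queue ! ffmpegcolorspace ! jpegenc ! rtpjpegpay ! \
                  rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! \
                  udpsink host=127.0.0.1 port=5000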

Thanks
-Brian

Re: Grabbing snapshots out of live stream

Arnout Vandecappelle
On Monday 12 January 2009 21:24:20 OB Lutz wrote:
> gst-launch -vvv gstrtpbin name=rtpbin \
>               tcpserversrc host=127.0.0.1 port=1235 ! videoparse
> format=YUY2 width=404 height=424 framerate=20/1 bpp=16 ! \ tee
> name=splitter splitter.  ! queue ! typefind ! pngenc snapshot=false !
> multifilesink sync=false preroll-queue-len=20 location=/tmp/output.png -vvv
> splitter.

 You don't need a typefind in your pipeline.  It fails to find the type of the
raw video (there's nothing in a raw video stream to indicate what type it is)
and therefore it doesn't negotiate its caps with the pngenc.

 To debug a pipeline, it's convenient to strip it down until something works;
that way you can find out which element causes the problem.  You can replace
a video source with videotestsrc and the sink with fakesink.  Doing that,
you'll find out that

 videotestsrc ! typefind ! pngenc snapshot=false ! fakesink

doesn't work, while

 videotestsrc ! pngenc snapshot=false ! fakesink

does work.
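
 Applied to your command, the snapshot branch without the typefind would look
something like this (untested, and I've also put an ffmpegcolorspace in front
of pngenc, because as far as I can tell pngenc won't accept YUY2 directly):

 gst-launch -v tcpserversrc host=127.0.0.1 port=1235 ! \
     videoparse format=YUY2 width=404 height=424 framerate=20/1 bpp=16 ! \
     tee name=splitter \
     splitter. ! queue ! ffmpegcolorspace ! pngenc snapshot=false ! \
         multifilesink sync=false location=/tmp/output.png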

 Regards,
 Arnout
--
Arnout Vandecappelle                               arnout at mind be
Senior Embedded Software Architect                 +32-16-286540
Essensium/Mind                                     http://www.mind.be
G.Geenslaan 9, 3001 Leuven, Belgium                BE 872 984 063 RPR Leuven
LinkedIn profile: http://www.linkedin.com/in/arnoutvandecappelle
GPG fingerprint:  D206 D44B 5155 DF98 550D  3F2A 2213 88AA A1C7 C933

Re: Grabbing snapshots out of live stream

ob.lutz
Thanks!
