Why does this video format conversion fail?

Why does this video format conversion fail?

wally_bkg
CONTENTS DELETED
The author has deleted this message.

Re: Why does this video format conversion fail?

wally_bkg
wally_bkg wrote
I need to do it in C, but I can illustrate the problem simply with a few gst-launch commands.

Basically I have a USB video capture device (Hauppauge WinTV-HVR 950Q) that works with gstreamer if I simply do:

gst-launch v4l2src device=/dev/video2 ! xvimagesink

However, I'm having trouble figuring out a caps filter that will let me get the buffers in a YUV-type format.


On a normal capture card if I do:

gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640, height=480 ! xvimagesink

It works fine, but if I change to /dev/video2 (the USB device) I get:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Could not negotiate format
Additional debug info:
gstbasesrc.c(2719): gst_base_src_start (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Check your filtered caps, if any


So I tried using ffmpegcolorspace to convert:

gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640, height=480 ! ffmpegcolorspace ! xvimagesink

And I get the same error message as without the ffmpegcolorspace elements around the capsfilter.


One of my main reasons for trying to use GStreamer is to let it do the heavy lifting of dealing with video input and output. At the end of the day, all I want from the appsink element is a pointer to the video data in a format documented well enough that I can pull out a 640x480 intensity (grayscale) image.

Up to getting this device, setting the caps to { video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640, height=480 } has worked fine for all the capture cards I've tried, and obviously needing to deal with only a single raw format in my code simplifies it greatly.


I'm having trouble in my C code extracting the caps that get negotiated if I leave out the capsfilter from my pipeline. Are there any samples out there of how to do it?
I figured out how to extract the caps.

When using /dev/video1 (saa713x card), the "default" buffer caps are: video/x-raw-gray, bpp=(int)8, framerate=(fraction)30000/1001, width=(int)704, height=(int)480

When using /dev/video2 (the 950Q USB device), the buffer caps are: video/x-raw-rgb, bpp=(int)24, depth=(int)24, red_mask=(int)255, green_mask=(int)65280, blue_mask=(int)16711680, endianness=(int)4321, framerate=(fraction)30000/1001, width=(int)720, height=(int)480

But this doesn't give me any clues as to why ffmpegcolorspace can't convert the RGB caps to the YUV or gray caps I'd prefer to use.



Re: Why does this video format conversion fail?

Timothy Braun
Wally,
  There are other parts to the negotiation besides color space; framerate and size are also considered. You may want to make it resemble:

gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! videoscale ! videorate ! video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640, height=480 ! ffmpegcolorspace ! xvimagesink

I don't think the second ffmpegcolorspace is needed either.

If you run this with GST_DEBUG=GST_CAPS:3, you will see the logs of the negotiations and you can hopefully decipher what's happening.

Hope this helps.

Tim

--
View this message in context: http://gstreamer-devel.966125.n4.nabble.com/Why-does-this-video-format-conversion-fail-tp3080822p3082344.html
Sent from the GStreamer-devel mailing list archive at Nabble.com.


Re: Why does this video format conversion fail?

wally_bkg
Timothy Braun wrote
Wally,
There are other parts to the negotiation besides color space; framerate and
size are also considered. You may want to make it resemble:

gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! videoscale !
videorate ! video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640,
height=480 ! ffmpegcolorspace ! xvimagesink
Thanks, adding videoscale was all that was required, and you are correct that the second ffmpegcolorspace is not needed.


This one works:
gst-launch v4l2src device=/dev/video2 ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv, framerate=\(fraction\)30000/1001, width=640, height=480 ! xvimagesink


The PCI cards can all do a YUV format, but apparently the USB Hauppauge 950Q can only do RGB formats (I'm using only its S-Video and composite SD TV inputs).


The second ffmpegcolorspace element is only needed if I want to use raw-gray. Which brings up another question: is it more efficient to pass YUV buffers around that contain 50% more data than will be used (intensity-only algorithm), or to have a second ffmpegcolorspace in the pipeline?