I need binocular video for an image processing application, so it seemed logical to "stack" cameraR and cameraL into a single frame using videobox and videomixer and then pass that into my appsink for processing as a single buffer.
Unfortunately performance is terrible, with jerky video and occasional messages about dropped frames. I'm using Ubuntu 10.04 64-bit, all patches as of this morning, and its "stock" gstreamer-0.10.28 setup on an AMD quad-core processor with 8GB RAM. (I had tried the PPA version a while back, but it hosed my system; recovering was a PITA, so I'm not amenable to trying it again!)

Sample pipeline with gst-launch:

gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    videobox border-alpha=0 top=-480 bottom=-48 ! \
    videomixer name=mix sink_0::alpha=1.0 sink_1::alpha=1.0 ! \
    ffmpegcolorspace ! xvimagesink \
    v4l2src device=/dev/video0 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    ffmpegcolorspace ! mix.

But running two instances gives smooth video, despite using significantly more CPU according to top:

gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    ffmpegcolorspace ! xvimagesink \
    v4l2src device=/dev/video0 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    ffmpegcolorspace ! xvimagesink

I have 5 different v4l2 capture devices (two PCI and three USB), and all pairs have the performance problem, including one USB device that has two full-frame-rate capture inputs (a Sensoray 2255S). All pairings give smooth video with two instances.

The video has perfect "genlock", as it's one VCR output fed to the capture devices through a distribution amplifier. The 48 extra scan lines at the bottom of the stacked frames I planned to fill with barcoded metadata; removing them didn't change anything. In fact, I can run both gst-launch commands from separate terminal windows, and it's very obvious when the videobox/videomixer pipeline drops frames.

Is there something I'm missing that can fix this?
The two cameras in a single buffer would seem to make the downstream code much more understandable, but I can use the two-instance solution.

Any rational reason why videobox supports video/x-raw-gray and videomixer does not?

_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
You definitely need to insert queues between the v4l2srcs and the videomixer.
-Josh

On Thu, Aug 11, 2011 at 11:01 AM, wally bkg <[hidden email]> wrote:
> I need binocular video for an image processing application, so it seemed
> logical to "stack" cameraR and cameraL into a single frame using videobox
> and videomixer and then pass that into my appsink for processing as a
> single buffer.
Doing so made it worse, much worse.

gst-launch v4l2src device=/dev/video5 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    videobox border-alpha=0 top=-480 bottom=-48 ! queue ! \
    videomixer name=mix sink_0::alpha=1.0 sink_1::alpha=1.0 ! \
    ffmpegcolorspace ! xvimagesink \
    v4l2src device=/dev/video4 ! ffmpegcolorspace ! \
    video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
    ffmpegcolorspace ! queue ! mix.
In reply to this post by wally_bkg
On 08/11/11 17:01, wally bkg wrote:
> I need binocular video for an image processing application, so it
> seemed logical to "stack" cameraR and cameraL into a single frame
> using videobox and videomixer and then pass that into my appsink for
> processing as a single buffer.
>
> Unfortunately performance is terrible with jerky video and occasional
> messages about dropped frames.
>
> Sample pipeline with gst-launch:
>
> gst-launch v4l2src device=/dev/video1 ! ffmpegcolorspace ! \
>     video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
>     videobox border-alpha=0 top=-480 bottom=-48 ! \
>     videomixer name=mix sink_0::alpha=1.0 sink_1::alpha=1.0 ! \
>     ffmpegcolorspace ! xvimagesink \
>     v4l2src device=/dev/video0 ! ffmpegcolorspace ! \
>     video/x-raw-yuv,format=\(fourcc\)AYUV,width=640,height=480 ! \
>     ffmpegcolorspace ! mix.

Do without videobox and use sink_0::xpos and sink_0::ypos properties on
videomixer. Maybe try to avoid the AYUV conversion, if you don't want to
blend.

Stefan
I like the idea, but I can't find any documentation about using the sink_0::xpos properties. I tried playing with:

gst-launch videotestsrc pattern=1 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! \
    videomixer name=mix sink_0::alpha=1.0 sink_0::xpos=320 sink_0::ypos=240 \
        sink_1::alpha=1.0 sink_1::xpos=0 sink_1::ypos=0 ! \
    ffmpegcolorspace ! ximagesink \
    videotestsrc pattern=0 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! mix.

Permuting the sink_0 and sink_1 values doesn't seem to give anything useful. I'd like to end up with two 320x240 images stacked into a 320x480 frame in this example. A sample gst-launch command would be a good start, but I need to set the properties when the C code builds the pipeline.
In reply to this post by Stefan Sauer
Didn't find any useful documentation on videomixer, but I stumbled onto a website with some decent examples ( http://wiki.oz9aec.net/index.php/Gstreamer_Cheat_Sheet#Compositing ) that suggested I needed a third source to set the bigger frame in which to position the two frames.

This produces two stacked images, one on top of the other:

gst-launch videotestsrc pattern=0 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! \
    videomixer name=mix sink_0::alpha=1.0 sink_0::xpos=0 sink_0::ypos=0 \
        sink_1::alpha=1.0 sink_1::xpos=0 sink_1::ypos=240 ! xvimagesink \
    videotestsrc pattern=1 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! mix. \
    videotestsrc pattern=2 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=480 ! mix.

It'll be a starting point for me to do it with live video.

This produces a side-by-side stack:

gst-launch videotestsrc pattern=0 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! \
    videomixer name=mix sink_0::alpha=1.0 sink_0::xpos=0 sink_0::ypos=0 \
        sink_1::alpha=1.0 sink_1::xpos=320 sink_1::ypos=0 ! xvimagesink \
    videotestsrc pattern=1 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=320,height=240 ! mix. \
    videotestsrc pattern=2 ! \
    video/x-raw-yuv,format=\(fourcc\)I420,framerate=\(fraction\)10/1,width=640,height=240 ! mix.

What would be the most efficient way to produce the bigger frame in which to position the two subframes? None of the large frame should be visible.
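[Editor's note: for the C side of this (setting the pad properties when the program builds the pipeline, as asked earlier in the thread), here is a minimal sketch against the GStreamer 0.10 API. The element name "mix" and pad names "sink_0"/"sink_1" follow the gst-launch examples above; error handling and shutdown are mostly omitted, and this sketch has not been run against the poster's hardware.]

```c
/* Sketch: build the stacked-video pipeline from C and set the
 * GstVideoMixerPad properties programmatically (GStreamer 0.10). */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *mix;
  GstPad *pad;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* Same topology as the gst-launch example, but without the pad
   * properties in the description string. */
  pipeline = gst_parse_launch (
      "videotestsrc pattern=0 ! "
      "video/x-raw-yuv,format=(fourcc)I420,width=320,height=240 ! "
      "videomixer name=mix ! ffmpegcolorspace ! xvimagesink "
      "videotestsrc pattern=1 ! "
      "video/x-raw-yuv,format=(fourcc)I420,width=320,height=240 ! mix. "
      "videotestsrc pattern=2 ! "
      "video/x-raw-yuv,format=(fourcc)I420,width=320,height=480 ! mix.",
      &error);
  if (pipeline == NULL)
    g_error ("parse error: %s", error->message);

  mix = gst_bin_get_by_name (GST_BIN (pipeline), "mix");

  /* gst_parse_launch has already requested the mixer sink pads,
   * named sink_0, sink_1, ... in connection order, so we can look
   * an existing pad up by name and set its properties directly. */
  pad = gst_element_get_static_pad (mix, "sink_1");
  g_object_set (pad, "xpos", 0, "ypos", 240, "alpha", 1.0, NULL);
  gst_object_unref (pad);
  gst_object_unref (mix);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```

In an appsink-based application, the same g_object_set calls work; the only difference is replacing xvimagesink with appsink in the description string.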
On 08/16/11 16:02, wally_bkg wrote:
> Stefan Kost wrote:
>> Do without videobox and use sink_0::xpos and sink_0::ypos properties on
>> videomixer. Maybe try to avoid the AYUV conversion, if you don't want to
>> blend.
>
> Didn't find any useful documentation on videomixer, but I stumbled onto a
> website with some decent examples (
> http://wiki.oz9aec.net/index.php/Gstreamer_Cheat_Sheet#Compositing ) that
> suggested I needed a third source to set the bigger frame in which to
> position the two frames.

C'mon, you are around for a while:

http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-videomixer.html
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/GstVideoMixerPad.html

The last example pipeline uses the xpos/ypos attributes.

Stefan
Yeah, and those are what suggested I needed to use videobox to expand one frame into a size large enough to hold the other, which worked but didn't have good enough performance for live video.
On 08/16/11 20:23, wally_bkg wrote:
> Stefan Kost wrote:
>> C'mon, you are around for a while:
>>
>> http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-videomixer.html
>> http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/GstVideoMixerPad.html
>>
>> The last example pipeline uses the xpos/ypos attributes.
>>
>> Stefan
>
> Yeah, and those are what suggested I needed to use videobox to expand one
> frame into a size large enough to hold the other, which worked but didn't
> have good enough performance for live video.

The last one is not using videobox and is demonstrating the xpos and ypos
properties ...

Stefan