System Info
Platform: Nano B01
GStreamer: v1.14.5
Raspi cam: v2.1 (IMX219)
Custom plugin repo: gst-snapshot-plugin on GitHub

Goal: I want to record at 120 fps while saving a snapshot image every 500 ms to send to a neural net.

What I've done: I installed the custom plugin linked above and got it running (after hours of debugging). It works fine for the sample pipeline (copy-pasted below) provided in the repo's README, but...

Problem: The plugin's src pad and sink pad are both RGB format, and I'm not sure how to convert my pipeline to RGB (and then later from RGB to H.264).

What I've tried: I've tried various combinations with the videoconvert element but get errors such as:

"WARNING: erroneous pipeline: could not link videoconvert0 to snapshotfilter0"

and

"videoconvert0 can't handle caps video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)120/1, format=(string)RGB"

Example failing pipeline:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1, format=NV12' ! videoconvert ! 'video/x-raw, width=1280, height=720, framerate=120/1, format=RGB' ! snapshotfilter trigger=true framedelay=60 filetype="jpeg" location="image.jpg" ! videoconvert ! omxh264enc ! qtmux ! filesink location=tester1.mp4 -e

Sample pipeline from the plugin's repo (works, but doesn't do exactly what I want):

gst-launch-1.0 videotestsrc num-buffers=20 ! snapshotfilter trigger=true framedelay=15 filetype="jpeg" location="image.jpg" ! videoconvert ! xvimagesink

Additional info/question: I can record 720p@120fps using a basic pipeline without any issues (82% CPU). Will this plugin/format conversion inevitably kill my fps? Would I maybe be better off doing this with CUDA?
Typically you don't have to specify what format to convert to when you use a videoconvert or nvvidconv element; it should automatically figure out what it can use to make things work. You also don't have to restate things like framerate and image dimensions if they're staying the same.

I don't have snapshotfilter, but you can do the same thing with videorate and multifilesink. Try this out:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1' ! tee name=t \
  t. ! queue ! nvvidconv ! videorate ! 'video/x-raw, framerate=2/1' ! jpegenc ! multifilesink location="snapshot_%04d.jpg" \
  t. ! queue ! nvvidconv ! omxh264enc ! h264parse ! qtmux ! filesink location=tester1.mp4 -e
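If you want to keep snapshotfilter instead, something along these lines might work (just a sketch; I can't test it since I don't have the plugin, and it assumes snapshotfilter passes buffers through and accepts plain system-memory RGB caps as you describe, reusing the property values from your failing pipeline):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=120/1' ! tee name=t \
  t. ! queue ! nvvidconv ! videoconvert ! 'video/x-raw, format=RGB' ! snapshotfilter trigger=true framedelay=60 filetype="jpeg" location="image.jpg" ! fakesink \
  t. ! queue ! nvvidconv ! omxh264enc ! h264parse ! qtmux ! filesink location=tester1.mp4 -e

Here nvvidconv copies the frames out of NVMM memory, videoconvert does the final conversion to RGB on the CPU, and fakesink just discards the passthrough frames since the second branch already records the video. Note that videoconvert will be running on every frame at 120 fps, so keep an eye on CPU usage.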
Sadly, nvvidconv doesn't support RGB (or any other 24-bit RGB format), just RGBx (32-bit) and friends. So yes, a custom CUDA kernel could do it, or you could take a look at VPI, which already has a CUDA-based conversion to RGB.
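If you'd rather stay with stock GStreamer elements, one workaround (a rough sketch, untested here) is to let nvvidconv take the frames to RGBA on the hardware converter, leave videoconvert only the RGBA-to-RGB repack, and drop the framerate before videoconvert so that repack runs at the snapshot rate instead of 120 fps. The snapshot branch of the earlier tee pipeline would then look something like this (snapshotfilter properties elided):

t. ! queue ! nvvidconv ! 'video/x-raw, format=RGBA' ! videorate ! 'video/x-raw, framerate=2/1' ! videoconvert ! 'video/x-raw, format=RGB' ! snapshotfilter ... ! fakesink

At 2 frames per second the CPU-side repack should be negligible compared to converting the full 120 fps stream.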