Hi,

I have an MJPG web camera whose output I need to capture, decode, and scale. I have noticed that at least two hardware MJPEG decoders align their output buffers, leaving a run of unused bytes between the Y and UV planes. I have seen this in both omxmjpegdec on the Raspberry Pi and mppjpegdec on the Rockchip RK3288 (Tinker Board). When pulling 1920x1080 frames, the UV plane offset corresponds to a plane height of 1088 rows rather than 1080.

Unfortunately, passing the YUV output to videoscale results in scaled video where the UV plane is shifted 16 pixels downwards (the 8 padding rows, read as chroma data, shift the 2x vertically subsampled UV plane by 16 image rows). You can see my post at https://github.com/rockchip-linux/gstreamer-rockchip/issues/41, where I point it out to the Rockchip team (replete with images). However, I don't know where in the pipeline this sort of thing is supposed to be handled. I can see that the Rockchip code correctly stores the plane[1] offset in the VideoInfo, but it doesn't look like gstvideoscale pays any attention to it.

Is there an element that can 'repack' the frame to remove the padding, as sketched below? Or another simple yet processor-efficient way to handle this?

Thanks and regards
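To make concrete what I mean by 'repack', here is a minimal sketch (assuming NV12 output and the GstVideoFrame API; repack_nv12 is just a name I made up for this example) that copies a padded frame into a tightly packed buffer:

#include <gst/gst.h>
#include <gst/video/video.h>

static GstBuffer *
repack_nv12 (GstBuffer * src, const GstVideoInfo * in_info)
{
  GstVideoInfo out_info;
  GstVideoFrame in_frame, out_frame;
  GstBuffer *dst;

  /* Describe a default, tightly packed NV12 layout of the same size;
   * gst_video_info_set_format() fills in unpadded offsets and strides. */
  gst_video_info_set_format (&out_info, GST_VIDEO_FORMAT_NV12,
      GST_VIDEO_INFO_WIDTH (in_info), GST_VIDEO_INFO_HEIGHT (in_info));

  dst = gst_buffer_new_allocate (NULL, GST_VIDEO_INFO_SIZE (&out_info), NULL);

  if (!gst_video_frame_map (&in_frame, in_info, src, GST_MAP_READ)) {
    gst_buffer_unref (dst);
    return NULL;
  }
  if (!gst_video_frame_map (&out_frame, &out_info, dst, GST_MAP_WRITE)) {
    gst_video_frame_unmap (&in_frame);
    gst_buffer_unref (dst);
    return NULL;
  }

  /* gst_video_frame_copy() walks each plane using the offsets and strides
   * of the respective GstVideoInfo, so the gap before the source's UV
   * plane simply disappears in the destination. */
  gst_video_frame_copy (&out_frame, &in_frame);

  gst_video_frame_unmap (&out_frame);
  gst_video_frame_unmap (&in_frame);

  return dst;
}

That costs a full-frame copy per buffer, though, so I'd rather not write a custom element for this if an existing one already handles it.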
On Sat, Dec 29, 2018 at 9:42 PM, Adam Langley <[hidden email]> wrote:
videoscale uses gst_video_frame_map() to access the plane pointers, so it does account for this. The shifted UV plane is likely a bug in the respective decoder or firmware.
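For reference, a minimal sketch of what that means in practice (inspect_planes is just an illustrative name): when a frame is mapped, the per-plane pointers, offsets, and strides already reflect what the producer stored, taken from the GstVideoMeta attached to the buffer when present, with the negotiated GstVideoInfo as fallback:

#include <gst/video/video.h>

static void
inspect_planes (GstBuffer * buf, const GstVideoInfo * info)
{
  GstVideoFrame frame;

  if (!gst_video_frame_map (&frame, info, buf, GST_MAP_READ))
    return;

  /* data[1] points at the start of the UV plane wherever the producer
   * put it; gst_video_frame_map() takes the offset and stride from the
   * buffer's GstVideoMeta if present, else from the GstVideoInfo. */
  g_print ("UV offset %" G_GSIZE_FORMAT ", Y stride %d, UV data %p\n",
      (gsize) GST_VIDEO_FRAME_PLANE_OFFSET (&frame, 1),
      GST_VIDEO_FRAME_PLANE_STRIDE (&frame, 0),
      GST_VIDEO_FRAME_PLANE_DATA (&frame, 1));

  gst_video_frame_unmap (&frame);
}

For a padded 1920x1080 NV12 frame this should report a UV offset of 1920 * 1088 = 2088960 bytes rather than 1920 * 1080 = 2073600, and any element that maps frames this way, videoscale included, will read the UV plane from the right place. If the picture still comes out shifted, the offset being signalled does not match where the hardware actually wrote the data.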