Hi, I've been able to do video capture on iOS 5.1, but after upgrading to 6.0 there's a green stripe across the top of the video and some color separation. This happens only when using AVCaptureSessionPresetMedium (480x360). I'm trying to track down whether the issue is in recent changes to AVFoundation or in the way GStreamer handles them. I've tried filing a defect with Apple, but they're requesting more information about our use of the API (and whether we take stride into account). I assume they want to see if there's some mishandling in post-processing of the video stream. I'm not familiar enough with avfvideosrc to understand where padding might be taking place. Any help in better understanding this would be appreciated. Thanks! -Richard
Hello Richard,
It sounds like you have ported GStreamer to iOS 5.1? Thanks, Kapil On Tue, Oct 23, 2012 at 2:01 PM, rpappalax <[hidden email]> wrote:
_______________________________________________ gstreamer-devel mailing list [hidden email] http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
The problem is a misalignment in your capture buffers somewhere. There are several YUV formats; some interleave the components, some don't. This looks like a planar format that separates the Y from the UV components, and the luma buffer doesn't align with the chroma buffers, though I'd guess the chroma buffers do align with each other. That is why you see the green stripe at the top and elsewhere in the image (at the tops of your fingers).
On Tue, Oct 23, 2012 at 4:31 AM, rpappalax <[hidden email]> wrote:
That makes sense. I've looked through coremediabuffer.c, corevideobuffer.c and the other apis in applemedia, but I'm still not sure how I might isolate where this buffer misalignment is taking place. I'd appreciate any guidance on this you'd be willing to provide.
Also, since I never saw this issue prior to the upgrade to iOS 6, is there a way to determine whether the issue is within AVFoundation itself? Thanks!
I don't know anything about the code you are working with, especially on iOS. What you will need to do is set GST_DEBUG to get debugging info from the relevant plugin(s). You could start with GST_DEBUG=*:2, which will give you some debugging info for all plugins. Look to see which plugins look 'interesting' and adjust the value; the higher the number, the more output you get. The syntax is GST_DEBUG=ModuleName:level,Module2:level2, and so on. I don't know how to set an environment variable on iOS. Sorry I can't give you more help.
On Tue, Oct 23, 2012 at 11:04 AM, rpappalax <[hidden email]> wrote:
We've received a response from Apple that, as of iOS 6, they now add padding to the video source. So simply passing the pointer on to GstBuffer (the way avfvideosrc currently does) will no longer work for resolutions/orientations where padding has been added.
------ Apple: ------

Both the stride (bytes of padding added to each row) and the extended rows (rows of padding at the top or bottom of the buffer) are important. The extended rows are what changed in iOS 6; you can find out how many rows of padding are added to the buffer using:

CVPixelBufferGetExtendedPixels(pixelBuffer, &columnsLeft, &columnsRight, &rowsTop, &rowsBottom)

In the example you gave (Medium preset on iOS 6) you should see rowsBottom = 8. The stride is effectively CVPixelBufferGetBytesPerRowOfPlane() and includes padding (if any). When no padding is present, CVPixelBufferGetBytesPerRowOfPlane() will equal CVPixelBufferGetWidth(); otherwise it will be greater.

------ Question: ------

If we were using Apple's buffer handling, I assume this would be a non-issue. But since avfvideosrc passes a data pointer and the overall data size directly to GstBuffer, I assume we'll now need to post-process this padding within GStreamer. Does anyone know whether it's possible to set a flag with padding info on a GstBuffer created from a Core Media CMSampleBuffer? Our objective is to avoid allocating an entirely new buffer and the performance hit that might come with it. Here's how the new video frame image looks: