Memory allocation error when trying to map a video frame


Memory allocation error when trying to map a video frame

jeremi.wojcicki
Dear all,

I am developing an application for Android (using the NDK; my code is inspired by tutorial-3) in which I want to redirect a video stream to an OpenGL (ES 3.0) texture that will later be used in my own rendering thread. Following some very useful threads [1] on this forum, I've come up with this pipeline:

videotestsrc ! video/x-raw,format=RGB,width=1280,height=720,framerate=30/1,pixel-aspect-ratio=1/1 ! glupload ! appsink

I have managed to successfully share my renderer's OpenGL context with GStreamer in response to a GST_MESSAGE_NEED_CONTEXT bus message [2]:

            // wrap the native EGL display and context for GStreamer
            GstGLDisplay *gst_gl_display = (GstGLDisplay *)
                gst_gl_display_egl_new_with_egl_display (gl_display);
            GstGLContext *gst_gl_context = gst_gl_context_new_wrapped (gst_gl_display, *gl_context,
                                                                       GST_GL_PLATFORM_EGL, GST_GL_API_GLES2);

            // put the wrapped context into a GstContext and hand it to the requesting element
            GstContext *context = gst_context_new ("gst.gl.app_context", TRUE);
            GstStructure *s = gst_context_writable_structure (context);
            gst_structure_set (s, "context", GST_GL_TYPE_CONTEXT, gst_gl_context, NULL);

            gst_element_set_context (GST_ELEMENT (message->src), context);

The callback is called twice at run time, so I expect it is for the glupload and appsink elements, which seems logical to me. I set the appsink "new-sample" callback, and there I get my sample and extract the buffer. I also query the current caps from the appsink sink pad to see whether the caps were properly negotiated, and I receive this:
appsink0 caps: video/x-raw(memory:GLMemory), format=(string)RGB, width=(int)1280, height=(int)720, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, texture-target=(string)2D
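In sketch form, the appsink wiring looks roughly like this (illustrative and untested; names such as `on_new_sample` are placeholders, not my actual code):

```c
/* C-style sketch of the appsink "new-sample" wiring */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstCaps *caps = gst_sample_get_caps (sample);   /* negotiated caps */

  /* ... map the buffer / hand the texture id to the renderer ... */

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

/* registration, e.g. after building the pipeline: */
GstAppSinkCallbacks callbacks = { NULL, NULL, on_new_sample };
gst_app_sink_set_callbacks (GST_APP_SINK (appsink), &callbacks, NULL, NULL);
```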

The sample seems to be in GL memory, just as I wanted! However, when I then try to map the frame [3] with:

gst_video_frame_map (&frames[robin], &info, buffer, (GstMapFlags) (GST_MAP_READ | (GST_MAP_FLAG_LAST << 1)))  /* i.e. GST_MAP_READ | GST_MAP_GL */

I receive the following errors:

            gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL error from API id:150, Error:glGetQueryObjectui64vEXT::invalid query object
            gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL error from API id:1, Error:glBeginQueryEXT::failed to allocate CPU memory
            gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL error from API id:148, Error:glEndQueryEXT::query name is 0

Even though these errors look quite alarming, the software does not crash... which is good on one hand, but I surely do not want to leave it this way. The texture id I receive from the appsink is valid and OpenGL renders it well (I pass it to my renderer object, which is global, but I use a pthread_mutex to avoid any unexpected behaviour). Here is the full code of my appsink callback:

https://hastebin.com/lozujofuda.hs

One more problem that I struggle with (though I do not know whether it is related to this error) is that sometimes when I run the app (~25% of the time) the texture (from the test source) does not show up at all. I keep getting frames and valid texture IDs, but the texture is just black after rendering.

I would be thankful for some guidance with this issue, because it is quite a "niche" problem and I cannot seem to find anything on Google.

Refs
[1] http://gstreamer-devel.966125.n4.nabble.com/OpenGL-Texture-via-GstGLUpload-So-close-Proper-Post-td4670295.html
[2] http://ystreet00.blogspot.it/2015/09/gstreamer-16-and-opengl-contexts.html
[3] https://lists.freedesktop.org/archives/gstreamer-devel/2015-March/052126.html

Re: Memory allocation error when trying to map a video frame

Matthew Waters
On 07/07/17 01:44, jeremi.wojcicki wrote:

> Dear all,
>
> I am developing an application for Android (using ndk, my code is inspired
> by tutorial-3) in which I want to redirect video stream to an OpenGL (ES
> 3.0) texture, that later on will be used in own rendering thread. Following
> some very useful threads [1] on this forum I've come up with this pipeline:
>
> videotestsrc !
> video/x-raw,format=RGB,width=1280,height=720,framerate=30/1,pixel-aspect-ratio=1/1
> ! glupload ! appsink
>
> I have managed to successfully share my renderers OpenGL context with
> gstreamer in a response to a bus call [2] GST_MESSAGE_NEED_CONTEXT:
>
>             // get GStreamer display from the native OpenGL EGL vars
>             gst_gl_display =
> gst_gl_display_egl_new_with_egl_display(gl_display);
>             gst_gl_context = gst_gl_context_new_wrapped(gst_gl_display,
> *gl_context,
>                                                         GST_GL_PLATFORM_EGL,
> GST_GL_API_GLES2);
>
>             // set the context
>             context = gst_context_new("gst.gl.app_context", TRUE);
>             s = gst_context_writable_structure(context);
>             gst_structure_set(s, "context", GST_GL_TYPE_CONTEXT,
> gst_gl_context, NULL);
>
>             gst_element_set_context(GST_ELEMENT (message->src), context);
>
> Callback is called two times during run-time, so I expect it is for the
> glupload and appsink elements, which seems logical to me. I set appsink
> "new-sample" callback and there I get my sample, extract buffer. I also get
> the current caps from the appsink sinkpad, to seem whether the caps were
> properly negotiated and I receive this:
> appsink0 caps: video/x-raw(memory:GLMemory), format=(string)RGB,
> width=(int)1280, height=(int)720, framerate=(fraction)30/1,
> multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1,
> interlace-mode=(string)progressive, texture-target=(string)2D
>
> Sample seem to be in the GL memory and I wanted! However when I then try to
> map the frame [3] with:
>
> gst_video_frame_map(&frames[robin], &info, buffer, (GstMapFlags)
> (GST_MAP_READ | (GST_MAP_FLAG_LAST << 1)) )
>
> I receive a follwing error:
>
>            gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL
> error from API id:150, Error:glGetQueryObjectui64vEXT::invalid query object
>             gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL
> error from API id:1, Error:glBeginQueryEXT::failed to allocate CPU memory
>             gstgldebug.c:303:_gst_gl_debug_callback:<glcontextegl0> high: GL
> error from API id:148, Error:glEndQueryEXT::query name is 0
>
> Even tough this error looks quite horrible the software does not crash...
> Which is good on one side, but I surely I do not want to leave it this way.
> The texture id I receive from the appsink is valid and OpenGL renders it
> well (I pass it the my renderer object which is global, but I use
> pthread_mutex to avoid any unexpected behaviours). Here is the full code of
> my appsink callback:
This is most likely a bug in libgstgl not creating query objects
correctly on your platform.

Can you file a bug about this with at least a GST_DEBUG=gl*:6 log and
the hardware you're running this on?

> https://hastebin.com/lozujofuda.hs
>
> One more problem that I struggle with, however I do not know whether it is
> related to this error, is that sometimes when I run the app (~25% of times)
> actually the texture (from test source) does not show up. I keep getting
> frames and valid texture IDs but the texture after rendering is just black.
>
> I would be thankful for for some guidance with this issue, because it is
> quite a "niche" problem and I do not seem to finding anything on google.

A couple of things about this approach.

1. You need to be careful with references.  The texture id is only
guaranteed to be valid while there is a mapping open on it.  I see
you're passing the texture handle to another function.  If that function
stores the texture and it is referenced later, then the output is
undefined as you don't actually hold a reference to the texture.
2. When using multiple OpenGL contexts, one needs to ensure that the
synchronisation is correct between them.  There is a meta on most
video/x-raw(memory:GLMemory) buffers called GstGLSyncMeta that performs
this.  The flow is: set a sync point in one GL context, wait on it in the other.
The other requirement is that after you've synchronized the GL contexts,
you need to rebind the texture with glBindTexture for the updates to be
propagated correctly to the new GL context.
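In sketch form (illustrative, untested; variable names are placeholders), the flow is:

```c
/* producer (GStreamer GL thread): after the last GL command touching
 * the buffer, insert a fence into the GL command stream */
GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (buffer);
if (sync)
  gst_gl_sync_meta_set_sync_point (sync, gst_gl_context);

/* consumer (application GL thread): wait on the fence, then rebind
 * the texture so the update becomes visible in this context */
if (sync)
  gst_gl_sync_meta_wait (sync, app_gl_context);
glBindTexture (GL_TEXTURE_2D, tex_id);
```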

Hope that helps.
-Matt

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel


Re: Memory allocation error when trying to map a video frame

jeremi.wojcicki
Hi Matt,

Thanks for helping me out with it. Here's what I have done so far:

1. I reported the bug as asked. I have also changed my pipeline to generate the test source directly in GPU memory:

gltestsrc is-live=true ! video/x-raw(memory:GLMemory),format=RGBA,width=1280,height=720,framerate=11/3,pixel-aspect-ratio=1/1 ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGB ! appsink

With this pipeline the error does not occur, and I believe it is more efficient (less memory copying, etc.).

2. Regarding undefined references - I have created a temporary, primitive yet working solution: a circular buffer where I store my frames. As you can see here https://hastebin.com/ucojaqunak.cpp I unmap the old frame at the beginning of the function and then overwrite it with a new one. I intend to write a more elaborate mechanism for sharing it between threads in the future, but I guess it's good enough for testing the rest. I check my textures in the drawing thread with glIsTexture and they all seem to be valid. I can also see that the textureID value increases by one with each frame and then jumps back to the starting value (2 or so), which would indicate that the frames stay in memory as desired.

3. The synchronization and the randomly occurring black texture is the most difficult topic, and one I do not fully understand yet. Let me describe the problem in a bit more detail. Whether the texture will be black or not is somewhat "decided" at application startup; there is no flickering of the image during execution. If the texture shows up, the video keeps going flawlessly. If it does not show up at the beginning, it stays that way, so you have to restart. I had a gut feeling that it may be related to some synchronization issue, but I didn't have a clear idea of how to approach the problem.

You wrote that while using multiple OpenGL contexts I am supposed to ensure proper synchronization. This already confuses me somewhat, because I thought that after sharing my OpenGL renderer's context with the GST elements there aren't multiple contexts anymore, just multiple threads. How is it then? (Excuse my noob questions, as I am not that experienced in either GStreamer or OpenGL.)

Let me understand the issue of synchronization between OpenGL and GStreamer a bit better. I shared the OpenGL context between several GStreamer elements (in my case all of them: the source, glcolorconvert and the appsink). Do these elements already manage the synchronization between each other, or should I take care of it myself, since I "forced" them to share the context?

I am guessing that the most critical part is to sync my rendering thread with the gst pipeline. Can it mess up my own rendering if I don't? (In the sense that while my thread is performing some drawing operation to the screen buffer, gst simultaneously performs some other operations on a framebuffer, causing unexpected behaviour in both of them.)

You proposed using GstGLSyncMeta to ensure the synchronization. I could not find any exhaustive documentation on the topic, so I would kindly ask you to guide me through the process.

So let's assume I would like to set the sync point in my OpenGL rendering thread and make the appsink wait. Should it be something like this?

// the renderer thread
OnDraw(){

// ... some drawing calls etc.

glFinish();
gst_gl_sync_meta_set_sync_point (sync_meta,  context);
}

Should sync_meta be taken from gst once at startup, or updated continuously at run time?
On the GStreamer side, should I add gst_gl_sync_meta_wait in the appsink callback?
Will it work if appsink frames arrive less often (say 15 fps) than the OpenGL scene draws (~60 fps)?

Sorry for the avalanche of questions, but without some help I am like a dog in the fog :)

Thanks,
Jeremi



Re: Re: Memory allocation error when trying to map a video frame

Matthew Waters
On 10/07/17 00:22, jeremi.wojcicki wrote:

> Hi Matt,
>
> Thanks for helping me out with it. Here's what I have done so far:
>
> 1. I reported the bug as asked. I have changed my pipeline to generate test
> source already in GPU memory:
>
> gltestsrc is-live=true !
> video/x-raw(memory:GLMemory),format=RGBA,width=1280,height=720,framerate=11/3,pixel-aspect-ratio=1/1
> ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGB ! appsink
>
> Then the error does not occur, and I believe it is more efficient (less
> memory copying etc.).
Right, the errors would only appear when you are transferring data
between RAM and the GPU, as that's the only place that uses GL queries.

> 2. Regarding undefined references - I have created a temporary, primitive
> yet working solution: a circular buffer where I store my frames. As you can
> see here https://hastebin.com/ucojaqunak.cpp I unmap the old frames at the
> beginning of the function and then overwrite it with a new one. I intend to
> write a more elaborate mechanism of sharing it between threads in the
> future, but I guess its good enough for the testing the rest. I check my
> textures in the drawing thread with glIsTexture and they seem to be all
> valid. I also can seen that the textureID value is increasing by one with
> each frame and then jumps back to the starting value (2 or sth), which would
> indicate that the frames stay in memory as desired.
They stay in memory and are reused through the GstBufferPool mechanism.
However, their usage is not synchronized and data races may occur: as
you are not holding a reference to the OpenGL texture, GStreamer may
write data into it while you're reading data from it.  Keeping a
reference on the buffer/sample/memory while you're using the texture
fixes this.
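In sketch form, holding the reference looks like this (illustrative, untested; `renderer_store` is a placeholder for however the application hands the sample over to its render thread):

```c
/* appsink "new-sample" callback: keep the whole sample, not just the id */
GstSample *sample = gst_app_sink_pull_sample (appsink);
renderer_store (renderer, sample);   /* render thread now owns this reference */

/* render thread, once it has finished with the previous texture: */
gst_sample_unref (old_sample);       /* only now may GStreamer reuse it */
```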

> 3. The synchronization and randomly occurring black texture, is the most
> difficult topic, that I do not fully understand yet. Let me describe the
> problem a bit more in detail. Whether the texture will be black or not is
> somewhat "decided" on application startup, there is not flickering or
> anything of the image during execution. If the texture will show up video
> keeps on going flawlessly. If it will not show up in the beginning it will
> stay like this, so you have to restart. I had some gut feeling that it may
> be related to some synchronization issues but I didn't have a clear idea how
> to approach the problem.
>
> You wrote that while using multiple OpenGL contexts I am supposed to ensure
> proper synchronization. This somewhat already confuses me, because I though
> that after sharing my OpenGL renderers context with GST elements there
> aren't multiple contexts anymore, but multiple threads instead. How is it
> then? (excuse my noob questions, as I am not experienced in gstreamer nor
> opengl that well).
>
> Let me understand a bit better the issue of synchronization between opengl
> and gstreamer. I shared OpenGL context between several gstreamer elements
> (in my case all of them: source, glcolorconvert and appsink). Do these
> elements already manage the synchronization between each other or should I
> take care of it myself, since I "forced" them to share the context?
It's not like that at all.

The OpenGL context you pass into GStreamer using the GstContext
mechanism (aka the application GL context) is not touched by GStreamer
at all, as GStreamer has its own internal OpenGL context (aka the
GStreamer GL context).  GStreamer will create its GL context so that it
is shared with the application GL context, allowing some GL resources
(textures, shaders, etc.) to be shared between them.  However, the GL
state machine of each GL context is completely separate.

When OpenGL contexts are shared, there is, by definition, more than one
OpenGL context in the application.  As a result, synchronizing these
contexts is required when accessing data that is used by more than one
OpenGL context.

> I am guessing that the most critical part is to sync my rendering thread
> with the gst pipeline. Can it mess up my own rendering pipeline if I don't?
> (in the sense, that when my thread is performing some drawing operation to
> the screen buffer the gst will perform simultaneously some other operations
> to a framebuffer that can cause some unexpected behaviors in both of them)?

No, GStreamer's and the application's GL contexts are mostly separate
and have separate GL state attached to them.

> You proposed to use GstGLSyncMeta to ensure the synchronization. I could not
> find any exhaustive documentation on the topic, so I would kindly ask you to
> guide me through the process.

Look up any references to glFenceSync(), as that's what GstGLSyncMeta
uses internally.

gst_gl_sync_meta_set_sync_point() inserts an event into the GL command
stream that can be waited/polled on later.  This is performed by all
upstream OpenGL elements in the pipeline at the end of their OpenGL
processing.

As a consumer, you should call gst_gl_sync_meta_wait_gpu/cpu() when you
need access to the data, depending on whether you're going to access it
from RAM or from the GPU with more OpenGL commands.

> So lets assume I would like to set the syncpoint in my opengl rendering
> thread and make the appsink wait. Should it be sth like this?
>
> // the renderer thread
> OnDraw(){
>
> // ... some drawing calls etc.
>
> glFinish();
> gst_gl_sync_meta_set_sync_point (sync_meta,  context);
These two calls should only be gst_gl_sync_meta_wait_gpu ().
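That is, the corrected consumer-side OnDraw would look roughly like this (a sketch, untested; gst_gl_sync_meta_wait() is the GPU-side wait referred to above, and the variable names are placeholders):

```c
// the renderer thread
OnDraw () {
    GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (current_buffer);
    if (sync)
        gst_gl_sync_meta_wait (sync, app_gl_context);  /* replaces glFinish()
                                                          and set_sync_point() */
    glBindTexture (GL_TEXTURE_2D, tex_id);
    // ... drawing calls using the texture ...
}
```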

> }
>
> Should sync_meta be taken once from gst at startup or updated continuously
> during run time?
> On the gstreamer side should I add the gst_gl_sync_meta_wait in the appsink
> callback?

Where you call gst_gl_sync_meta_wait() is up to you; however, there must
be an OpenGL context current in the thread in which you call
gst_gl_sync_meta_wait_*().

> Will it work if appsink frames arrive less often (like 15fps) then the
> opengl scene draws (~60fps)?

Yes, as long as you keep references to your data correctly.

> Sorry for an avalanche of questions, but without some help I like a dog in
> the fog :)
>
> Thanks,
> Jeremi



Re: Re: Memory allocation error when trying to map a video frame

jeremi.wojcicki
Hi Matt,

Thanks for your answer; it helped me a lot to understand the basics. For the moment I have not succeeded in implementing the synchronization, though. Due to lack of time I have stepped back to the less elegant appsink solution, using host memory instead of the GPU and uploading the textures myself.

I have used your advice on managing references to the buffer. I hold the references in my OpenGL thread and dispose of them at the beginning of the OnDraw call when new frames arrive. It works smoothly, with no undefined references and such.

I will surely attempt to move towards a "pure" GPU-memory solution again in the future, as good performance is a concern of mine, but I do not have enough time to spend on it at the moment.