Hello all,
I am using GStreamer 1.14 on an aarch64-based embedded Linux system with glimagesink. This particular platform uses an X11 EGL backend for GPU usage. I can successfully get video images on my screen, but I am running into some real performance bottlenecks with dropped frames when the images get relatively large. This occurs in any pipeline, even pure GStreamer OpenGL ones. So for simplified testing purposes, to try to find the bottleneck, I have been using a simple OpenGL pipeline like:

gst-inspect-1.0 gltestsrc ! glimagesink

Turning on various levels of gst debugging, and also comparing the behavior of running the same pipeline on my x64 Linux machine, I managed to observe some different behavior between the two.

My PC seems to allocate a handful of textures (and buffers, obviously) when the pipeline is created, and then appears to reuse these throughout the lifetime of the application.

The embedded system, however, allocates a new texture with every frame. I believe this may be causing the performance penalty I am seeing, essentially blocking the pipeline for a longer period of time with each frame.

Obviously I have two different display platforms here, so it's not an apples-to-apples comparison. What I am trying to do, however, is find where and why this behavior is invoked on this display platform, to determine whether it can be optimized. The call that actually creates the texture is _gl_tex_create() within gstglmemory, but I am continuing to trace this further up the pipe. In the meantime, I wanted to reach out here and see if this rings any bells with anyone who might have had a similar experience.

Thanks!

Sincerely,
Ken Sloat

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
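P.S. A debug invocation along these lines can show whether textures are allocated once or on every frame. This is only a sketch: the category names (glmemory, glbufferpool) are assumptions taken from the GL library sources, so check them against `gst-launch-1.0 --gst-debug-help` on your build, and the grep pattern is just a starting point based on the function name mentioned above.

```shell
# Sketch: capture GL memory and buffer pool debug output for a short run.
# "glmemory" and "glbufferpool" are assumed category names -- verify with
# gst-launch-1.0 --gst-debug-help before relying on them.
GST_DEBUG=glmemory:6,glbufferpool:6 \
  gst-launch-1.0 gltestsrc num-buffers=100 ! glimagesink 2> gl-alloc.log

# A handful of texture creations at startup is the healthy case; a count
# close to num-buffers suggests a new texture per frame. The exact message
# text may differ; searching for the creating function is a starting point.
grep -c "tex_create" gl-alloc.log
```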
The code for reusing textures is the same no matter the OpenGL platform
(EGL, GLX, etc.), window system (X11, Wayland, etc.), and is part of GstGLBufferPool usage. If your pipeline is not using some kind of bufferpool (or is holding onto buffers indefinitely), then textures will be created every frame. Another failure case is the GstBuffer being modified in some incompatible way, so that the GstGLBufferPool throws away the buffer instead of reusing it. This all depends entirely on your pipeline and would require debugging to see which case you're hitting.

Cheers
-Matt

On 7/10/20 3:07 am, Kenneth Sloat wrote:
> [...]
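As a starting point for that debugging, watching buffer pool activity can show whether buffers are actually being handed out by and returned to a pool. A sketch, with the caveat that "bufferpool" is my recollection of the core debug category name; double-check it with `--gst-debug-help`:

```shell
# Sketch: watch pool acquire/release traffic. "bufferpool" is an assumed
# debug category name -- adjust if your build lists it differently.
GST_DEBUG=bufferpool:7 \
  gst-launch-1.0 gltestsrc num-buffers=100 ! glimagesink 2> pool.log

# Buffers being acquired and released repeatedly indicates reuse; messages
# about buffers being discarded point at the incompatible-modification
# case described above.
grep -i "acquire\|release\|discard" pool.log | head
```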
Hi Matt,
Thanks for your reply. Note I had a typo in my original message: I meant gst-launch-1.0, not gst-inspect.

> The code for reusing textures is the same no matter the OpenGL platform
> (EGL, GLX, etc), Window system (X11, Wayland, etc) and is part of
> GstGLBufferPool usage.

That is good to know and definitely helps narrow my search and debugging.

> If your pipeline is not using some kind of
> bufferpool (or holding onto buffers indefinitely) then textures will be
> created every frame. Another failure case is if the GstBuffer is
> modified in some incompatible way that the GstGLBufferPool throws away
> the buffer instead of reusing it. This all depends entirely on your
> pipeline and would require debugging as to what case you're hitting.

So I am just using a very simple gst-launch pipeline with gltestsrc and glimagesink, with no other customization beyond some caps for framerate and such. Looking at the allocation functions, it looks like each wants to use a buffer pool, and I do see debug messages that a buffer pool was created. I can also see messages on the board where the buffer is being freed. I need to get more familiar with how all these elements work internally, however.

I also generated some graphs to compare my pipelines and make sure that the caps and everything else are the same, and they appear to be. The only difference I see is that one platform uses a GLX context and the other an EGL one.

I will continue to debug and let you know what I find, or if I have any more questions.

Thanks!

Sincerely,
Ken Sloat
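P.S. For completeness, the graphs mentioned above can be produced with GStreamer's standard dot-file dumping. Something like the following, where the paths are just examples and rendering requires Graphviz:

```shell
# Dump pipeline graphs at each state change, then render the PLAYING-state
# one to compare negotiated caps between the two platforms.
mkdir -p /tmp/gst-dot
GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dot \
  gst-launch-1.0 gltestsrc num-buffers=100 ! glimagesink

# Filenames include the state transition; the PAUSED->PLAYING dump shows
# the fully negotiated pipeline.
dot -Tpng /tmp/gst-dot/*PAUSED_PLAYING*.dot -o pipeline.png
```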