Dear GStreamers,

I'm thinking of writing an OpenCL plugin that will add an arbitrary 2D OpenCL kernel into the pipeline; possible applications are edge detection, debayering, and so on. Is there any existing element I could use as a template?

One twist is that GPUs give the highest performance in asynchronous mode, so the element will need to pull buffers from upstream, schedule them to the GPU, and only push the processed buffers downstream when the GPU completes. Also, it would be good to have the option of keeping the memory on the device, in order to apply a series of kernels without a costly move to host and back to device.

A simple use case would be a kernel stored in a single user-specified text file that is passed to the plugin, compiled for a specified device, and then executed on the buffers. For more complex situations, the plugin could be sub-classed with multiple kernels. Future work could involve exploring other languages such as SYCL, which some say is the future of open-source compute ("But Mommy, I don't want to use CUDA!").

Any advice or guidance would be greatly appreciated.

Many thanks,
Aaron
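For reference, the single-text-file use case maps fairly directly onto the standard OpenCL host API. The following is only a rough sketch (plain OpenCL plus GLib for reading the file, no GStreamer integration yet); load_kernel_from_file() and its arguments are placeholder names, and build-log reporting is omitted:

/* Sketch of the "kernel in a user-specified text file" use case, using
 * only the standard OpenCL host API. Names and error handling are
 * placeholders. */
#include <CL/cl.h>
#include <glib.h>

static cl_kernel
load_kernel_from_file (cl_context context, cl_device_id device,
    const char *path, const char *kernel_name)
{
  gchar *source = NULL;
  gsize length = 0;
  size_t src_len;
  cl_int err;

  /* Read the user-specified kernel source as plain text. */
  if (!g_file_get_contents (path, &source, &length, NULL))
    return NULL;
  src_len = length;

  /* Compile it for the chosen device, e.g. at element start-up. */
  cl_program program = clCreateProgramWithSource (context, 1,
      (const char **) &source, &src_len, &err);
  g_free (source);
  if (err != CL_SUCCESS)
    return NULL;

  if (clBuildProgram (program, 1, &device, NULL, NULL, NULL) != CL_SUCCESS) {
    /* A real element would fetch the build log with clGetProgramBuildInfo(). */
    clReleaseProgram (program);
    return NULL;
  }

  /* The resulting kernel can then be enqueued on each incoming buffer. */
  cl_kernel kernel = clCreateKernel (program, kernel_name, &err);
  clReleaseProgram (program);
  return (err == CL_SUCCESS) ? kernel : NULL;
}

A string property on the element (for example something like kernel-location=/path/to/kernel.cl) could feed straight into a helper of this shape.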
I hope there is an expert on this list who can answer your question. I don't have an answer for you, but I wanted to say that this is potentially a very useful thing to do.

I think there is something about processing streams of information, which GStreamer does quite well, that seems central to the way sensory signals *should* be processed (e.g., in the context of computer vision problems). I can imagine doing something like the Van Essen diagram with GStreamer and OpenCL.

I think having some kind of example around basic convolution -- even something extremely basic (e.g., Sobel edge detection) -- would be a great example to work from.

I had not heard about SYCL until today, but I just noticed that SYCL seems to extend some C++ AMP concepts (where GPU code is written as C++ lambda expressions). Personally, I think this is the correct approach.
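For what it's worth, the Sobel case is small enough to serve as that basic example. A sketch in OpenCL C (single-channel 8-bit image, naive border handling; this is just an illustration, not an existing GStreamer kernel):

/* Sketch of a Sobel edge-detection kernel in OpenCL C, operating on a
 * single-channel 8-bit image. Border pixels are simply left untouched. */
__kernel void sobel (__global const uchar *src,
                     __global uchar       *dst,
                     const int width,
                     const int height)
{
  int x = get_global_id (0);
  int y = get_global_id (1);

  /* Naive border handling: skip the outermost pixels. */
  if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1)
    return;

  int up   = (y - 1) * width;
  int mid  = y * width;
  int down = (y + 1) * width;

  /* Horizontal and vertical Sobel gradients over the 3x3 neighbourhood. */
  int gx = -src[up + x - 1]       + src[up + x + 1]
           - 2 * src[mid + x - 1] + 2 * src[mid + x + 1]
           - src[down + x - 1]    + src[down + x + 1];
  int gy = -src[up + x - 1]   - 2 * src[up + x]   - src[up + x + 1]
           + src[down + x - 1] + 2 * src[down + x] + src[down + x + 1];

  float mag = native_sqrt ((float) (gx * gx + gy * gy));
  dst[mid + x] = (uchar) clamp (mag, 0.0f, 255.0f);
}

A kernel like this would also be a natural test payload for the "kernel in a text file" property on the proposed element.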
On Mon, Apr 8, 2019 at 7:40 AM Aaron Boxer <[hidden email]> wrote:
> Is there any existing element I could use as a template?
>
> One twist is that GPUs give the highest performance in asynchronous mode,
> so the element will need to pull buffers from upstream, schedule them to
> the GPU, and only push the processed buffers downstream when the GPU
> completes. Also, it would be good to have the option of keeping the memory
> on the device, in order to apply a series of kernels without a costly move
> to host and back to device.

The gstgl library does exactly all this. See, for example, gst_gl_window_default_send_message() in gst-plugins-base/gst-libs/gst/gl/gstglwindow.c, which runs all GL functions inside the GLib main context attached to the window. You can probably do something similar.

For keeping memory on the device, that's already supported by GstBuffer and GstMemory; GL and DMA-BUF memories, for instance, are always on the device. You can add a new caps type for device-side compute buffers. Matthew (ystreet) might've spent some time on this already.

Cheers,
Nirbheek
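On the OpenCL side, a rough analogue of the "only push downstream when the GPU completes" part could be built on clSetEventCallback(). The sketch below is only illustrative: PendingJob, schedule_kernel() and pushing straight from the driver's callback thread are simplifications for the example, not gstgl or existing GStreamer API.

/* Sketch: enqueue a 2D kernel asynchronously and push the output buffer
 * downstream only once the GPU signals completion. */
#include <gst/gst.h>
#include <CL/cl.h>

typedef struct {
  GstPad *srcpad;
  GstBuffer *outbuf;
} PendingJob;

static void CL_CALLBACK
on_kernel_done (cl_event ev, cl_int status, void *user_data)
{
  PendingJob *job = user_data;

  /* NOTE: a real element would hand the finished buffer back to its
   * streaming thread (much like gstgl marshals work onto its GL thread)
   * rather than calling gst_pad_push() from the OpenCL driver thread. */
  if (status == CL_COMPLETE)
    gst_pad_push (job->srcpad, job->outbuf);
  else
    gst_buffer_unref (job->outbuf);

  clReleaseEvent (ev);
  g_free (job);
}

static GstFlowReturn
schedule_kernel (GstPad *srcpad, cl_command_queue queue, cl_kernel kernel,
    size_t width, size_t height, GstBuffer *outbuf)
{
  size_t global[2] = { width, height };
  cl_event ev;
  PendingJob *job = g_new0 (PendingJob, 1);

  job->srcpad = srcpad;
  job->outbuf = outbuf;

  /* Kernel arguments (src/dst cl_mem, width, height) are assumed to have
   * been set already. The enqueue returns immediately, so the chain
   * function can keep accepting upstream buffers while the GPU works. */
  if (clEnqueueNDRangeKernel (queue, kernel, 2, NULL, global, NULL,
          0, NULL, &ev) != CL_SUCCESS) {
    g_free (job);
    gst_buffer_unref (outbuf);
    return GST_FLOW_ERROR;
  }
  clSetEventCallback (ev, CL_COMPLETE, on_kernel_done, job);
  clFlush (queue);
  return GST_FLOW_OK;
}

Keeping the data in cl_mem objects wrapped in a custom GstMemory/GstAllocator, as suggested above for device-side buffers, would then let several such elements chain kernels without round-tripping through host memory.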
Thanks, guys. I will take a look at gstgl.