Hi,

This is an idea that's been brewing in my head for a bit. After thinking about it for a while and poking some people on IRC, I'm pretty convinced it's the best way forward.

Here's a list of problems I'd like to see solved:

1) Correctly identify video in the GStreamer elements (stride, width, height, size of image and components)
In the short while I recently hacked on plugins, I found bugs in lots of places, from common to obscure formats. And those were in pretty common elements (theoraenc/dec, videotestsrc). Using the APIs in gstvideo pretty much solves this problem for the current set of plugins. (The sketch after this list shows the kind of per-format bookkeeping every element currently has to get right by hand.)

2) Allow drawing onto different video formats
There are actually multiple issues here: For a start, elements that draw to various YUV formats often get it wrong - mostly in corner cases. Others take shortcuts that degrade the quality of the video (like videotestsrc not computing the average for U and V pixels for subsampled planes). Examples of elements doing drawing operations start with elements like videocrop and videobox that resize the input, or videotestsrc that draws rectangles. Next there's videomixer and textoverlay that compose various input streams. And then there are various effect elements like smpte or effectv, or even videoscale at the top end. And almost all of these elements support only a very limited set of colorspaces - I420 and AYUV mostly.
(Also, I always dreamed of doing an mplayer-like GStreamer filter that responds to keypresses and displays the volume/brightness etc. UI on top of the video. That's really hard to do currently.)

3) Allow better interaction between applications consuming video and GStreamer
This is mostly related to web browsers, but applies to Flash, Clutter, games and probably lots of other things, too: They all want to get access to the video data and do stuff with it. Currently this often involves a colorspace conversion to RGB and then stuffing that into a Cairo surface. It would be much nicer if Cairo and pixman supported YUV, so the colorspace conversion could be omitted when the hardware accepts it. The same goes in the other direction: I'd like to capture the screen as YUV, not as RGB, if I record it to Theora video.

4) Allow hw-acceleration in the video pipeline
Decoding an H264 stream in hardware, rendering subtitles on top of it, scaling it to fit and displaying it as fullscreen video on my computer can in theory all be done in hardware. Unfortunately, GStreamer currently lacks infrastructure for this, so all this stuff ends up being done in software.

5) Figuring out the proper format to use is an art
So where do you put the conversion element? Do you even have to put one? Newcomers trip over these problems a lot, and I still hate having to edit gst-launch lines because I forgot some converter element somewhere and now negotiation fails. I'd like this to happen automatically. Of course, it doesn't mean unnecessary colorspace conversions should happen, and I also should be able to force a certain format if I want to (important for testing).
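
To make problem 1 concrete, here is the kind of stride and offset bookkeeping for I420 that every element currently repeats by hand. wrap_i420_planes is a made-up helper, and wrapping the planes as 8-bit pixman images is just for illustration; the rounding rules are roughly what gstvideo uses:

#include <stdint.h>
#include <gst/gst.h>
#include <pixman.h>

/* Illustration only: wrap the three planes of an I420 GstBuffer as
 * separate 8-bit pixman images.  The stride/offset math below is the
 * part that elements currently duplicate (and get wrong) by hand. */
static void
wrap_i420_planes (GstBuffer *buf, int width, int height,
    pixman_image_t *planes[3])
{
  guint8 *data = GST_BUFFER_DATA (buf);
  int y_stride = GST_ROUND_UP_4 (width);
  int uv_stride = GST_ROUND_UP_4 (GST_ROUND_UP_2 (width) / 2);
  gsize y_size = y_stride * GST_ROUND_UP_2 (height);
  gsize uv_size = uv_stride * (GST_ROUND_UP_2 (height) / 2);

  /* Y plane */
  planes[0] = pixman_image_create_bits (PIXMAN_a8, width, height,
      (uint32_t *) data, y_stride);
  /* U plane */
  planes[1] = pixman_image_create_bits (PIXMAN_a8,
      GST_ROUND_UP_2 (width) / 2, GST_ROUND_UP_2 (height) / 2,
      (uint32_t *) (data + y_size), uv_stride);
  /* V plane */
  planes[2] = pixman_image_create_bits (PIXMAN_a8,
      GST_ROUND_UP_2 (width) / 2, GST_ROUND_UP_2 (height) / 2,
      (uint32_t *) (data + y_size + uv_size), uv_stride);
}

Every format has its own variant of this math, and that is exactly the kind of knowledge that should live in one place instead of in every element.
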
These are the steps I'd like to propose as a solution:

1) Add extensive YUV support to pixman
The goal is to add an infrastructure so one can support at least the formats supported by ffmpegcolorspace today. In fact the ffmpeg infrastructure fits pretty well to pixman, but I'm not sure if a straight port is acceptable license-wise.

2) Add support to Cairo to create surfaces from any pixman image
I'm not sure how hard this would be, as it basically circumvents cairo_format_t - might be possible to hook it into image surfaces or might be better to use a different surface backend. But it'd just add a single function like cairo_pixman_surface_create (pixman_image_t *image);

3) Add a new caps to GStreamer: video/cairo
I'm not sure yet about the specific properties required, but certainly framerate, width and height are required. Probably an optional pixman-format is required, too. Buffers passed in this format only contain a reference to a cairo surface in the buffer's data.

4) Port elements to use this cairo API
Either add new elements (cairovideotestsrc, cairocolorspace) or add support to the old ones. While doing this, refine and improve cairo or pixman, so the elements can be implemented as nicely as possible. A lot of code inside GStreamer should go away.

5) Finalize APIs for pixman, cairo and GStreamer in unison
After enough code got ported (not sure if those should be separate branches or if it should be part of experimental releases), we sit together and finalize the API. At this point GStreamer elements switch to using video/cairo as the default data passing format.

6) For the next major GStreamer release, remove video/x-raw-*
The old formats are not needed anymore, they can be removed. All elements are ported to the new API.

I think these steps would solve most of the problems I outlined above. Of course some questions have come up about this that I'd like to answer before somebody has to ask them in here:

1) "This is never gonna be fast enough"
I don't see why. Most of the operations people care about are just memcpys, and pixman is very good at detecting them and making them fast. In fact, pixman has a huge infrastructure dedicated to speeding things up that GStreamer cannot match. And no, the current scarce usage of liboil doesn't count. Currently, in a lot of cases, unnecessary colorspace conversions cost a lot of performance, and these will go away if every element supports every format. In short: I wouldn't have proposed this if I thought it'd make stuff slower.

2) "I will have less control over what happens"
No you won't. You'll be able to use the same formats as today and access their data just like today. You just use pixman functions instead of gst_video_* functions (see the short sketch after these answers). I don't intend to move control away from developers. The goal is to make life simpler for developers, not harder.

3) "Adding new features to GStreamer will be a lot harder"
This is only halfway true. You will still be able to write elements like you do today by accessing the raw data of the surface. Of course, if you want to add a new YUV format, it will require support in pixman, and this requires more work (or even depending on unstable versions of pixman). On the other hand, once pixman supports that format, all other GStreamer elements will support it automatically and you can start rendering subtitles onto it. I also do not believe that adding more formats is somehow a common thing that happens very often, so it can easily wait until the next pixman or cairo release. But yes, depending on other libraries reduces your options.

4) "Cairo/GStreamer developers will not like that"
In fact, I talked to both of the maintainers and the response in both cases was pretty positive, but skeptical about the feasibility of such a project, mostly fueled by preconceptions about what Cairo or GStreamer is and how it works. I consider myself part of both the Cairo and GStreamer communities and know the code in quite some detail, and I do think it's a very good fit.
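
To illustrate the point about raw access in answer 2: getting at the bytes of a wrapped frame stays a couple of calls, much like with gst_video_* today. A minimal sketch (poke_pixel is a made-up helper; it assumes a packed 32-bit format):

#include <stdint.h>
#include <pixman.h>

/* Sketch: direct pixel access on a pixman image, analogous to what
 * GST_BUFFER_DATA plus gst_video_format_get_row_stride() give you today.
 * Assumes a packed 32-bit format such as PIXMAN_a8r8g8b8. */
static void
poke_pixel (pixman_image_t *image, int x, int y, uint32_t argb)
{
  uint8_t *bits = (uint8_t *) pixman_image_get_data (image);
  int stride = pixman_image_get_stride (image);   /* in bytes */

  *(uint32_t *) (bits + y * stride + 4 * x) = argb;
}
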
So, opinions, questions, encouragement or anything else?

Benjamin

On Fri, Sep 11, 2009 at 07:02:48PM +0200, Benjamin Otte wrote:
> Hi,
>
> This is an idea that's been brewing in my head for a bit. After
> thinking about it for a while and poking some people on IRC, I'm
> pretty convinced it's the best way forward.
>
> Here's a list of problems I'd like to see solved:
>
> 1) Correctly identify video in the GStreamer elements (stride, width,
> height, size of image and components)
> In the short while I recently hacked on plugins, I found bugs in lots
> of places, from common to obscure formats. And those were in pretty
> common elements (theoraenc/dec, videotestsrc). Using the APIs in
> gstvideo pretty much solves this problem for the current set of
> plugins.

Just because an element is common doesn't mean it's been reviewed in the last several years. videotestsrc predates gstvideo, and hasn't been updated to use gstvideo. It should. But I've been waiting until I've moved some additional video frame stuff from Cog/Schroedinger into GStreamer. The cog/schro code is well-tested, and its calculation of frame component sizes is more obvious than in videotestsrc. Also, it supports more formats.

> 2) Allow drawing onto different video formats
> There are actually multiple issues here: For a start, elements that
> draw to various YUV formats often get it wrong - mostly in corner
> cases. Others take shortcuts that degrade the quality of the video
> (like videotestsrc not computing the average for U and V pixels for
> subsampled planes).

See cog.

> 4) Allow hw-acceleration in the video pipeline
> Decoding an H264 stream in hardware, rendering subtitles on top of it,
> scaling it to fit and displaying it as fullscreen video on my computer
> can in theory all be done in hardware. Unfortunately, GStreamer
> currently lacks infrastructure for this, so all this stuff ends up
> being done in software.

What can be implemented has been implemented in gst-plugins-gl, and works relatively well. It needs to be connected with VDPAU/VAAPI, but that's something that still requires work at a lower level.

dave...

Support for YUV formats in pixman/cairo would be useful for us in Gecko, and for other apps I'm sure. I'd love to see this in cairo!
Rob

On 11-09-09 19:02, Benjamin Otte wrote:
> 1) "This is never gonna be fast enough" > I don't see why. Most of the operations people care about are just > memcpys and pixman is very good at detecting them and making them > fast. On the platforms I'm working on (ARM SoCs with 'video' hardware) everything that even resembles a memcpy is going to be slow. The effective DDR bandwidth is about 300MiB/s that is shared with the framebuffer. When using video there are a few things helping us: * Overlays that support YUV in hardware (with an XV driver) * Overlays that support scaling in hardware (with an XV driver) * NEON optimizations for various things in software And sometimes we even have a DSP to do the hard things for us (bitstream parsing, frame decoding), but even then you need to really take care not to do a memcpy, so you point the framebuffer pointer to the frame decoded by the DSP instead of copying it. I'm a high level kind of guy, so my question is: are such optimizations still possible with your proposal? regards, Koen ------------------------------------------------------------------------------ Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day trial. Simplify your report design, integration and deployment - and focus on what you do best, core application coding. Discover what's new with Crystal Reports now. http://p.sf.net/sfu/bobj-july _______________________________________________ gstreamer-devel mailing list [hidden email] https://lists.sourceforge.net/lists/listinfo/gstreamer-devel |
On Friday, September 11, 2009 at 19:02 +0200, Benjamin Otte wrote:
> These are the steps I'd like to propose as a solution:
>
> 1) Add extensive YUV support to pixman
> The goal is to add an infrastructure so one can support at least the
> formats supported by ffmpegcolorspace today. In fact the ffmpeg
> infrastructure fits pretty well to pixman, but I'm not sure if a
> straight port is acceptable license-wise.
>
> 2) Add support to Cairo to create surfaces from any pixman image
> I'm not sure how hard this would be, as it basically circumvents
> cairo_format_t - might be possible to hook it into image surfaces or
> might be better to use a different surface backend. But it'd just add
> a single function like cairo_pixman_surface_create (pixman_image_t
> *image);

As you can get a cairo_t from every surface, would this mean that all cairo stuff has to support all the pixman surface formats? I mean, would it be possible later to set YUV colors instead of RGB colors in cairo, would it be possible to draw lines or whatever in cairo on some YUV surface, etc.? In that case you probably have a lot of work to do ;)

> 3) Add a new caps to GStreamer: video/cairo
> I'm not sure yet about the specific properties required, but certainly
> framerate, width and height are required. Probably an optional
> pixman-format is required, too. Buffers passed in this format only
> contain a reference to a cairo surface in the buffer's data.

(Should really be video/x-cairo). The properties should probably include the pixman-format too, because this way elements can still give preferences (if they work really better on some format than on another) or can still restrict themselves to a single format (because they need to fiddle with the bits themselves instead of using cairo).

> 4) Port elements to use this cairo API
> Either add new elements (cairovideotestsrc, cairocolorspace) or add
> support to the old ones. While doing this, refine and improve cairo
> or pixman, so the elements can be implemented as nicely as possible. A
> lot of code inside GStreamer should go away.

That would be the same as how gst-plugins-gl works nowadays. I'm all for it, that's definitely a good idea. Not sure if we want a cairo dependency on every video element now already though...

> 6) For the next major GStreamer release, remove video/x-raw-*
> The old formats are not needed anymore, they can be removed. All
> elements are ported to the new API.

Which means that cairo/pixman must have a good framework in place to also add new formats easily. If that's given it might make sense, yes.

cairo/pixman should also support 8 bit and 16 bit grayscale and the different Bayer formats too then, btw.

Also it would mean that if you have some codec that decodes into some weird color format that is not supported by pixman/cairo yet, you need to wait for pixman/cairo to support it and gst-plugins-foo to depend on that version, or you have to do conversions internally.

> 4) "Cairo/GStreamer developers will not like that"
> In fact, I talked to both of the maintainers and the response in both
> cases was pretty positive, but skeptical about the feasibility of such
> a project, mostly fueled by preconceptions about what Cairo or
> GStreamer is and how it works. I consider myself part of both the
> Cairo and GStreamer communities and know the code in quite some detail,
> and I do think it's a very good fit.

Until step 4 of your plan it definitely makes sense and should be done. Start a GIT repository for gst-plugins-cairo and I'd help you to get the GStreamer part of things done ;) This could also be done today with just supporting RGB/RGBA.
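
For the caps discussed above, a sink pad template could look roughly like this - a sketch only, since neither the video/x-cairo caps nor any format field exist yet; the range syntax mirrors what gst-plugins-base uses for video/x-raw-*:

#include <gst/gst.h>

/* Sketch: pad template for the proposed video/x-cairo caps.  An element
 * that only handles one format could additionally add something like
 * format = (string) "ARGB32" to express that restriction. */
static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE (
    "sink",
    GST_PAD_SINK,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS (
        "video/x-cairo, "
        "width = (int) [ 1, max ], "
        "height = (int) [ 1, max ], "
        "framerate = (fraction) [ 0, max ]"));
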
All other steps might not make sense - I'm not sure what the cairo/pixman people think about supporting random color formats. Also it would mean that GStreamer depends on cairo as a required dependency. But I guess cairo/pixman are at least portable enough to work everywhere.

On Friday, September 11, 2009 at 19:02 +0200, Benjamin Otte wrote:
> [...]
> 6) For the next major GStreamer release, remove video/x-raw-*
> The old formats are not needed anymore, they can be removed. All
> elements are ported to the new API.
> [...]

Oh, another thing: maybe only make video/x-cairo the preferred option but keep video/x-raw-* as an option.

Hi Benjamin,
All of this sounds good to me. Below are a few comments on how YUV formats could be integrated in pixman.

> 1) Add extensive YUV support to pixman

Extensive YUV support would be a very useful addition to pixman. Apart from the benefits you listed, I think it also makes sense to have YUV support in XRender as a more powerful way of doing textured video than Xv.

* Tiles

Writing one pixel in a chroma subsampled format requires access to a 2x2 tile of RGB pixels, but the current general compositing only provides one scanline.

A solution to that may be to move to a tiled architecture where general_composite() processes destination tiles instead of scanlines. This would require changing all the scanline accessors, but hopefully that is a mostly mechanical process.

Aside from hopefully solving the subsampling problem, tiles would also have better cache behavior for rotated or filtered sources.

* Format specification

Pixman already has some support for YUV formats:

    PIXMAN_yuy2 = PIXMAN_FORMAT(16,PIXMAN_TYPE_YUY2,0,0,0,0),
    PIXMAN_yv12 = PIXMAN_FORMAT(12,PIXMAN_TYPE_YV12,0,0,0,0)

but you can't write to them because of the subsampling problem mentioned above, and so there is the pixman_format_supported_destination() API. It was probably a mistake to add that API, and future formats should always be supported for both reading and writing.

Having a pixman_format_type_t like PIXMAN_TYPE_YUY2 and TYPE_YV12 for each video format is not going to scale, so we'll need some new scheme to describe video formats. I don't know enough about video formats to have an opinion on how to do this, but I don't think there is anything particularly great about the two existing format codes, so hopefully we can get away with deprecating them and respecifying them within the new scheme.

Thanks,
Soren

Sebastian Dröge <sebastian.droege <at> collabora.co.uk> writes:
> As you can get a cairo_t from every surface, would this mean that all
> cairo stuff has to support all the pixman surface formats? I mean, would
> it be possible later to set YUV colors instead of RGB colors in cairo,
> would it be possible to draw lines or whatever in cairo on some YUV
> surface, etc.?
> In that case you probably have a lot of work to do ;)

Yes, the idea is that you have a function like this:

cairo_surface_t *gst_cairo_create_surface (GstBuffer *buffer);

It would look at the buffer's caps and create the right surface from it. So writing a colorspace element would look like this:

static GstFlowReturn
gst_cairo_colorspace_transform (GstBaseTransform * btrans, GstBuffer * inbuf,
    GstBuffer * outbuf)
{
  cairo_surface_t *in, *out;
  cairo_t *cr;

  in = gst_cairo_create_surface (inbuf);
  out = gst_cairo_create_surface (outbuf);
  cr = cairo_create (out);
  cairo_set_source_surface (cr, in, 0, 0);
  cairo_set_operator (cr, CAIRO_OPERATOR_SOURCE);
  cairo_paint (cr);
  cairo_destroy (cr);
  cairo_surface_destroy (in);
  cairo_surface_destroy (out);

  return GST_FLOW_OK;
}

Making that element also a video scaler is one more cairo_scale().

The work required to get this working is pretty small, as both pixman and cairo are _very_ generic: you basically just need to implement a read_pixel and store_pixel vfunc for every format that returns an ARGB guint32 and everything will just work. The complexity comes from making it work fast, but even that is not hard, as pixman has a very sophisticated acceleration architecture. So it's just writing the accelerated versions, which I intend to do for all the ones that already exist and then focus on I420 and AYUV.

> > 4) Port elements to use this cairo API
> > Either add new elements (cairovideotestsrc, cairocolorspace) or add
> > support to the old ones. While doing this, refine and improve cairo
> > or pixman, so the elements can be implemented as nicely as possible. A
> > lot of code inside GStreamer should go away.
>
> That would be the same as how gst-plugins-gl works nowadays. I'm all for it,
> that's definitely a good idea. Not sure if we want a cairo dependency on
> every video element now already though...

We could deprecate the old elements and add new ones, or we could improve the old ones. Not sure what the preferred way is. I guess historically gst has gone the "write new ones" route. Both ways should be equally feasible.

> Which means that cairo/pixman must have a good framework in place to
> also add new formats easily. If that's given it might make sense, yes.
>
> cairo/pixman should also support 8 bit and 16 bit grayscale and the
> different Bayer formats too then, btw.

As I said before: Adding new formats is easy, what is hard is making them fast. :)

> Also it would mean that if you have some codec that decodes into some
> weird color format that is not supported by pixman/cairo yet, you
> need to wait for pixman/cairo to support it and gst-plugins-foo to
> depend on that version, or you have to do conversions internally.

Well, there are quite a few formats that are only used by one or two codecs (like upside down raw video in AVI). I think it makes a lot of sense to not expose them and require separate elements for them.

> All other steps might not make sense - I'm not sure what the cairo/pixman
> people think about supporting random color formats. Also it would mean
> that GStreamer depends on cairo as a required dependency. But I guess
> cairo/pixman are at least portable enough to work everywhere.

Yeah, the hard dependency of plugins-base on cairo would be necessary. But considering cairo is a blessed dep today (textoverlay), it shouldn't be that hard to argue?
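
To illustrate the scaling remark, that variant might look like this (a sketch; copy_scaled is a made-up helper, and the in/out sizes would come from the negotiated caps, here they are simply passed in):

#include <cairo.h>

/* Sketch: same transform as above, but scaling the input surface to the
 * output size.  Uses the usual cairo idiom: scale the CTM, then set the
 * source surface at the origin and paint. */
static void
copy_scaled (cairo_surface_t *in, int in_w, int in_h,
    cairo_surface_t *out, int out_w, int out_h)
{
  cairo_t *cr = cairo_create (out);

  cairo_scale (cr, (double) out_w / in_w, (double) out_h / in_h);
  cairo_set_source_surface (cr, in, 0, 0);
  cairo_set_operator (cr, CAIRO_OPERATOR_SOURCE);
  cairo_paint (cr);
  cairo_destroy (cr);
}
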
Benjamin

Benjamin Otte wrote:
> [...]
> The work required to get this working is pretty small, as both pixman and cairo
> are _very_ generic: you basically just need to implement a read_pixel and
> store_pixel vfunc for every format that returns an ARGB guint32 and everything
> will just work.

But having read_pixel/write_pixel will be slow. How can that be optimized (and/or vectorized)? If I recall right, "graphics drivers" for Turbo Pascal in DOS worked that way and they were slow!

Stefan

Stefan Kost <ensonic <at> hora-obscura.de> writes:
> But having read_pixel/write_pixel will be slow. How can that be
> optimized (and/or vectorized)? If I recall right, "graphics drivers" for
> Turbo Pascal in DOS worked that way and they were slow!

As I said, pixman is very sophisticated.

The first step is implementing the pixman_image ops - see http://cgit.freedesktop.org/pixman/tree/pixman/pixman-private.h#n160 - I think fetch_scanline_raw_32 and store_scanline_raw_32 are required, the rest will use defaults. Once you've done that, all cairo ops work on this format. And it'll probably be incredibly slow.

After that, you have a look at pixman_implementation_t - see http://cgit.freedesktop.org/pixman/tree/pixman/pixman-private.h#n372 - and realize that for every rendering operation, there is a vfunc that you can implement specifically tuned for that operation. So you can make "AYUV OVER I420" or "I420 SOURCE RGB" really fast and keep "YVU9 DIFFERENCE NV12" as slow as you want.

Of course, that's a lot of work. So there are also steps in between, like the ability to implement blit and fill functions for simple copies or fills with a single color - see http://cgit.freedesktop.org/pixman/tree/pixman/pixman-private.h#n405 - which get called automatically by the general functions.

Of course, those functions can be separately optimized for different architectures - see http://cgit.freedesktop.org/pixman/tree/pixman/pixman-arm-neon.c for an example - so you can do all the optimizations you want.

So I fail to see any reason why some important code would be slow.

Benjamin

Benjamin Otte wrote:
> [...]
> Of course, those functions can be separately optimized for different
> architectures - see
> http://cgit.freedesktop.org/pixman/tree/pixman/pixman-arm-neon.c for an
> example - so you can do all the optimizations you want.
>
> So I fail to see any reason why some important code would be slow.

That sounds better then, but it also means that the work is not "pretty small" as you've said earlier. Anyway, that is to be expected. Now we just need something like orc that can help you to write all those variants for all those datatypes :)

Stefan

Soeren Sandmann wrote:

> Writing one pixel in a chroma subsampled format requires access to a
> 2x2 tile of RGB pixels, but the current general compositing only
> provides one scanline.

This is true of 4:1:1 (and other things with "1" in them, but that is the only common one). Most of my work with compressed YUV has been with 4:2:2, which is alternating yuyvyuyv... and can be directly written from a single scanline.

> A solution to that may be to move to a tiled architecture where
> general_composite() processes destination tiles instead of
> scanlines. This would require changing all the scanline accessors, but
> hopefully that is a mostly mechanical process.

This probably won't help if the borders of the tiles do not line up with the blocks needed. If the input is translated by 1 pixel vertically then you will need multiple tiles to write portions of the image.

In general I consider tiled APIs to make things unnecessarily complicated. The majority of cairo input is packed into an array. You either need to require images to be padded out to a multiple of tile size, or you need to greatly complicate things with "partial tiles" with whatever code is needed to avoid ever addressing the non-existent parts of the tiles. Scanlines are instead trivial to extract from a packed array, and save the overhead of having to think about the vertical iterator, allow translation and cropping and vertical flipping to be done in-place, and require small enough amounts of temporary storage that it tends to be done in the cpu cache.

I think storage into 4:1:1 must be done by keeping the previous scanline in a temporary buffer and combining them in the scanline processor that is writing to the buffer.

> Aside from hopefully solving the subsampling problem, tiles would also
> have better cache behavior for rotated or filtered sources.

No, the performance is TERRIBLE for filters. A filter near the edge of a tile will require an entire neighboring tile. In scanlines the filter always gets only the exact input scanlines needed.

Tiles do help for rotation of giant images, and for drawing a section out of the center of an image. But neither of these are common operations for Cairo, which really wants to draw images that are smaller than the screen fast.

Tiles are also helpful for operations that only change a small portion of the image. But I doubt cairo is going to be altered to use a reference-counted set of tiles for all storage; I suspect it is much faster to always use memory laid out such that it is in the form that the hardware wants. This makes it impossible to use this advantage of tiles.

I think compression of YUV data would have to be done by buffers on I/O. Except for detecting literal copies with a translation that allows it, I don't think cairo or pixman should attempt to do anything with compressed YUV, just like it does not attempt to do anything with compressed JPEG data. It should decompose it into YUV channels.

Even then YUV support requires special code, as it is not just RGB with different primaries. Black is 0.5 in the UV channels.
If this is to be fast at all, all the compositing operations need to be changed. It is not as bad as it might look; the interdependence of the channels should cancel out of the math, but the calculation of UV will be different from the one for Y and RGB.
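
For reference, this is roughly what the per-pixel math looks like for BT.601 video-range data (a sketch with the usual approximate coefficients; the exact matrix and range depend on the source). The 128 offsets on U and V are why "black" sits at 0.5 there:

#include <stdint.h>

/* Sketch: BT.601 video-range YCbCr -> RGB for one pixel.  Coefficients
 * are the common approximations; real code would use fixed point/SIMD
 * and clamp the results to [0, 255]. */
static void
ycbcr601_to_rgb (int y, int cb, int cr, int *r, int *g, int *b)
{
  double yf = 1.164 * (y - 16);

  *r = (int) (yf + 1.596 * (cr - 128));
  *g = (int) (yf - 0.813 * (cr - 128) - 0.392 * (cb - 128));
  *b = (int) (yf + 2.017 * (cb - 128));
}
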
On Sep 11, 2009, at 12:02 PM, Benjamin Otte wrote:

> Hi,
>
> This is an idea that's been brewing in my head for a bit. After
> thinking about it for a while and poking some people on IRC, I'm
> pretty convinced it's the best way forward.

So... is the proposal that you would pass a cairo_surface_t in the GstBuffer (GstVideoBuffer?), or would you continue to pass the buffer as just a byte ptr?

I'm curious if the cairo dependency would only be for elements that are touching pixels, or would platforms that have all the heavy lifting done on some sorts of hardware and/or coprocessor(s) also inherit this dependency?

BR,
-R

Bill Spitzak <[hidden email]> writes:
> In general I consider tiled APIs to make things unnecessarily
> complicated. The majority of cairo input is packed into an array. You
> either need to require images to be padded out to a multiple of tile
> size, or you need to greatly complicate things with "partial tiles"
> with whatever code is needed to avoid ever addressing the non-existent
> parts of the tiles.

What I'm proposing is not to actually *store* the images in tiles, but simply to *access* them in a tiled pattern. So I'm not proposing any externally visible tiled *API* for now. (Though I think support for tiled storage may also be interesting for various reasons.)

> > Aside from hopefully solving the subsampling problem, tiles would also
> > have better cache behavior for rotated or filtered sources.
>
> No, the performance is TERRIBLE for filters. A filter near the edge of
> a tile will require an entire neighboring tile. In scanlines the
> filter always gets only the exact input scanlines needed.

Consider processing an image with a 9x9 filter kernel. If you process it on a scanline-by-scanline basis, you will need to keep 9 scanlines in flight at the same time. This is more than will typically fit in L1, so if the cache replacement policy is Least Recently Used, then each processed cacheline will cause nine cache misses. So processing 32 scanlines of 64 cache lines causes 32 * 64 * 9 = 18432 cache misses.

On the other hand, processing 32 tiles of 32x32 pixels causes a total of 32 tiles times (32 + 8) rows times 2 cachelines = 2560 misses.

> Tiles do help for rotation of giant images, and for drawing a section
> out of the center of an image. But neither of these are common
> operations for Cairo, which really wants to draw images that are
> smaller than the screen fast.

I don't see why tiles don't help for small rotations too. Cache lines are pretty small.

Soren

So, here is an update on what happened on this in the last 3 weeks.

(Warning: It might get quite in-depth in both Cairo and GStreamer terminology, so if you don't know about some things I'm talking about, don't hesitate to ask me about it in a reply or on IRC. I'll probably assume more than rudimentary GStreamer and Cairo knowledge in this mail. I want to keep it short and to the point.)

When I'm talking about test results, those were created on a Macbook 2.2 with an Intel 945 GPU on Karmic. Don't expect this to be as performant on old hard-/software. I would however expect it to be as performant on recent X servers with Intel and Radeons. But you have been warned. :)

What have I done so far?

I've written code to implement my ideas. The code exists in public git branches and is expected to work, should you want to test it. The code should compile fine on any somewhat recent distro. Read: If you can compile git master of gstreamer and cairo, you can compile this code. Of course, it's alpha quality, so expect it to change quickly. But it should definitely compile and run.

pixman: http://cgit.freedesktop.org/~company/pixman/log/?h=yuv
I added support for most YUV formats that GStreamer supports today. The missing ones weren't added because they weren't necessary to prove my point. I also enhanced the API to allow creating planar images. The code is not yet optimized in any way, but I intend to hook in David's ORC code to accelerate common YUV operations.

cairo: http://cgit.freedesktop.org/~company/cairo/log/?h=yuv
I exported one function to be able to use any pixman format and be able to create planar image surfaces. A bunch of bugfixes were necessary, too. They're all landed in git master though.

gst-plugins-cairo: http://cgit.freedesktop.org/~company/gst-plugins-cairo
This new repository contains a library libgstcairo and a bunch of plugins using that library. The library does three things. First, it abstracts the caps handling. This allows adding new caps to the library without the need to update the elements. It also adds a bunch of support functions that make writing caps nego code a lot simpler. Second, it contains code to create cairo surfaces from GstBuffers and vice versa. And last but not least it introduces a new format "video/x-cairo" that allows passing cairo surfaces in buffers. As this is all done transparently, elements will render to GL or whatever surfaces the moment they become available to libgstcairo, without the need to recompile them.

The elements implement the functionality of the most common GStreamer raw video elements. So far, there are (in order of creation and with the elements they're intended to replace):
- cairocolorspace (ffmpegcolorspace)
- puzzle
- cairotestsrc (videotestsrc)
- pangotimeoverlay (timeoverlay)
- cairoxsink (ximagesink/xvimagesink)
- cairoscale (videoscale)

I'll code more elements and implement features for the current ones as I get around to it. In particular a full videomixer and textoverlay replacement are on my list. These elements are parallel-installable with current GStreamer elements; they will not override any existing elements.

What did I learn so far?

Cairo looks like the perfect match for GStreamer video handling, even when talking about memory buffers only. I was surprised at how quickly I could achieve progress and that there are no features I had to leave behind while porting elements. In fact, elements gained features because they support more video/x-raw-* formats now than they did before.

Also, the code required got a LOT smaller.
Most current elements duplicate the code to handle formats (wc -l for elements: ffmpegcolorspace: 7400, videoscale: 4600, videotestsrc: 3250) while gst-cairo hooks into the Cairo code with little overhead (libgstcairo: 1100 lines, cairocolorspace: 150, cairotestsrc: 900, cairoscale: 170). So we can talk about orders of magnitude of code that gets saved while not losing features.

The performance when compared with the default elements is somewhere between 5x slower and 3x faster, depending on what one is doing and without me having done any optimizations. I expect performance to be at least equal to current code, but likely better, once all optimizations are hooked up. So much for backwards compatibility. (So much for Cairo being "slow", when it can beat GStreamer noticeably in some cases. ;))

The interesting thing is video/x-cairo. This can allow running a whole pipeline on the GPU without any need to move the data in main memory. When it works, its performance improvements can be measured in orders of magnitude again. A simple example (real/user times from "time", gst-launch pipeline used):

8.631s - 4.712s - videotestsrc num-buffers=1000 ! video/x-raw-yuv,width=800,height=600 ! xvimagesink sync=false
6.581s - 4.564s - videotestsrc num-buffers=1000 ! video/x-raw-rgb,width=800,height=600 ! ximagesink sync=false
0.632s - 0.488s - cairotestsrc num-buffers=1000 ! video/x-cairo,width=800,height=600 ! cairoxsink sync=false

Or a somewhat more demanding example:

18.843s - 15.585s - videotestsrc num-buffers=1000 ! timeoverlay ! video/x-raw-yuv,width=800,height=600 ! xvimagesink sync=false
21.552s - 17.237s - videotestsrc num-buffers=1000 ! timeoverlay ! video/x-raw-rgb,width=800,height=600 ! ximagesink sync=false
1.187s - 0.668s - cairotestsrc num-buffers=1000 ! pangotimeoverlay ! video/x-cairo,width=800,height=600 ! cairoxsink sync=false

Getting these performance gains requires access to hardware buffers in the whole pipeline. And the current design of hardware access libraries (both GL and X) and GStreamer doesn't make it any easier. Which brings us to the next point:

What are the remaining issues?

First of all: For memory buffers there are no remaining issues. You can probably use cairocolorspace and cairoscale as drop-in replacements without any issues today.

With that said, the one big remaining issue is: getting things reliably hardware-accelerated. There's a noticeable difference between all elements being accelerated and all but one element being accelerated. The code falls back to software rendering whenever something is not supported - so there are no internal flow errors or even crashes - but it's a performance difference. Usually it's the difference between no CPU usage and one busy core. (On a lighter note, with a CPU meter it's easy to detect if the whole pipeline is properly accelerated.)

Here's a list of issues I'm facing (from higher to lower layer):

- GStreamer threading
Code that involves GstBuffers can be called pretty much by any thread at any time - to be exact: buffers can be read by multiple threads, but only one thread at a time may write to it. This thread however may change. So there is a challenge in making sure that cairo surfaces that are kept inside buffers don't step on each other's toes from multiple threads. This is not an issue with image surfaces, but it is an issue with at least GL and X. Not sure about DirectFB, DirectX or DRM, but I'd suspect they have similar issues.
Getting this right is possible today, but it would be nice if GStreamer would tell buffers when they are passing a thread boundary. I suspect this is not easy to do before 0.11 though.

- GStreamer buffer allocation
Buffer allocation code has quite some places where it simply returns a memory buffer when caps do not match. While gstcairo handles this fine and falls back to software rendering, it'll get slow. So making sure that in a pipeline like

cairotestsrc ! video/x-cairo,width=800,height=600 ! cairoscale ! video/x-cairo,width=600,height=450 ! cairoxsink

the cairotestsrc can allocate an 800x600 buffer from cairoxsink is desirable (a sketch of the desired call is after this list). While some ideas exist on how to make this work, there's no supported way of making it happen.

- Cairo meta surfaces copy whole surfaces
One way of solving the aforementioned issue, and a nice way to handle the threading issues listed above, is to use meta surfaces and only replay them in the sink element. Unfortunately, Cairo copies image buffers when generating snapshots, which kills performance right there. It would be nice if there was a way to keep surfaces unchanged until they are modified (copy in begin_modification maybe). If I comment out that code, this solution works fast today. It has one drawback though: As meta surfaces keep references to source surfaces around, we'd need to have threadsafe source surfaces. So we'd either need to not use GL/X surfaces at all (and rely on meta surfaces only) or find a way to copy the surfaces when moving over thread boundaries.

- Rendering to subsampled images
I didn't spend a lot of time after finding an ok solution, but it's a challenging task to support rendering to vertically subsampled images with pixman's scanline based approach. On one hand this is a pretty futile attempt, as the subsampling will result in artifacts no matter what one does, but on the other hand it'd be nice if rendering would work, so one could support subtitles or even overlays as seen on TV. And the most-used YUV format (I420/YV12) is vertically (as well as horizontally) subsampled. It's not terribly important, as in most cases conversion can be done as the last step, but if somebody has a solution to the problem, I'm all ears.

- Ways to upload YUV data
There is currently no good way to get an accelerated upload of YUV data to X. The only ways I'm aware of are GL (see gst-plugins-gl for code) and Xv. So either we'll need to continue converting to RGB in the X case and focus on using cairo-gl, make cairo use Xv, or convince the X people that such a thing as YUV uploads would be a welcome addition to XRender. It seems X people are currently at XDC getting all excited about Wayland, so I don't have very high hopes.

- Using the GPU's video decoding abilities
As GPUs can decode videos in all the recent formats, it would be nice if there could be elements that take raw MPEG, H264 or whatever frames and stick the result into a cairo buffer, preferably on the GPU. This would also get around the issue above for most videos people watch today. Unfortunately it seems the people involved in these projects haven't yet figured out if they want to name it vaapi, vdpau, xvba or xvmc (alphabetical order here, I have no preferences), what capitalization scheme they want to use or if they even want to make it open source. So it seems this is going nowhere in the foreseeable future, too.
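
For the buffer allocation point, the call I'd like a source to be able to make - and actually get a downstream-backed surface out of - looks roughly like this. A sketch only: the exact video/x-cairo fields are whatever gst-plugins-cairo ends up using, and alloc_cairo_buffer is a made-up helper:

#include <gst/gst.h>

/* Sketch: ask downstream (e.g. cairoxsink) for an 800x600 video/x-cairo
 * buffer so the source can render straight into the sink's surface.
 * The size argument is moot here, since such a buffer would only carry
 * a reference to a cairo surface. */
static GstFlowReturn
alloc_cairo_buffer (GstPad *srcpad, GstBuffer **buf)
{
  GstCaps *caps = gst_caps_new_simple ("video/x-cairo",
      "width", G_TYPE_INT, 800,
      "height", G_TYPE_INT, 600,
      "framerate", GST_TYPE_FRACTION, 30, 1,
      NULL);
  GstFlowReturn ret;

  ret = gst_pad_alloc_buffer_and_set_caps (srcpad, GST_BUFFER_OFFSET_NONE,
      0, caps, buf);
  gst_caps_unref (caps);

  return ret;
}
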
What now?

When talking on IRC about this I realized that there is quite a lack of knowledge on all sides - my knowledge often isn't deep enough, GStreamer people don't know enough about state-of-the-art video handling and its gains and pitfalls, Cairo and X people lack knowledge about the requirements for video playback - and this lack of knowledge often results in preconceptions that lead to wrong decisions and make life harder on all sides. It would be nice if there was a way to get you people together and actually educate each other about this process. I'd suggest a hackfest, but I'm not sure what others think and who to approach for funding and locations.

There's also the issue of the maintainers' opinions about this code. As there are quite a few projects involved (GStreamer, Cairo, pixman, and possibly X), I'm not really interested in spending a lot of work on polishing code that ends up in some demo repository or gets rejected. It'd also be nice to get a review sooner rather than later, so I can fix design issues that are in need of updating while not having so much code depend on them. And of course, I like people reviewing and complimenting me on my code. :)

Then there's gst-plugins-gl. The GL plugins and my work touch on some of the same issues (most notably hardware acceleration) and I'd like to make sure this code can work with their approach and make use of cairo-gl buffers internally. The best possible outcome from my point of view would be if we could port the GL plugins to use gstcairo and make gstcairo provide the required functionality. That way we'd get rid of the need to put in explicit upload and download elements and gain the ability to do all the GL stuff. Unfortunately I lack knowledge about GL, so it'd be nice if someone else could look at that.

I think that's all for now,
Benjamin

2009/9/30 Benjamin Otte <[hidden email]>:
> What now?
>
> When talking on IRC about this I realized that there is quite a lack
> of knowledge on all sides - my knowledge often isn't deep enough,
> GStreamer people don't know enough about state-of-the-art video
> handling and its gains and pitfalls, Cairo and X people lack knowledge
> about the requirements for video playback - and this lack of knowledge
> often results in preconceptions that lead to wrong decisions and make
> life harder on all sides.
> It would be nice if there was a way to get you people together and
> actually educate each other about this process. I'd suggest a
> hackfest, but I'm not sure what others think and who to approach for
> funding and locations.

For funding and locations you should approach someone at the board; I approached Behdad for the Gtk+ theming hackfest and organized it at my workplace. The location is better decided once you have an idea of who's coming, so that you can find a location that suits them better (and therefore make it cheaper to do).

As the Mozilla Foundation is also involved in both Cairo and video these days, I'm pretty sure we can find a way for them to support the hackfest as well.

So for the first step, try to come up with a list of people that would be interested in going to such a hackfest.

BTW, great work!!! I can't wait to see some results.

--
Cheers,
Alberto Ruiz

Excerpts from Benjamin Otte's message of Wed Sep 30 07:06:04 -0700 2009:
> What have I done so far?

Fantastic stuff, Benjamin! Thanks for doing all this and writing it all up.

> - Cairo meta surfaces copy whole surfaces
> One way of solving the aforementioned issue, and a nice way to handle
> the threading issues listed above, is to use meta surfaces and only
> replay them in the sink element. Unfortunately, Cairo copies image
> buffers when generating snapshots, which kills performance right
> there.

The meta-surface image-copying predates a bunch of copy-on-write work in cairo for more efficient snapshots. It should be the case that what the meta-surface does here is to create a *snapshot* of each surface (not an explicit copy). And if that's the case, and it's simply a matter of making the snapshot code delay that copy until actually needed, then that should be easy with the infrastructure in place in cairo now.

> - Ways to upload YUV data
> There is currently no good way to get an accelerated upload of YUV
> data to X. The only ways I'm aware of are GL (see gst-plugins-gl for
> code) and Xv. So either we'll need to continue converting to RGB in
> the X case and focus on using cairo-gl, make cairo use Xv, or convince
> the X people that such a thing as YUV uploads would be a welcome
> addition to XRender.
> It seems X people are currently at XDC getting all excited about
> Wayland, so I don't have very high hopes.

I don't understand your final point here at all. Having a bunch of X people together at the same time seems an ideal way to make progress here. I'll go ahead and read your paragraph here to the group this morning and let you know what comes of it.

> When talking on IRC about this I realized that there is quite a lack
> of knowledge on all sides - my knowledge often isn't deep enough,
> GStreamer people don't know enough about state-of-the-art video
> handling and its gains and pitfalls, Cairo and X people lack knowledge
> about the requirements for video playback - and this lack of knowledge
> often results in preconceptions that lead to wrong decisions and make
> life harder on all sides.
> It would be nice if there was a way to get you people together and
> actually educate each other about this process. I'd suggest a
> hackfest, but I'm not sure what others think and who to approach for
> funding and locations.

An event like the Linux Plumbers Conference is meant to address these kinds of cross-project issues that are hard to solve when many get-togethers address each project in isolation. Unfortunately, Plumbers just happened last week so the next one will be a year away.

As for funding, the X.org Foundation has funds and is very interested in supporting X-related development events like this. The X.org Foundation Board of Directors just stood up at XDC and asked for people to request funds they need to get work done (whether for hardware or for hackfests, etc.). So I think this is an ideal case. When you have some details put together, please feel free to email the board as a whole ([hidden email]) or me individually (as a board member for at least the next few months).

Again, great work!

-Carl
