Video 3D support

17 messages

Video 3D support

Martin Bisson
Hi,

I'm currently working on adding video 3D support for GStreamer, and I need a little advice from the community.

To summarize, I'm basically (starting by) developing a plugin that would take 2 video streams and combine them into one single stream.  The way I see it, this plugin would work like this example: http://www.youtube.com/profile?v=UTOwV5IJq48&user=inouek3D (see the "3D" drop-down at the bottom right of the video).  This means there would be a property on the plugin controlling the format of the output: the two streams could be merged side-by-side, one on top of the other, interleaved, into a red-green combined video, etc.

My work is based on the proposal made for Google Summer of Code: http://gstreamer.freedesktop.org/wiki/Video3DSupport , and I would like to discuss the issue raised in https://bugzilla.gnome.org/show_bug.cgi?id=611157.  I'm trying to get input on what needs to be added to GStreamer in order to provide proper 3D video support by adding info to the 3D stream.  The different options could be any combination of:

1) Doing nothing: the resulting stream would just be treated as a "normal" video stream.
2) Adding caps: the caps could carry the 3D video information (left-right, top-bottom, red-green, etc.).
3) Adding buffer flags: the info would be in the buffer flags, analogous to audio streams (the left/right video streams seen as left/right sound channels, etc.).

I would like to hear from interested people about the best way to add the 3D video info to the stream.  As a fairly new GStreamer user, I think the simplest solution (treating the merged stream as a simple, normal video stream) is viable, but I guess adding information to the stream might be useful when it comes to sending this stream to 3D devices or to encoders.  That's why I'm asking you guys what flags and/or caps you think would be useful.

Up till now, I have implemented a simple plugin stacking 2 streams one on top of the other using GstCollectPads.  If it can be of any use to the discussion, I'll gladly send it (after cleaning up a little bit of my experimental dirt).  Also, if I'm not being clear, please say so and I'll do my best to make it clearer.
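As a rough sketch of the buffer arithmetic involved (plain Python standing in for the GstCollectPads element, illustration only): for packed pixel formats, stacking two frames one on top of the other is just concatenating their bytes, because the row stride is unchanged and only the height doubles.

```python
def stack_top_bottom(left: bytes, right: bytes,
                     width: int, height: int, bpp: int) -> bytes:
    """Stack two packed-format frames vertically.

    For packed layouts (e.g. RGB, YUY2) the top-bottom composite is
    simply the left frame's bytes followed by the right frame's:
    same row stride, doubled height.
    """
    expected = width * height * bpp
    assert len(left) == len(right) == expected
    return left + right

# Two tiny 4x2 RGB "frames" (3 bytes per pixel)
left = bytes([10] * (4 * 2 * 3))
right = bytes([20] * (4 * 2 * 3))
combined = stack_top_bottom(left, right, 4, 2, 3)
assert len(combined) == 4 * 4 * 3   # a 4x4 frame: same width, doubled height
```

Planar formats (I420 and friends) are more involved, since each plane has to be stacked separately.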

Hope to hear back from you guys,

Martin

------------------------------------------------------------------------------


_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/gstreamer-devel

Re: Video 3D support

Stefan Sauer
On 18.05.2010 10:12, Martin Bisson wrote:

> Hi,
>
> I'm currently working on adding video 3D support for GStreamer, and I
> need a little advice from the community.
>
> To summarize, I'm basically (starting by) developing a plugin that would
> take 2 video streams and convert them into one single stream.  The way I
> see it, this plugin would be made in the way of this example
> http://www.youtube.com/profile?v=UTOwV5IJq48&user=inouek3D (see the
> "3D" drop down at the bottom right of the video).  This means that there
> would be a property in the plugin controlling the format of the output :
> the two streams could be merged side-to-side, one-on-top-of-the-other,
> interleaved, in a red-green combined video, etc.
>
> My work is based on the proposal made for Google Summer of Code :
> http://gstreamer.freedesktop.org/wiki/Video3DSupport , and I would like
> to discuss the issue discussed in
> https://bugzilla.gnome.org/show_bug.cgi?id=611157.  I'm actually trying
> to get input on what needs to be added to GStreamer in order to provide
> proper 3D video support, by adding info to the 3D stream.  The different
> options could be any combination of:
>
> 1) doing nothing : the resulting stream would just be treated as a
> "normal" video stream
> 2) adding caps : the caps could have information about the 3D video
> information (left-right, top-bottom, red-green, etc.)
> 3) adding buffer flags : the info would be in the buffer flags, like
> audio streams (number of "channels" (left/right video streams seen as
> left/right sound channels), etc.)
>
> I would like to hear from interested people to know what would be the
> best way to add the 3D video info to the stream.  As a fairly new user
> to GStreamer, I think the simplest solution (consider the merged stream
> as a simple, normal, video stream) is viable, but I guess adding
> information to the stream might be useful when it comes to send this
> stream to 3D devices or to encoders.  That's why I'm asking you guys
> what flags and/or caps you think would be useful.
>
As no one has replied yet, please start with the last proposal in the comment
(caps and buffer flags).

> Up 'till know, I have implemented a simple plugin stacking 2 streams one
> on top of the other using GstCollectPads.  If it can be any use to the
> discussion, I'll gladly send it (after cleaning up a little bit of my
> experimental dirt).  Also, if I'm not being clear, please say so and
> I'll do my best to make it clearer.

Please set up a github/gitorious branch of gst-plugins-bad; there are several
people on IRC who can then help. That's the easiest way for us to follow. Don't
worry about having commented-out parts in the code for now :)

Stefan




Re: Video 3D support

David Schleef-2
In reply to this post by Martin Bisson
On Tue, May 18, 2010 at 07:12:26AM +0000, Martin Bisson wrote:

> My work is based on the proposal made for Google Summer of Code :
> http://gstreamer.freedesktop.org/wiki/Video3DSupport , and I would like to
> discuss the issue discussed in
> https://bugzilla.gnome.org/show_bug.cgi?id=611157.  I'm actually trying to
> get input on what needs to be added to GStreamer in order to provide proper
> 3D video support, by adding info to the 3D stream.  The different options
> could be any combination of:
>
> 1) doing nothing : the resulting stream would just be treated as a "normal"
> video stream
> 2) adding caps : the caps could have information about the 3D video
> information (left-right, top-bottom, red-green, etc.)
> 3) adding buffer flags : the info would be in the buffer flags, like audio
> streams (number of "channels" (left/right video streams seen as left/right
> sound channels), etc.)

The goal, imo, is to define how GStreamer handles stereo video
*natively*, that is, elements can automatically differentiate
between normal and stereo video, caps negotiation works as one
would expect, playback of a stereo video on a normal monitor
would allow for the option of viewing in mono or with red/green
glasses, etc.  A lower goal would be to create elements that
manipulate stereo video, but entirely manually.  I'm only
concerned with the former.

There are two main options: Use pairs of well-known raw image
layouts (i.e., I420, YUY2, UYVY, etc.), probably consecutive
in memory, for the left and right images.  Or, define a bunch
of new fourccs that mean "stereo video" that correspond to
existing layouts, but enhanced for stereo.

Using existing layouts: I recommend packing the two pictures
consecutively in memory, left then right.  The main rationale is
that conversion to a normal picture is simply changing the size
and/or pointer of the buffer and changing caps.  Other packing
arrangements might be important in the future, so having a
mandatory caps field marking the packing would be a good idea.
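A minimal sketch of that rationale (plain Python slices standing in for the pointer/size change; not GStreamer API): with the two pictures consecutive in memory, extracting either mono view needs no copy at all.

```python
# "Converting" consecutively packed stereo to mono is just re-describing
# the same memory: half the size, same (or offset) start pointer.
# memoryview slicing below is zero-copy, mirroring that adjustment.
width, height, bpp = 4, 2, 2              # e.g. a packed 4:2:2 layout
frame_size = width * height * bpp
stereo = bytearray(range(2 * frame_size))  # left frame, then right

mono_left = memoryview(stereo)[:frame_size]    # no copy made
mono_right = memoryview(stereo)[frame_size:]   # no copy made

assert bytes(mono_left) == bytes(stereo[:frame_size])
assert len(mono_right) == frame_size
```

Any other packing (side-by-side, interleaved) would force a real copy to recover a contiguous mono frame, which is the point of preferring consecutive layout.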

I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
or some such, instead of using video/x-raw-yuv,stereo=true.  Using
video/x-raw-yuv leads to endless compatibility problems: elements
that currently handle video/x-raw-yuv would silently do the wrong
thing with stereo video.  Using x-raw-yuv would mean width/height
in the caps would be double the *actual* width/height of the mono
video, which is hacky.  Also, converting from stereo to mono in
many cases would require copying.
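A toy model of that negotiation argument (the caps strings and helper functions here are hypothetical, following the naming proposed in this thread, and are not real GStreamer caps or API):

```python
# With a distinct media type, a legacy element simply refuses to link
# instead of silently mistreating a stereo buffer as oversized mono video.
def parse_caps(s: str) -> dict:
    """Parse a simplified 'media/type, key=value, ...' caps string."""
    media, _, rest = s.partition(",")
    fields = dict(kv.strip().split("=") for kv in rest.split(",") if kv.strip())
    return {"media": media.strip(), **fields}

def can_link(sink_media: str, src_caps: dict) -> bool:
    # An element only accepts caps whose media type it advertises.
    return src_caps["media"] == sink_media

stereo = parse_caps("video/x-raw-yuv-stereo, format=I420, layout=side-by-side")
assert not can_link("video/x-raw-yuv", stereo)        # legacy element: no link
assert can_link("video/x-raw-yuv-stereo", stereo)     # stereo-aware element: ok
```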

Defining new fourccs:  This has the obvious disadvantage that
we'd either need to keep these internal to GStreamer, or make them
well-known enough for other people to use them.  Integrating with
existing elements (and libgstvideo) is straightforward.  Adding
new layouts (side-by-side, top-bottom, memory consecutive, etc.)
is simple, although adds *lots* more fourccs.

I think that overall, using new fourccs would involve writing
less code and be less prone to bugs.  It is my preference.

Oh yeah, other methods such as dual pads for left/right, or buffer
flags, are non-starters: our attempts at dual pads for float audio
failed miserably, and we don't have any buffer flags available.
And there are other reasons which I don't feel like enumerating
right now.



dave...



Re: Video 3D support

bparker-2

What output formats are you planning to support? What I would love to see is mainly playback of side-by-side files in quad-buffered OpenGL (NVIDIA Quadro/3D Vision output to a 120 Hz monitor, active glasses) and horizontal interlaced (Zalman monitor, circular polarized passive glasses) format.

2010/05/23 0:51 "David Schleef" <[hidden email]>:

On Tue, May 18, 2010 at 07:12:26AM +0000, Martin Bisson wrote:
> My work is based on the proposal ma...




Re: Video 3D support

Martin Bisson
In reply to this post by David Schleef-2
David Schleef wrote:

> I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
> or some such, instead of using video/x-raw-yuv,stereo=true.  Using
> video/x-raw-yuv leads to endless compatibility problems: elements
> that currently handle video/x-raw-yuv would silently do the wrong
> thing with stereo video.  Using x-raw-yuv would mean width/height
> in the caps would be double the *actual* width/height of the mono
> video, which is hacky.  Also, converting from stereo to mono in
> many cases would require copying.
>
> Defining new fourccs:  This has the obvious disadvantage that
> we'd either need to keep these internal to GStreamer, or make them
> well-known enough for other people to use them.  Integrating with
> existing elements (and libgstvideo) is straightfoward.  Adding
> new layouts (side-by-side, top-bottom, memory consecutive, etc.)
> is simple, although adds *lots* more fourccs.
>  
This means that we would have something like:
- video/x-raw-yuv-stereo-side-by-side
- video/x-raw-yuv-stereo-top-bottom
- video/x-raw-yuv-stereo-row-interleaved

And the same thing for different layouts like YUY2 and UYVY?  Planar
layouts like I420 might be problematic for row interleaving, though,
because the u and v values are spread across 2 lines...
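A small sketch of that I420 problem (assumed row mapping, illustration only): each chroma row serves two luma rows, but after row-interleaving, those two luma rows belong to different eyes, so a shared chroma row would have to mix left- and right-eye colour.

```python
# In I420, luma rows 2k and 2k+1 share chroma row k. After
# row-interleaving left/right frames, adjacent luma rows alternate eyes.
def interleaved_row_source(row: int):
    """Return (eye, source_luma_row) for output luma row `row`."""
    eye = "left" if row % 2 == 0 else "right"
    return eye, row // 2

# Output luma rows 0 and 1 would share chroma row 0 in I420 ...
eye0, _ = interleaved_row_source(0)
eye1, _ = interleaved_row_source(1)
# ... yet they come from different eyes, so the chroma cannot be shared.
assert (eye0, eye1) == ("left", "right")
```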

So these caps would describe 3D streams that carry the 2 images.  But
what about streams that actually are combinations of the 2 images, like
red/cyan streams?  I guess since these would be displayed on normal
devices, they could just be normal streams, i.e. a
video/x-raw-yuv-stereo-side-by-side stream could be converted to a
red/cyan stream with caps video/x-raw-yuv.

I think this caps approach sounds like a good, simple approach that
would carry the information we need.  I'm not sure how I would deal with
interlaced frames yet, I'll have to look into that.

Thanks for your reply,

Martin


Re: Video 3D support

Martin Bisson
In reply to this post by bparker-2
bparker wrote:
>
> What output formats are you planning to support? What I would love to
> see is mainly playback of side-by-side files in quad-buffered opengl
> (nvidia quadro/3dvision output to 120hz monitor, active glasses) and
> horizontal interlaced (Zalman monitor, circular polarized passive
> glasses)  format.
>
For 3D streams, I plan to support side-by-side, top-bottom and
horizontal interlaced.  Dave mentioned memory consecutive, which would
be the same thing as top-bottom for non-planar formats.  I guess I could
add it for planar formats, if needed...

I also plan to add support for red/cyan and red/green image types.
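For illustration, one common red/cyan construction (a sketch of the per-pixel math only; real anaglyph filters use weighted variants) takes the red channel from the left eye and green/blue from the right:

```python
# "Color anaglyph" sketch: red from the left eye, green and blue from
# the right. Input frames are lists of (R, G, B) tuples, same length.
def anaglyph_red_cyan(left_rgb, right_rgb):
    return [(l[0], r[1], r[2]) for l, r in zip(left_rgb, right_rgb)]

left = [(255, 0, 0), (10, 20, 30)]
right = [(0, 255, 255), (40, 50, 60)]
out = anaglyph_red_cyan(left, right)
assert out == [(255, 255, 255), (10, 50, 60)]
```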

Mart


Re: Video 3D support

bparker-2
Yes, those would be the input formats, but what output formats will
there be? There are many different ways to output 3D streams.

Attached is a list from the Windows "Stereoscopic Player" program
(which is also bundled with NVIDIA glasses/Zalman monitors and others)
that shows what output formats it supports. This and the DepthQ player
seem to be the most featureful and up-to-date programs that play back
3D videos.

If you can't see the image, here is a shorter list of its output formats:

Source (Plays back unaltered, for spanned dual projector or new model 3DTV)
Monoscopic (Shows left or right eye of stereo stream only)
Dual Screen Output
NVIDIA Stereo Driver (3D Vision)
StereoBright™
Quad Buffered OpenGL (hardware page-flipping)
Sharp 3D Display
3D Enabled DLP-TV (uses checkerboard format)
iZ3D Monitor
Tridelity SL Series 3D Displays
SIS Attachment 4
Side By Side
Over/Under
Row Interlaced
Column Interlaced
True Anaglyph Red - Blue
True Anaglyph Red - Green
Gray Anaglyph Red - Cyan
Gray Anaglyph Yellow - Blue
Gray Anaglyph Green - Magenta
Half Color Anaglyph Red - Cyan
Half Color Anaglyph Yellow - Blue
Half Color Anaglyph Green - Magenta
Color Anaglyph Red - Cyan
Color Anaglyph Yellow - Blue
Color Anaglyph Green - Magenta
Optimized Anaglyph Red - Cyan
Optimized Anaglyph Yellow - Blue
Optimized Anaglyph Green - Magenta

Dual projector setups (with polarized filters) or late-model LED 3DTVs
(Samsung, Panasonic) do not require any special output method from
GStreamer to view 3D, but:

In order to have mainstream adoption of 3D playback with GStreamer,
these are, in my opinion, the most widely used output methods that need
to be supported:

Unaltered (as with the new LED 3DTVs; could also support automatic
switching to 3D mode if using HDMI 1.4)
Quad-Buffered OpenGL (supported on professional cards like Nvidia Quadro)
Nvidia 3D Vision
Horizontal Interlacing (Zalman monitor)
3D DLP TV (checkerboard)
Anaglyph

Also to support Autostereoscopic displays (lenticular or parallax
barrier), a multi-view output method would be required.
For example with lenticular displays, typically you output 8 or 9
different camera angles, and for parallax, a lot of times you use a
custom 3x3 grid made from a stereo stream to enable the multiple
viewing angles.

If you have any questions I would be glad to help.

-bp

2010/05/23 19:00 "Martin Bisson" <[hidden email]>:

bparker wrote:
>
> What output formats are you planning to support? What I would love to
> see is m...


stereoplayer.jpg (155K) Download Attachment

Re: Video 3D support

Gustavo Orrillo-2
http://www.depthq.com/

2010/5/23 bparker <[hidden email]>
Yes, those would be the input formats, but what output formats will
there be? ...

Re: Video 3D support

Martin Bisson
In reply to this post by bparker-2
Thanks, I took a look at "Stereoscopic Player".  At the end of my
project, I'd like to make a simple GTK example that does something
similar.

The way I see it, 2 "normal" video streams would be put together into
one 3D stream with the caps mentioned in the previous mail: side-by-side,
top-bottom, row-interleaved.  That would be the way a 3D stream would
make its way through the pipeline.  Then, a 3D stream could be output in
a number of different formats like the ones available in Stereoscopic
Player.  I figured that for simple "software" streams, like anaglyph,
the caps could be those of a normal stream.

For the other formats, well, first I don't have access to that kind of
hardware...  There might be some head-mounted displays I could get my
hands on at my university, but I'm not even sure.  Anyway, would we
actually need caps for those streams?  I figured that the output formats
tied to specific hardware would be handled by a plugin.  For example,
you would plug a normal 3D stream into the output plugin that could do
what is needed for either NVIDIA 3D Vision, iZ3D or whatever.  If that's
correct, there might be a property on the output plugin to select the
type of output, but no new caps would be needed.

Hope that makes sense,

Mart

bparker wrote:

> Yes, those would be the input formats, but what output formats will
> there be? ...



Re: Video 3D support

Stefan Sauer
In reply to this post by David Schleef-2
On 23.05.2010 07:47, David Schleef wrote:

> On Tue, May 18, 2010 at 07:12:26AM +0000, Martin Bisson wrote:
>  
>> My work is based on the proposal made for Google Summer of Code :
>> http://gstreamer.freedesktop.org/wiki/Video3DSupport , and I would like to
>> discuss the issue discussed in
>> https://bugzilla.gnome.org/show_bug.cgi?id=611157.  I'm actually trying to
>> get input on what needs to be added to GStreamer in order to provide proper
>> 3D video support, by adding info to the 3D stream.  The different options
>> could be any combination of:
>>
>> 1) doing nothing : the resulting stream would just be treated as a "normal"
>> video stream
>> 2) adding caps : the caps could have information about the 3D video
>> information (left-right, top-bottom, red-green, etc.)
>> 3) adding buffer flags : the info would be in the buffer flags, like audio
>> streams (number of "channels" (left/right video streams seen as left/right
>> sound channels), etc.)
>>    
> The goal, imo, is to define how GStreamer handles stereo video
> *natively*, that is, elements can automatically differentiate
> between normal and stereo video, caps negotiation works as one
> would expect, playback of a stereo video on a normal monitor
> would allow for the option of viewing in mono or with red/green
> glasses, etc.  A lower goal would be to create elements that
> manipulate stereo video, but entirely manually.  I'm only
> concerned with the former.
>  
Ack. That should be the scope of the project too. That's why we'd like to
get feedback on caps/flags.

> There are two main options: Use pairs of well-known raw image
> layouts (i.e., I420, YUY2, UYVY, etc.), probably consecutive
> in memory, for the left and right images.  Or, define a bunch
> of new fourccs that mean "stereo video" that correspond to
> existing layouts, but enhanced for stereo.
>
> Using existing layouts: I recommend packing the two pictures
> consecutively in memory, left then right.  The main rationale is
> that conversion to a normal picture is simply changing the size
> and/or pointer of the buffer and changing caps.  Other packing
> arrangements might be important in the future, so having a
> mandatory caps field marking the packing would be a good idea.
>  
Yep. Over/under would mean left, then right. We should support left/right
for interoperability, but it's slower to process.
> I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
> or some such, instead of using video/x-raw-yuv,stereo=true.  Using
> video/x-raw-yuv leads to endless compatibility problems: elements
> that currently handle video/x-raw-yuv would silently do the wrong
> thing with stereo video.  Using x-raw-yuv would mean width/height
> in the caps would be double the *actual* width/height of the mono
> video, which is hacky.  Also, converting from stereo to mono in
> many cases would require copying.
>  
We should not change the semantics of width/height, so a plugin that
doesn't know about stereo=true would process only the first half of the
buffer, that is, the left frame. If the element works in place, the 2nd
half is not modified. If it creates a new buffer, we lose the
stereo=true. For left/right packing we would get garbage :/
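A sketch of that failure mode (plain Python, illustrative buffer math only): if width/height keep describing a single eye, a stereo-unaware element touches only the first frame's worth of bytes.

```python
# Over/under packed buffer: left frame first, then right frame.
frame_size = 8
stereo = bytearray([1] * frame_size + [2] * frame_size)

# In-place element: inverts "its" frame; the right frame survives intact.
for i in range(frame_size):
    stereo[i] = 255 - stereo[i]
assert stereo[frame_size:] == bytearray([2] * frame_size)

# Element that allocates a new output buffer sized from width/height:
# the right frame is simply dropped.
new_buf = bytes(255 - b for b in stereo[:frame_size])
assert len(new_buf) == frame_size   # right eye gone
```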

> Defining new fourccs:  This has the obvious disadvantage that
> we'd either need to keep these internal to GStreamer, or make them
> well-known enough for other people to use them.  Integrating with
> existing elements (and libgstvideo) is straightforward.  Adding
> new layouts (side-by-side, top-bottom, memory consecutive, etc.)
> is simple, although adds *lots* more fourccs.
>
> I think that overall, using new fourccs would involve writing
> less code and be less prone to bugs.  It is my preference.
>  
Honestly I don't like it so much, as it scales badly :/ To be sure: you
mean that instead of
video/x-raw-yuv, format="I420" we do video/x-raw-yuv,
format="S420", layout="over/under" (yeah, crap), or
video/x-raw-yuv-stereo, format="I420", layout="over/under" ?

> Oh yeah, other methods such as dual pads for left/right, or buffer
> flags, are non-starters: our attempts at dual pads for float audio
> failed miserably, and we don't have any buffer flags available.
> And there are other reasons which I don't feel like enumerating
> right now.
>
>  
Don't worry, I remember some issues with these :)

Stefan

>
> dave...
>
>
> ------------------------------------------------------------------------------
>
> _______________________________________________
> gstreamer-devel mailing list
> [hidden email]
> https://lists.sourceforge.net/lists/listinfo/gstreamer-devel
>  



Re: Video 3D support

Stefan Sauer
In reply to this post by Martin Bisson
On 24.05.2010 01:52, Martin Bisson wrote:

> David Schleef wrote:
>  
>> I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
>> or some such, instead of using video/x-raw-yuv,stereo=true.  Using
>> video/x-raw-yuv leads to endless compatibility problems: elements
>> that currently handle video/x-raw-yuv would silently do the wrong
>> thing with stereo video.  Using x-raw-yuv would mean width/height
>> in the caps would be double the *actual* width/height of the mono
>> video, which is hacky.  Also, converting from stereo to mono in
>> many cases would require copying.
>>
>> Defining new fourccs:  This has the obvious disadvantage that
>> we'd either need to keep these internal to GStreamer, or make them
>> well-known enough for other people to use them.  Integrating with
>> existing elements (and libgstvideo) is straightforward.  Adding
>> new layouts (side-by-side, top-bottom, memory consecutive, etc.)
>> is simple, although adds *lots* more fourccs.
>>  
>>    
> This means that we would have something like :
> - video/x-raw-yuv-stereo-side-by-side
> - video/x-raw-yuv-stereo-top-bottom
> - video/x-raw-yuv-stereo-row-interleaved
>  

Let's see what David replies. I think he rather meant

video/x-raw-yuv-stereo, format={ "I420", "UYVY", ... }, layout={ "side-by-side", "over-under", ... }, ...

> And the same thing for different layouts like yuy2 and uyvy?  Planar
> layouts like I420 might be problematic for row interleaving though,
> because the u and v values spread to 2 lines...
>
> So these caps would describe 3D streams that carry the 2 images.  But
> what about streams that actually are combinations of the 2 images, like
> red/cyan streams?  I guess since these would be displayed on normal
> devices, they could just be normal streams, i.e. a
> video/x-raw-yuv-stereo-side-by-side stream could be converted to
> red/cyan stream with caps video/x-raw-yuv.
>  
Prerendered red/cyan is not really detectable, and normally we would just
render it as it is. If one wants to convert red/cyan into grayscale
over/under, one can use a hand-crafted pipeline. Also, the red/cyan output
from the 3D muxer would not show any sign of being 3D in its caps anymore.

Stefan

> I think this caps approach sounds like a good, simple approach that
> would carry the information we need.  I'm not sure how I would deal with
> interlaced frames yet, I'll have to look into that.
>
> Thanks for your reply,
>
> Martin
>



Re: Video 3D support

Clark, Rob
On 05/25/2010 08:42 AM, Stefan Kost wrote:

> On 24.05.2010 01:52, Martin Bisson wrote:
>    
>> David Schleef wrote:
>>
>>      
>>> I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
>>> or some such, instead of using video/x-raw-yuv,stereo=true.  Using
>>> video/x-raw-yuv leads to endless compatibility problems: elements
>>> that currently handle video/x-raw-yuv would silently do the wrong
>>> thing with stereo video.  Using x-raw-yuv would mean width/height
>>> in the caps would be double the *actual* width/height of the mono
>>> video, which is hacky.  Also, converting from stereo to mono in
>>> many cases would require copying.
>>>
>>> Defining new fourccs:  This has the obvious disadvantage that
>>> we'd either need to keep these internal to GStreamer, or make them
>>> well-known enough for other people to use them.  Integrating with
>>> existing elements (and libgstvideo) is straightforward.  Adding
>>> new layouts (side-by-side, top-bottom, memory consecutive, etc.)
>>> is simple, although adds *lots* more fourccs.
>>>
>>>
>>>        
>> This means that we would have something like :
>> - video/x-raw-yuv-stereo-side-by-side
>> - video/x-raw-yuv-stereo-top-bottom
>> - video/x-raw-yuv-stereo-row-interleaved
>>
>>      
> Let's see what David replies. I think he rather meant
>
> video/x-raw-yuv-stereo, format={ "I420", "UYVY", ... }, layout={ "side-by-side", "over-under", ... }, ...
>    



One suggestion: if we do introduce new caps mimetype values, can we
tackle rowstride at the same time?  (And maybe better support for normal
interlaced video?  And anything else that someone sees as missing in the
current caps?)  I guess rowstride would simplify handling at least of the
side-by-side layout.

I've heard the argument against introducing new mimetypes because it
would slow down caps negotiation.  Well, I'm not usually dealing with
pipelines with hundreds of elements, so I've not been too concerned with
it.  But I guess if there are enough things that could be added as
required fields in one go to justify video/x-raw-yuv-full (or something
like that), then we could deprecate video/x-raw-yuv and that would solve
the performance issue.
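To illustrate the rowstride point: with an explicit stride in the caps, each eye of a side-by-side frame becomes addressable as a sub-image (an offset plus a stride wider than the image) instead of needing a repack. A toy sketch, plain Python on a single 1-byte-per-pixel plane:

```python
def subview_rows(frame, frame_stride, x_off, width, height):
    # Address a sub-image purely via (offset, stride) bookkeeping --
    # the kind of view an explicit rowstride field would describe.
    return [frame[r * frame_stride + x_off : r * frame_stride + x_off + width]
            for r in range(height)]

# A 4x2 side-by-side frame: left eye is 'L', right eye is 'R'.
frame = b"LLRR" * 2                      # stride = 4; each eye is 2 px wide
left = subview_rows(frame, 4, 0, 2, 2)   # [b"LL", b"LL"]
right = subview_rows(frame, 4, 2, 2, 2)  # [b"RR", b"RR"]
```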

BR,
-R


>    
>> And the same thing for different layouts like yuy2 and uyvy?  Planar
>> layouts like I420 might be problematic for row interleaving though,
>> because the u and v values spread to 2 lines...
>>
>> So these caps would describe 3D streams that carry the 2 images.  But
>> what about streams that actually are combinations of the 2 images, like
>> red/cyan streams?  I guess since these would be displayed on normal
>> devices, they could just be normal streams, i.e. a
>> video/x-raw-yuv-stereo-side-by-side stream could be converted to
>> red/cyan stream with caps video/x-raw-yuv.
>>
>>      
> Prerendered red/cyan is not really detectable, and normally we would just
> render it as it is. If one wants to convert red/cyan into grayscale
> over/under, one can use a hand-crafted pipeline. Also, the red/cyan output
> from the 3D muxer would not show any sign of being 3D in its caps anymore.
>
> Stefan
>
>    
>> I think this caps approach sounds like a good, simple approach that
>> would carry the information we need.  I'm not sure how I would deal with
>> interlaced frames yet, I'll have to look into that.
>>
>> Thanks for your reply,
>>
>> Martin
>



Re: Video 3D support

David Schleef-2
In reply to this post by Stefan Sauer
On Tue, May 25, 2010 at 04:29:39PM +0300, Stefan Kost wrote:
> > I think that overall, using new fourccs would involve writing
> > less code and be less prone to bugs.  It is my preference.
> >  
> Honestly I don't like it so much, as it scales badly :/ To be sure: you
> mean that instead of
> video/x-raw-yuv, format="I420" we do video/x-raw-yuv,
> format="S420", layout="over/under" (yeah, crap), or
> video/x-raw-yuv-stereo, format="I420", layout="over/under" ?

I meant "video/x-raw-yuv,format=S420,width=1280,height=720" as the
native way for GStreamer to handle stereo video.  (Funny that you
used S420, as that was exactly the method for mangling fourccs that
I was thinking of: I420 -> S420, UYVY -> SYVY, etc.)
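The fourcc mangling could be expressed mechanically as below; note that only the I420 -> S420 and UYVY -> SYVY pairs appear in this thread, and the general "replace the first character with S" rule is an extrapolation from them, not an established convention:

```python
def stereo_fourcc(fourcc):
    # Hypothetical rule: the stereo variant of a known mono fourcc is
    # formed by replacing its first character with "S".
    if len(fourcc) != 4:
        raise ValueError("a fourcc is exactly four characters")
    return "S" + fourcc[1:]
```

With this rule, stereo_fourcc("I420") gives "S420" and stereo_fourcc("UYVY") gives "SYVY", matching the two pairs named above.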

And no "layout" property.  We already have a fourcc to indicate layout
(as well as other stuff, under which rug we now also sweep stereo/mono).

Side-by-side and top-bottom are a misunderstanding of what "native"
means.  These are methods of packing two pictures into a single
picture for the purposes of shoving it through software that only
understands one picture.  We want a system that understands *two*
pictures.

As a data point, H.264 handles stereo by doubling the number of
pictures in the stream and ordering them left/right/left/right.  The
closest match in a GStreamer API would be to use buffer flags, but
that's gross because a) we don't have any buffer flags available
unless we steal them from miniobject, b) we still would need a
field (stereo=true) in the caps, which would cause compatibility
issues, c) some existing elements would work fine (videoscale),
others would fail horribly (videorate).

The second closest match is what I recommended: format=(fourcc)S420
(as above), indicating two I420 pictures consecutive in memory.  A
stereo H.264 decoder can be modified to decode to these buffers
easily.
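The "two I420 pictures consecutive in memory" layout also makes extraction cheap; a sketch (plain Python, with memoryview slices standing in for zero-copy GstBuffer sub-buffers):

```python
def split_s420(buf, width, height):
    # Split a hypothetical S420 buffer into its left and right I420
    # pictures.  I420 is 1.5 bytes per pixel.
    mono = width * height * 3 // 2
    if len(buf) != 2 * mono:
        raise ValueError("unexpected buffer size for S420")
    view = memoryview(buf)
    return view[:mono], view[mono:]   # zero-copy slices

left, right = split_s420(bytes(2 * 1382400), 1280, 720)  # a 1280x720 pair
```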

On the display side, I only really have experience with X output
to shutter stereo goggles using OpenGL: you upload separate pictures
for right and left, and the driver flips between them.  In this
case, the packing in memory is only slightly important -- memory
consecutive order would be easier to code, but the graphics engine
could easily be programmed to handle top/bottom or side-by-side.

HDMI 1.4, curiously, has support for half a dozen stereo layouts.
This is a design we should strive to avoid.

On the question of "How do I handle side-by-side video":  Use an
element to convert to the native format.  Let's call it
'videotostereo'.  Thus, if you have video like on this page:
http://www.stereomaker.net/sample/index.html, you would use
something like:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    stereoglimagesink

Assuming you don't have stereo output, but want to watch one of
the mono channels:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    ffmpegcolorspace ! xvimagesink

Converting the above video to red/cyan anaglyph:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    videofromstereo method=red-cyan-anaglyph !  xvimagesink



dave...



Re: Video 3D support

Martin Bisson
In reply to this post by Stefan Sauer
Stefan Kost wrote:
On 24.05.2010 01:52, Martin Bisson wrote:
  
David Schleef wrote:
  
    
I recommend using a new caps type, perhaps video/x-raw-yuv-stereo
or some such, instead of using video/x-raw-yuv,stereo=true.  Using
video/x-raw-yuv leads to endless compatibility problems: elements
that currently handle video/x-raw-yuv would silently do the wrong
thing with stereo video.  Using x-raw-yuv would mean width/height
in the caps would be double the *actual* width/height of the mono
video, which is hacky.  Also, converting from stereo to mono in
many cases would require copying.

Defining new fourccs:  This has the obvious disadvantage that
we'd either need to keep these internal to GStreamer, or make them
well-known enough for other people to use them.  Integrating with
existing elements (and libgstvideo) is straightforward.  Adding
new layouts (side-by-side, top-bottom, memory consecutive, etc.)
is simple, although adds *lots* more fourccs.
  
    
      
This means that we would have something like :
- video/x-raw-yuv-stereo-side-by-side
- video/x-raw-yuv-stereo-top-bottom
- video/x-raw-yuv-stereo-row-interleaved
  
    

Let's see what David replies. I think he rather meant

video/x-raw-yuv-stereo, format={ "I420", "UYVY", ... }, layout={ "side-by-side", "over-under", ... }, ...
  
You're right, this makes more sense...  I've started to experiment with the 3 new caps I've mentioned, but it would be simpler/better to use that layout.  It would still avoid the compatibility problems with elements that currently handle video/x-raw-yuv.

From what I see now, it would be either that approach or the new fourcc one; I'll reply to David's e-mail right after this one and we'll see.

And the same thing for different layouts like yuy2 and uyvy?  Planar 
layouts like I420 might be problematic for row interleaving though, 
because the u and v values spread to 2 lines...

So these caps would describe 3D streams that carry the 2 images.  But 
what about streams that actually are combinations of the 2 images, like 
red/cyan streams?  I guess since these would be displayed on normal 
devices, they could just be normal streams, i.e. a 
video/x-raw-yuv-stereo-side-by-side stream could be converted to 
red/cyan stream with caps video/x-raw-yuv.
  
    
Prerendered red/cyan is not really detectable, and normally we would just
render it as it is. If one wants to convert red/cyan into grayscale
over/under, one can use a hand-crafted pipeline. Also, the red/cyan output
from the 3D muxer would not show any sign of being 3D in its caps anymore.
  
Great.  That's how I saw that.

Martin

Re: Video 3D support

Donny Viszneki
On Tue, May 25, 2010 at 8:39 PM, Martin Bisson <[hidden email]> wrote:
> You're right, this makes more sense...  I've started to experiment with the
> 3 new caps I've mentioned, but it would be simpler/better to use that
> layout.  It would still avoid the compatibility problems with elements that
> currently handle video/x-raw-yuv.

You still may need an intervening element to deinterlace the stereo
video stream for existing elements that handle video/x-raw-yuv.
Interleaved (including side-by-side and top-bottom) video complicates
the identification of adjacent pixels. Not all elements can simply
pass off interleaved stereo video as non-stereo video. A naive blur
filter applied to side-by-side and top-bottom interleaved stereo video
would result in blurring together each channel's adjacent edges.
Row-by-row interleaved video would be destroyed!
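The hazard can be shown with a toy example (plain Python, single-component pixels): a naive vertical blur applied to row-interleaved stereo averages each row with neighbours that belong to the other eye:

```python
def vertical_blur_rows(rows):
    # Average each row with the rows above and below it (clamped at
    # the edges) -- harmless on mono video.
    out = []
    for i, row in enumerate(rows):
        above = rows[max(i - 1, 0)]
        below = rows[min(i + 1, len(rows) - 1)]
        out.append([(a + b + c) // 3 for a, b, c in zip(above, row, below)])
    return out

# Row-interleaved stereo: even rows left eye (0), odd rows right eye (90).
interleaved = [[0, 0], [90, 90]] * 2
blurred = vertical_blur_rows(interleaved)
# No blurred row keeps its original eye's value: both channels are mixed.
```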

--
http://codebad.com/


Re: Video 3D support

Martin Bisson
In reply to this post by David Schleef-2
David Schleef wrote:
On Tue, May 25, 2010 at 04:29:39PM +0300, Stefan Kost wrote:
  
I think that overall, using new fourccs would involve writing
less code and be less prone to bugs.  It is my preference.
  
      
Honestly I don't like it so much, as it scales badly :/ To be sure: you
mean that instead of
video/x-raw-yuv, format="I420" we do video/x-raw-yuv,
format="S420", layout="over/under" (yeah, crap), or
video/x-raw-yuv-stereo, format="I420", layout="over/under" ?
    

I meant "video/x-raw-yuv,format=S420,width=1280,height=720" for the
native way that GStreamer handled stereo video.  (Funny that you
used S420, as that was exactly the method for mangling fourcc's that
I was thinking: I420 -> S420, UYVY -> SYVY, etc.)
  
What about RGB stereo formats?
And no "layout" property.  We already have a fourcc to indicate layout
(as well as other stuff, under which rug we now also sweep stereo/mono).

Side-by-side and top-bottom are a misunderstanding of what "native"
means.  These are methods of packing two pictures into a single
picture for the purposes of shoving it through software that only
understands one picture.  We want a system that understands *two*
pictures.
  
I wouldn't say that "these are methods of packing two pictures into a single picture for the purposes of shoving it through software that only understands one picture."  I think it's more about how to organise the memory used to represent the combined picture, like planar RGB vs. packed RGB, or YV16 vs. YVYU and YUY2.  I agree that side-by-side and row-interleaved layouts end up being the same memory layout, so there should not be a distinction between the two.  There would only be a distinction, as you said, when you shove this combined image through software that only understands one picture.  But top-bottom (or memory-consecutive, or whatever name we choose) and side-by-side (or left-right, or row-interleaved, or ...) are as different, in my opinion, as packed vs. planar layouts.  Does that make sense?
As a data point, H.264 handles stereo by doubling the number of
pictures in the stream and ordering them left/right/left/right.  The
closest match in a GStreamer API would be to use buffer flags, but
that's gross because a) we don't have any buffer flags available
unless we steal them from miniobject, b) we still would need a
field (stereo=true) in the caps, which would cause compatibility
issues, c) some existing elements would work fine (videoscale),
others would fail horribly (videorate).

The second closest match is what I recommended: format=(fourcc)S420
(as above), indicating two I420 pictures consecutive in memory.  A
stereo H.264 decoder can be modified to decode to these buffers
easily.
  
Again, this would be viable, but it would involve choosing the layout, wouldn't it?
On the display side, I only really have experience with X output
to shutter stereo goggles using OpenGL: you upload separate pictures
for right and left, and the driver flips between them.  In this
case, the packing in memory is only slightly important -- memory
consecutive order would be easier to code, but the graphics engine
could easily be programmed to handle top/bottom or side-by-side.

HDMI 1.4, curiously, has support for half a dozen stereo layouts.
This is a design we should strive to avoid.
  
Wouldn't we want to support what is used elsewhere to improve compatibility (probably at the cost of greater complexity, I agree)?
On the question of "How do I handle side-by-side video":  Use an
element to convert to the native format.  Let's call it
'videotostereo'.  Thus, if you have video like on this page:
http://www.stereomaker.net/sample/index.html, you would use
something like:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    stereoglimagesink
  
Ok, from what I understand, this means that there is a file containing a normal video that is actually composed of 2 images side-by-side, and the videotostereo plugin is informed, through the "method" property, that the incoming video has this layout.  Since we chose to put the images in the stereo stream in a memory-consecutive layout, videotostereo would "reorganise" the incoming buffer into a top-bottom outgoing buffer.  Then this buffer, having stereo caps (S420 for instance), would be sent to stereoglimagesink, which would do whatever it needs.  Is that right?
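As a sanity check on that reading, the repacking a hypothetical videotostereo element would do for method=side-by-side can be sketched in a few lines (plain Python on a single 1-byte-per-pixel plane; a real element would do this per Y/U/V plane and respect the rowstride):

```python
def side_by_side_to_consecutive(frame, stride, width, height):
    # Gather the left-eye columns of every row, then the right-eye
    # columns, producing the memory-consecutive (top-bottom) layout.
    left = b"".join(frame[r * stride : r * stride + width]
                    for r in range(height))
    right = b"".join(frame[r * stride + width : (r + 1) * stride]
                     for r in range(height))
    return left + right

# A 4x2 side-by-side frame becomes the left picture then the right one:
packed = side_by_side_to_consecutive(b"LLRR" * 2, 4, 2, 2)  # b"LLLLRRRR"
```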
Assuming you don't have stereo output, but want to watch one of
the mono channels:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    ffmpegcolorspace ! xvimagesink
  
The stream coming out of videotostereo would have stereo caps, so this means that ffmpegcolorspace would have to be aware of the stereo caps.  Is that what you meant?  The approach I was going to take was more to have another plugin, something like stereotovideo, that would take a stereo stream and output it as a normal video, with whatever output layout (left-right, right-left, top-bottom, bottom-top, etc.)
Converting the above video to red/cyan anaglyph:

  filesrc ! decodebin ! videotostereo method=side-by-side !
    videofromstereo method=red-cyan-anaglyph !  xvimagesink

  
Yeah, that's exactly what I meant (stereotovideo = videofromstereo), except that the method for videofromstereo could also be left, right, left-right, right-left, top-bottom, bottom-top (in addition to all kinds of anaglyph), instead of leaving that task to ffmpeg.

Another question that I had is about more than 2 buffers.  Is it too soon to talk about this?  Should we focus on stereo for now?  What about n images in a stream?  Because if I'm right, the 2 suggestions I've received for new caps are:

1) "video/x-raw-yuv-stereo , layout = { memory-consecutive , interleaved }, ..."

which is the one I'm most comfortable with, or

2) "video/x-raw-yuv , format = (fourcc) S420 , ..."

But what about something like :

3) "video/x-raw-yuv-multiple , channels = (int) [ 1 , MAX ] , ..."

or something like that...  Maybe with the "-multiple" to avoid problems with existing plugins...  I haven't thought a lot about this one, but I just wanted to ask you what you think about the possibility of using more than one (or two) images packed together.

Thanks,

Martin


Re: Video 3D support

Martin Bisson
In reply to this post by Donny Viszneki
Donny Viszneki wrote:
On Tue, May 25, 2010 at 8:39 PM, Martin Bisson [hidden email] wrote:
  
You're right, this makes more sense...  I've started to experiment with the
3 new caps I've mentioned, but it would be simpler/better to use that
layout.  It would still avoid the compatibility problems with elements that
currently handle video/x-raw-yuv.
    

You still may need an intervening element to deinterlace the stereo
video stream for existing elements that handle video/x-raw-yuv.
Interleaved (including side-by-side and top-bottom) video complicates
the identification of adjacent pixels. Not all elements can simply
pass off interleaved stereo video as non-stereo video. A naive blur
filter applied to side-by-side and top-bottom interleaved stereo video
would result in blurring together each channel's adjacent edges.
Row-by-row interleaved video would be destroyed!

  
I agree.  See David's e-mail or my reply to his e-mail that I just sent.  We would need an element to make "regular" video out of a stereo video.
