Compositing and GStreamer


Compositing and GStreamer

Timothy Braun
Hello Everyone,
  I'm fairly new to GStreamer, so any input you can provide is much appreciated.  I'm working on a project where we need to generate a 2-minute video that is a composite of 24 input videos in total.  The output video will have 4 different 30-second sections, each containing a 3x2 grid of the smaller input videos.  The input videos are all natively 240x240, with the goal of a final output frame size of 720x480.

  Using gst-launch, I've been able to construct a sample 30 second clip using a combination of inputs, videoboxes and a videomixer.  Here is what I've come up with so far:

videomixer name=mix ! ffmpegcolorspace ! ffenc_mpeg1video ! ffmux_mpeg name=mux ! queue ! filesink location=output.mpg
adder name=adder ! audioconvert ! ffenc_mp2 ! mux.
filesrc location=loop1.mp4 ! decodebin name=decode1
decode1. ! videobox border-alpha=0 top=-240 left=0 ! queue ! mix.
decode1. ! adder.
filesrc location=loop2.mp4 ! decodebin name=decode2
decode2. ! videobox border-alpha=0 top=-240 left=-240 ! queue ! mix.
decode2. ! adder.
filesrc location=loop3.mp4 ! decodebin name=decode3
decode3. ! videobox border-alpha=0 top=-240 left=-480 ! queue ! mix.
decode3. ! adder.
filesrc location=loop4.mp4 ! decodebin name=decode4
decode4. ! videobox border-alpha=0 top=0 left=0 ! queue ! mix.
decode4. ! adder.
filesrc location=loop5.mp4 ! decodebin name=decode5
decode5. ! videobox border-alpha=0 top=0 left=-240 ! queue ! mix.
decode5. ! adder.
filesrc location=loop6.mp4 ! decodebin name=decode6
decode6. ! videobox border-alpha=0 top=0 left=-480 ! queue ! mix.
decode6. ! adder.

  Now I need to do this 4 times, each time with a potentially different video in each box.  I've started looking into the C interfaces, as there are other pieces of the puzzle that need to be tied into this, and I am trying to determine the best way to tackle it.  I originally looked at Gnonlin, but the documentation is lacking with regard to how gnloperations work.  I also recently stumbled upon the GES library by Edward Hervey; it looks promising as well, but I haven't been able to spend much time on it.

  If I go the Gnonlin route, I believe I would need 6 compositions, one for each box.  At the 30 second marker, I would swap the filesource to a new one using dynamic pads and listening for messages on the pipeline bus.  Am I far off on this?  Any suggestions?

  As for the GES library, it looks very promising and powerful from the little I read on it.  Would this be the smarter route to take?  If so, does anyone have any suggestions for how the pipeline would be structured?

  Thank you in advance for your time on this and I truly appreciate any information you are willing to share with me.

  Happy Thanksgiving,
  Tim

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/gstreamer-devel

Re: Compositing and GStreamer

Kapil Agrawal
Just a quick clue that might help: try using multifilesrc?

On Thu, Nov 25, 2010 at 9:47 PM, Timothy Braun <[hidden email]> wrote:
> [original message quoted in full above; trimmed]
--
www.mediamagictechnologies.com (Gstreamer, ffmpeg, Red5, Streaming)
twitter handle: @gst_kaps
http://www.linkedin.com/in/kapilagrawal


Re: Compositing and GStreamer

Timothy Braun
Kapil,
  Thanks for the suggestion, but with multifilesrc the files would have to be static, with incrementally numbered names.  A single box in the 3x2 grid may contain the same file multiple times, so I'm afraid it won't be the best solution.

  I guess, ultimately, there are multiple ways to attack this from what I've been able to find.  Here are the two that I've been looking at; I'm just not sure which is the better solution at this point:
  • A single gnonlin composition with 4 gnlsources, similar in setup to the gst-launch text I have below.
  • 6 gnonlin compositions, each feeding to a single videomixer which combines them into the final frame.
    • This is the path I'm currently investigating.  I have a test written in C, but I'm having some difficulties with pad linkage, as I still don't have a complete understanding of when certain things will exist and how to get them.
    • Here's what's currently happening:
      • Create a new pipeline
      • Create a videomixer
      • Create 6 gnonlin compositions each with a pad-added signal callback to connect gnlcomposition pad to videomixer.
      • ... (this is where it's going wrong)
    • In the pad-added callback I have:
      • static void onPad(GstElement *comp, GstPad *pad, GstElement *sink) {
            GstPad *v = gst_element_get_pad(sink, "sink");
            gst_pad_link(pad, v);
            gst_object_unref(v);
        }
      • gst_element_get_pad is not returning a pad from the video mixer (sink) which leads me to believe that I'm either not asking in the right manner or the pad doesn't exist.  (I'm aware that gst_element_get_pad is deprecated, I'm just looking to test at the moment)
      • I noticed in a unit test in one of the repositories that the videomixer was attached as a gnloperation.  Is this the better path to take?

  This all leads me to a couple more questions as well:
  • A videomixer pad has xpos and ypos properties.  This would let me shift the video around without needing a videobox, which I believe may be more efficient?
  • If I use the xpos and ypos properties, is the videomixer smart enough to change the frame size appropriately, or will it simply crop the frame to the size of the largest input frame?
    • If so, would it be better to add a videobox to do the adjustments for me, or feed in a solid color background of the required output size?

  Thanks again for the time.  I know there's a lot of questions above, but any help of any kind is greatly appreciated.

  All the best,
  Tim


On Fri, Nov 26, 2010 at 1:04 AM, Kapil Agrawal <[hidden email]> wrote:
Just a quick clue that might help, try using multifilesrc ?


Re: Compositing and GStreamer

Edward Hervey
Administrator
Hi,

On Sat, 2010-11-27 at 13:50 -0500, Timothy Braun wrote:

> Kapil,
>   Thanks for the suggestion, but with multifilesrc I would have to
> have the files static with incrementing named files.  A single box in
> the 3x2 grid may contain the same file multiple times so I'm afraid it
> won't be the best solution.
>
>   I guess, ultimately, there's multiple ways to attack this one from
> what I've been able to find.  Here are the two that I've been looking
> at, I'm just not sure which is the better solution at this point:
>       * A single gnonlin composition with a 4 gnlsources similar in
>         setup as the gst-launch text I have below.

  Using one composition would actually be the 'proper' way.

>       * 6 gnonlin compositions, each feeding to a single videomixer
>         which combines them into the final frame.
>               * This path I'm currently investigating.  I have a test
>                 written in C, but I'm having some difficulties with
>                 pad linkage as I still don't have a complete
>                 understanding of when certain things will exist and
>                 how to get them.
>               * Here's currently whats happening:
>                       * Create a new pipeline
>                       * Create a videomixer
>                       * Create 6 gnonlin compositions each with a
>                         pad-added signal callback to connect
>                         gnlcomposition pad to videomixer.
>                       * ... (this is were it's going wrong)
>               * In the pad-added callback I have:
>                       * static void onPad(GstElement *comp, GstPad
>                         *pad, GstElement *sink) {
>                             GstPad *v = gst_element_get_pad(sink,
>                         "sink");
>                             gst_pad_link(pad, v);
>                             gst_object_unref(v);
>                         }
>                       * gst_element_get_pad is not returning a pad
>                         from the video mixer (sink) which leads me to
>                         believe that I'm either not asking in the
>                         right manner or the pad doesn't exist.  (I'm
>                         aware that gst_element_get_pad is deprecated,
>                         I'm just looking to test at the moment)
>                       * I noticed in one of the repositories under a
>                         unit test, the videomixer was attached as a
>                         gnloperation?  Is this the better path to
>                         take?
>
>   This all leads me to a couple more questions as well:
>       * A video mixer pad has xpos and ypos properties.  This would
>         let me shift the video around without needing a video box
>         which I believe may be more efficient?

  Yes, it will be more efficient.

>       * If I use the xpos and ypos properties, is the video mixer
>         smart enough to change the frame size appropriately or will it
>         simply crop the frame to the size of the largest input frame?
>               * If so, would it be better to add a videobox to do the
>                 adjustments for me, or feed in a solid color
>                 background of the required output size?

  No, it won't change the size, but what you could do is mix at the
original sizes with the original offsets and then downconvert the video
later.

  Example for one 3x2 segment:

  Create a gnloperation with a videomixer in it with a gnl priority of
0.
  Create a gnlfilesource for each clip with increasing priorities (1->6)
going from left to right and then top to bottom:
     1  2  3
     4  5  6

  Connect to the gnloperation 'input-priority-changed' signal. When your
callback is called, you will know which priority is being connected to
which gnloperation ghostpad. You can get the videomixer sink pad by
using the gst_ghost_pad_get_target() method and then setting the proper
xpos/ypos property on that pad based on the priority of the feed being
provided.

  Set 'video/x-raw-yuv;video/x-raw-rgb' as the caps property on all your
sources.

  Set duration and media-duration of *all* gnlobjects to the same
duration.
  If you want to add another segment of 3x2 clips, you'll need to re-add
all those 7 objects with a modified 'start' property.

  First connect your composition to an imagesink to make sure the result
is what you want. When it is, insert a videoscale element followed by
a capsfilter with your target resolution.
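In gst-launch terms (GStreamer 0.10 element names; `comp.` stands in for the composition's source pad, and the caps assume the 720x480 target from the original post), that last step might look like:

    comp. ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=720,height=480 ! ffenc_mpeg1video ! ffmux_mpeg ! filesink location=output.mpg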

  Hope this helps.


Re: Compositing and GStreamer

Timothy Braun
Hi Edward,
  Thanks for the quick reply.  It has been most informative.  A couple of questions, if I may.  For the videoscale/capsfilter part, what does this do and how should it look in the pipeline?  The input videos are all 240x240 px with a goal output frame size of 720x480 px.

  For a very raw test, I am using this to build my pipeline:

    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    GstCaps *caps = gst_caps_from_string("video/x-raw-yuv;video/x-raw-rgb");
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
    g_object_set(G_OBJECT(comp), "caps", caps, NULL);
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", "source1");
    g_object_set(G_OBJECT(src1), "location", "loop1.mp4",
                 "start", (guint64) 0,  /* cast: 64-bit property via varargs */
                 "duration", 30 * GST_SECOND,
                 "media-start", (guint64) 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 1,
                 "caps", caps,
                 NULL);
   
    GstElement *src2 = gst_element_factory_make("gnlfilesource", "source2");
    g_object_set(G_OBJECT(src2), "location", "loop2.mp4",
                 "start", (guint64) 0,  /* cast: 64-bit property via varargs */
                 "duration", 30 * GST_SECOND,
                 "media-start", (guint64) 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 2,
                 "caps", caps,
                 NULL);
   
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper), "caps", caps, "expandable", TRUE, NULL);
   
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    // listen for the input priority change signal
    // to update the video mixer pad
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *identity = gst_element_factory_make("identity", "ident");
    g_object_set(identity, "single-segment", TRUE, NULL);
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);
   
    gst_bin_add_many(GST_BIN(pipeline), comp, color, identity, enc, mux, queue, sink, NULL);
   
    /*
    g_object_connect (comp, "signal::pad-added",
                      onPad, mixer, NULL);
    */
   
    gst_element_link_many(mixer, queue, color, identity, enc, mux, sink, NULL);


It seems to link up fine, but it never starts streaming; the pipeline gets stuck in the PAUSED state.  I've tried adding queues all around, but no luck.  Here is the debug output:

** Message: Creating run loop...
** Message: Building pipeline...
** Message: Attaching to bus...
** Message: Setting state to PLAYING...
0:00:00.141504000 91821    0x100609d30 WARN               gnlsource gnlsource.c:545:gnl_source_change_state:<source2> Couldn't find a valid source pad
0:00:00.162296000 91821    0x100609d30 WARN          GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.191513000 91821    0x100609d30 WARN               gnlsource gnlsource.c:545:gnl_source_change_state:<source1> Couldn't find a valid source pad
0:00:00.199956000 91821    0x100686b10 WARN                 qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux0> unknown version 00000000
0:00:00.200693000 91821    0x100609d30 WARN          GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.210835000 91821    0x10064f030 WARN                 qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux1> unknown version 00000000
0:00:00.244706000 91821    0x101879f60 WARN               gnlsource gnlsource.c:221:element_pad_added_cb:<source2> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
0:00:00.244852000 91821    0x1006501f0 WARN               gnlsource gnlsource.c:221:element_pad_added_cb:<source1> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_1(videomixer0::sink_1) - 2
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_0(videomixer0::sink_0) - 1

I've been battling with this for a while now and can't seem to make any headway.  Any input is, again, much appreciated.

Best,
Tim

On Sun, Nov 28, 2010 at 3:58 AM, Edward Hervey <[hidden email]> wrote:
Hi,

On Sat, 2010-11-27 at 13:50 -0500, Timothy Braun wrote:
> Kapil,
>   Thanks for the suggestion, but with multifilesrc I would have to
> have the files static with incrementing named files.  A single box in
> the 3x2 grid may contain the same file multiple times so I'm afraid it
> won't be the best solution.
>
>   I guess, ultimately, there's multiple ways to attack this one from
> what I've been able to find.  Here are the two that I've been looking
> at, I'm just not sure which is the better solution at this point:
>       * A single gnonlin composition with a 4 gnlsources similar in
>         setup as the gst-launch text I have below.

 Using one composition would actually be the 'proper' way.

>       * 6 gnonlin compositions, each feeding to a single videomixer
>         which combines them into the final frame.
>               * This path I'm currently investigating.  I have a test
>                 written in C, but I'm having some difficulties with
>                 pad linkage as I still don't have a complete
>                 understanding of when certain things will exist and
>                 how to get them.
>               * Here's currently whats happening:
>                       * Create a new pipeline
>                       * Create a videomixer
>                       * Create 6 gnonlin compositions each with a
>                         pad-added signal callback to connect
>                         gnlcomposition pad to videomixer.
>                       * ... (this is where it's going wrong)
>               * In the pad-added callback I have:
>                       * static void onPad(GstElement *comp, GstPad
>                         *pad, GstElement *sink) {
>                             GstPad *v = gst_element_get_pad(sink,
>                         "sink");
>                             gst_pad_link(pad, v);
>                             gst_object_unref(v);
>                         }
>                       * gst_element_get_pad is not returning a pad
>                         from the video mixer (sink) which leads me to
>                         believe that I'm either not asking in the
>                         right manner or the pad doesn't exist.  (I'm
>                         aware that gst_element_get_pad is deprecated,
>                         I'm just looking to test at the moment)
>                       * I noticed in one of the repositories under a
>                         unit test, the videomixer was attached as a
>                         gnloperation?  Is this the better path to
>                         take?
>
>   This all leads me to a couple more questions as well:
>       * A video mixer pad has xpos and ypos properties.  This would
>         let me shift the video around without needing a video box
>         which I believe may be more efficient?

 Yes, it will be more efficient.

>       * If I use the xpos and ypos properties, is the video mixer
>         smart enough to change the frame size appropriately or will it
>         simply crop the frame to the size of the largest input frame?
>               * If so, would it be better to add a videobox to do the
>                 adjustments for me, or feed in a solid color
>                 background of the required output size?

 No, it won't change the size, but what you could do is mix the
original sizes with original offsets and then downconvert the video
later.

 Example for one 3x2 segment:

 Create a gnloperation with a videomixer in it with a gnl priority of
0.
 Create a gnlfilesource for each clip with increasing priorities (1->6)
going from left-right and then top to bottom:
    1  2  3
    4  5  6

 Connect to the gnloperation 'input-priority-changed' signal. When your
callback is called, you will know which priority is being connected to
which gnloperation ghostpad. You can get the videomixer sink pad by
using the gst_ghost_pad_get_target() method and then setting the proper
xpos/ypos property on that pad based on the priority of the feed being
provided.
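The priority-to-position mapping described here boils down to simple grid arithmetic.  A minimal sketch, assuming 240x240 tiles in a 3-wide grid (the function name grid_position is mine, not from gnonlin); inside the input-priority-changed callback you would fetch the real mixer pad with gst_ghost_pad_get_target() and set the computed values as its xpos/ypos properties:

```c
#include <assert.h>

/* Map a gnl priority (1..6, left-to-right then top-to-bottom) to the
 * xpos/ypos of its 240x240 tile in a 3x2 grid.  Tile size and column
 * count are assumptions based on the 720x480 target in this thread. */
#define TILE_W 240
#define TILE_H 240
#define GRID_COLS 3

static void
grid_position (unsigned int priority, int *xpos, int *ypos)
{
  unsigned int idx = priority - 1;      /* priorities start at 1 */

  *xpos = (int) (idx % GRID_COLS) * TILE_W;
  *ypos = (int) (idx / GRID_COLS) * TILE_H;

  /* In the real callback you would then do something like:
   *   GstPad *mixpad = gst_ghost_pad_get_target (GST_GHOST_PAD (ghostpad));
   *   g_object_set (mixpad, "xpos", *xpos, "ypos", *ypos, NULL);
   *   gst_object_unref (mixpad);
   */
}
```

So priority 1 lands at (0, 0), priority 3 at (480, 0), and priority 6 at (480, 240).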

 Set 'video/x-raw-yuv;video/x-raw-rgb' as the caps property on all your
sources.

 Set duration and media-duration of *all* gnlobjects to the same
duration.
 If you want to add another segment of 3x2 clips, you'll need to re-add
all those 7 objects with a modified 'start' property.

 First connect your composition to an imagesink to make sure the result
is what you want. When it is, insert a videoscale element followed with
a capsfilter with your target resolution.
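In the gst-launch terms used earlier in the thread, the tail of the pipeline would then look something like this (a sketch; the element names are carried over from the original pipeline, and the capsfilter is written in gst-launch shorthand):

```
... ! videoscale ! video/x-raw-yuv,width=720,height=480 ! ffmpegcolorspace ! ffenc_mpeg1video ! ffmux_mpeg ! queue ! filesink location=output.mpg
```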

 Hope this helps.

>
>   Thanks again for the time.  I know there's a lot of questions above,
> but any help of any kind is greatly appreciated.
>
>   All the best,
>   Tim
>
>
> On Fri, Nov 26, 2010 at 1:04 AM, Kapil Agrawal <[hidden email]>
> wrote:
>         Just a quick clue that might help, try using multifilesrc ?
>
>
>         On Thu, Nov 25, 2010 at 9:47 PM, Timothy Braun
>         <[hidden email]> wrote:
>
>
>                 Hello Everyone,
>                   I'm fairly new to G-Streamer so any input you can
>                 provide is much appreciated.  I'm working on a project
>                 where we need to generate a 2 minute video which is a
>                 composite of a total of 24 input videos.  The output
>                 video will have 4 different 30 second sections, each
>                 containing a 3x2 grid of the smaller input videos.
>                 The input videos are all naturally at 240x240 with the
>                 goal of having a final output frame size of 720x480.
>
>                   Using gst-launch, I've been able to construct a
>                 sample 30 second clip using a combination of inputs,
>                 videoboxes and a videomixer.  Here is what I've come
>                 up with so far:
>
>                 videomixer name=mix ! ffmpegcolorspace !
>                 ffenc_mpeg1video ! ffmux_mpeg name=mux ! queue !
>                 filesink location=output.mpg
>                 adder name=adder ! audioconvert ! ffenc_mp2 ! mux.
>                 filesrc location=loop1.mp4 ! decodebin name=decode1
>                 decode1. ! videobox border-alpha=0 top=-240 left=0 !
>                 queue ! mix.
>                 decode1. ! adder.
>                 filesrc location=loop2.mp4 ! decodebin name=decode2
>                 decode2. ! videobox border-alpha=0 top=-240
>                 left=-240 ! queue ! mix.
>                 decode2. ! adder.
>                 filesrc location=loop3.mp4 ! decodebin name=decode3
>                 decode3. ! videobox border-alpha=0 top=-240
>                 left=-480 ! queue ! mix.
>                 decode3. ! adder.
>                 filesrc location=loop4.mp4 ! decodebin name=decode4
>                 decode4. ! videobox border-alpha=0 top=0 left=0 !
>                 queue ! mix.
>                 decode4. ! adder.
>                 filesrc location=loop5.mp4 ! decodebin name=decode5
>                 decode5. ! videobox border-alpha=0 top=0 left=-240 !
>                 queue ! mix.
>                 decode5. ! adder.
>                 filesrc location=loop6.mp4 ! decodebin name=decode6
>                 decode6. ! videobox border-alpha=0 top=0 left=-480 !
>                 queue ! mix.
>                 decode6. ! adder.
>
>                   Now I need to do this 4 times, each time with a
>                 potentially different video in each box.  I've started
>                 looking into C interfaces as there's other pieces of
>                 the puzzle which need to be tied into this, and I am
>                 trying to determine the best way to tackle this.  I
>                 originally was looking at Gnonlin, but the
>                 documentation is lacking in regards to how
>                 gnloperations work.  I also recently stumbled upon the
>                 GES library by Edward Hervey, this looks promising as
>                 well, but I haven't been able to spend much time on
>                 it.
>
>                   If I go the Gnonlin route, I believe I would need 6
>                 compositions, one for each box.  At the 30 second
>                 marker, I would swap the filesource to a new one using
>                 dynamic pads and listening for messages on the
>                 pipeline bus.  Am I far off on this?  Any suggestions?
>
>                   As for the GES library, it looks very promising and
>                 powerful from the little I read on it.  Would this be
>                 the smarter route to take?  If so, does anyone have
>                 any suggestions for how the pipeline would be
>                 structured?
>
>                   Thank you in advance for your time on this and I
>                 truly appreciate any information you are willing to
>                 share with me.
>
>                   Happy Thanksgiving,
>                   Tim
>
>
>                 ------------------------------------------------------------------------------
>                 Increase Visibility of Your 3D Game App & Earn a
>                 Chance To Win $500!
>                 Tap into the largest installed PC base & get more eyes
>                 on your game by
>                 optimizing for Intel(R) Graphics Technology. Get
>                 started today with the
>                 Intel(R) Software Partner Program. Five $500 cash
>                 prizes are up for grabs.
>                 http://p.sf.net/sfu/intelisp-dev2dev
>                 _______________________________________________
>                 gstreamer-devel mailing list
>                 [hidden email]
>                 https://lists.sourceforge.net/lists/listinfo/gstreamer-devel
>
>
>
>
>         --
>         www.mediamagictechnologies.com (Gstreamer, ffmpeg, Red5,
>         Streaming)
>         twitter handle: @gst_kaps
>         http://www.linkedin.com/in/kapilagrawal
>




Re: Compositing and GStreamer

Timothy Braun
You'll have to excuse my ignorance in my last email.  I had the mixer in the gnloperation, but I had the mixer's src pad attached to the final output segment rather than the composition's src pad.  I've fixed that, but I'm still running into issues.

So, currently, I have a pipeline which resembles this:


+--------------------------------+
| gnlcomposition                 |
| +----------------------------+ |
| | Box 1 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |->| queue |->| filesink (plus encoder stuff) |
| | Box 2 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |
| | Video Mixer (gnloperation) | |
| +----------------------------+ |
+--------------------------------+

  I hope that comes through ok.  Box 1, Box 2 and the video mixer all have the same duration, and the input-priority-changed callback is triggering the update to xpos and ypos on the video mixer pads.  Ultimately there will be 6 boxes per frame, but I'm trying to keep it simple at the moment.
 
  The issue I'm currently running into is with the video mixer.  With Box 1 and Box 2 both being 240x240 px, the final frame size is 240x240 px.  Is there a way to feed a blank 720x480 frame into the mixer so it outputs the proper resolution, or am I looking at this the wrong way?
  I've tried to feed in a 720x480 frame from a videotestsrc, but I'm having trouble (structurally, I believe) getting the test source to live inside a gnlsource at 720x480.

  Again, any help is greatly, greatly appreciated as I'm running short on time to get this complete.

  Best,
  Tim

On Sun, Nov 28, 2010 at 9:06 PM, Timothy Braun <[hidden email]> wrote:
Hi Edward,
  Thanks for the quick reply.  It has been most informative.  A couple of questions, if I may.  For the videoscale/capsfilter part, what is it doing and where should it sit in the pipeline?  The input videos are all 240x240 px with a goal output frame size of 720x480 px.

  For a very raw test, I am using this to build my pipeline:

    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    GstCaps *caps = gst_caps_from_string("video/x-raw-yuv;video/x-raw-rgb");
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
    g_object_set(G_OBJECT(comp), "caps", caps, NULL);
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", "source1");
    g_object_set(G_OBJECT(src1), "location", "loop1.mp4",
                 "start", 0,
                 "duration", 30 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 1,
                 "caps", caps,
                 NULL);
   
    GstElement *src2 = gst_element_factory_make("gnlfilesource", "source2");
    g_object_set(G_OBJECT(src2), "location", "loop2.mp4",
                 "start", 0,
                 "duration", 30 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 2,
                 "caps", caps,
                 NULL);
   
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper), "caps", caps, "expandable", TRUE, NULL);
   
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    // listen for the input priority change signal
    // to update the video mixer pad
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *identity = gst_element_factory_make("identity", "ident");
    g_object_set(identity, "single-segment", TRUE, NULL);
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);
   
    gst_bin_add_many(GST_BIN(pipeline), comp, color, identity, enc, mux, queue, sink, NULL);
   
    /*
    g_object_connect (comp, "signal::pad-added",
                      onPad, mixer, NULL);
    */
   
    gst_element_link_many(mixer, queue, color, identity, enc, mux, sink, NULL);
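One detail worth noting in the listing above: the gnlobject start, duration, media-start and media-duration properties are 64-bit, and g_object_set() is variadic, so a bare integer literal like 0 is not promoted to 64 bits automatically and can corrupt the argument list on some platforms.  The GST_SECOND expressions are already 64-bit; it is the bare 0 values that need an explicit cast.  A sketch of the corrected call for src1:

```
g_object_set (G_OBJECT (src1), "location", "loop1.mp4",
              "start", (guint64) 0,
              "duration", (guint64) (30 * GST_SECOND),
              "media-start", (guint64) 0,
              "media-duration", (guint64) (30 * GST_SECOND),
              "priority", 1,
              "caps", caps,
              NULL);
```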


It seems to link up fine, but it never streams: the pipeline gets stuck in the PAUSED state.  I've tried adding queues all around, but no luck.  Here is the debug output:

** Message: Creating run loop...
** Message: Building pipeline...
** Message: Attaching to bus...
** Message: Setting state to PLAYING...
0:00:00.141504000 91821    0x100609d30 WARN               gnlsource gnlsource.c:545:gnl_source_change_state:<source2> Couldn't find a valid source pad
0:00:00.162296000 91821    0x100609d30 WARN          GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.191513000 91821    0x100609d30 WARN               gnlsource gnlsource.c:545:gnl_source_change_state:<source1> Couldn't find a valid source pad
0:00:00.199956000 91821    0x100686b10 WARN                 qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux0> unknown version 00000000
0:00:00.200693000 91821    0x100609d30 WARN          GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.210835000 91821    0x10064f030 WARN                 qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux1> unknown version 00000000
0:00:00.244706000 91821    0x101879f60 WARN               gnlsource gnlsource.c:221:element_pad_added_cb:<source2> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
0:00:00.244852000 91821    0x1006501f0 WARN               gnlsource gnlsource.c:221:element_pad_added_cb:<source1> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_1(videomixer0::sink_1) - 2
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_0(videomixer0::sink_0) - 1

I've been battling with this for a while now and can't seem to make any headway.  Any input is, again, much appreciated.

Best,
Tim






Re: Compositing and GStreamer

Timothy Braun
Ok, making some headway.  I made a 720x480 input, in .mpg form, for the composition, which makes the video mixer output the proper size.  I've also dropped the mp4s in favor of MPEG-1 videos, as they seemed to be causing an issue.  Using the priority-change callback, I've been able to move the videos around in the frame.  Now onto my new issue :).

  This is my current bit of code:

    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", NULL);
    // The gnl start/media-start properties are 64-bit (GstClockTime), so the
    // varargs values must be 64-bit too -- a bare 0 is only an int and can
    // leave garbage on the stack.
    g_object_set(G_OBJECT(src1), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop1.mpg",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 1,
                 NULL);
    GstElement *src2 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src2), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop2.mpg",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 2,
                 NULL);
    GstElement *src3 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src3), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop3.mpg",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 3,
                 NULL);
    GstElement *src6 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src6), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop6.mpg",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 6,
                 NULL);
    GstElement *bg = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(bg), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/background.mpg",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 7,
                 NULL);
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper),
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 5 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 5 * GST_SECOND,
                 "priority", 0,
                 NULL);
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), src3);
    gst_bin_add(GST_BIN(comp), src6);
    gst_bin_add(GST_BIN(comp), bg);
    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);

    gst_bin_add_many(GST_BIN(pipeline), comp, color, enc, mux, queue, sink, NULL);
   
    g_object_connect (comp, "signal::pad-added",
                      onPad, color, NULL);

    gst_element_link_many(color, enc, mux, queue, sink, NULL);
   
    return pipeline;

This works and outputs a 720x480 video containing the 4 input videos in the proper places.  If I try to add the two remaining videos, for 6 videos at 240x240 plus the bg video at 720x480, the process fails while spitting out a plethora of these:

0:00:06.174861000 47454    0x1006dc460 WARN                mp3parse gstmpegaudioparse.c:1601:head_check:<mpegaudioparse4> invalid sync

  It occurs if I add either of the two missing videos.

  The other issue seems to be with sync.  The source videos for testing have an embedded timestamp in the frame image.  The initial frames are off for each of the source videos and they all end at a different time.  Any suggestions?

  Again, any input is greatly appreciated.  Even a nice "it's not possible" would be of great assistance.

  All the Best,
  Tim



On Mon, Nov 29, 2010 at 5:00 PM, Timothy Braun <[hidden email]> wrote:
You'll have to excuse my ignorance in my last email.  I had the mixer in the gnloperation, but I had the src pad of the mixer attached to the final output segment rather than the composition src pad.  I fixed that, but am still running into issues.

So, currently, I have a pipeline which resembles this:


+--------------------------------+
| gnlcomposition                 |
| +----------------------------+ |
| | Box 1 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |->| queue |->| filesink (plus encoder stuff) |
| | Box 2 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |
| | Video Mixer (gnloperation) | |
| +----------------------------+ |
+--------------------------------+

  I hope that comes through ok.  Box 1, Box 2 and the video mixer all have the same duration, and the input-priority-changed callback is triggering the update to xpos and ypos on the videomixer pads.  Ultimately there will be 6 boxes per frame, but I'm trying to keep it simple at the moment.
 
  The issue I'm currently running into is with the video mixer.  With Box 1 and Box 2 being 240x240 px, the final frame size is also 240x240 px.  Is there a way to feed a blank 720x480 frame into the mixer so it outputs the proper resolution, or am I looking at this the wrong way?
  I've tried to input a 720x480 frame from a videotestsrc, but am having issues (structurally I believe) getting my test source to exist in a gnlsource at a 720x480 resolution.
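[Editorial sketch, not the poster's code.] One way to get the full-size canvas into the composition: wrap a videotestsrc (pattern=black) plus a 720x480 capsfilter in a bin inside a plain gnlsource, ghost the bin's src pad, and give it the lowest gnl priority so it sits behind the 240x240 boxes. This is untested and assumes the gnonlin 0.10 API used elsewhere in the thread; the helper name is illustrative:

```c
#include <gst/gst.h>

/* Sketch: a black 720x480 background source for the composition.
 * gnlsource ghost-pads the src pad of the child we add to it. */
static GstElement *
make_background_source (void)
{
    GstElement *gnlsrc = gst_element_factory_make ("gnlsource", "background");
    GstElement *bin    = gst_bin_new ("bg-bin");
    GstElement *src    = gst_element_factory_make ("videotestsrc", NULL);
    GstElement *capsf  = gst_element_factory_make ("capsfilter", NULL);

    g_object_set (src, "pattern", 2 /* black */, NULL);
    g_object_set (capsf, "caps",
                  gst_caps_from_string ("video/x-raw-yuv,width=720,height=480"),
                  NULL);

    gst_bin_add_many (GST_BIN (bin), src, capsf, NULL);
    gst_element_link (src, capsf);

    /* ghost the capsfilter src pad so the gnlsource can find it */
    GstPad *pad = gst_element_get_static_pad (capsf, "src");
    gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
    gst_object_unref (pad);

    gst_bin_add (GST_BIN (gnlsrc), bin);
    g_object_set (gnlsrc,
                  "start", G_GUINT64_CONSTANT (0),
                  "duration", 30 * GST_SECOND,
                  "media-start", G_GUINT64_CONSTANT (0),
                  "media-duration", 30 * GST_SECOND,
                  "priority", 7,  /* behind the six 240x240 boxes */
                  NULL);
    return gnlsrc;
}
```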

  Again, any help is greatly, greatly appreciated as I'm running short on time to get this complete.

  Best,
  Tim


On Sun, Nov 28, 2010 at 9:06 PM, Timothy Braun <[hidden email]> wrote:
Hi Edward,
  Thanks for the quick reply.  It has been most informative.  A couple of questions, if I may.  For the videoscale/capsfilter part, what does it do and where should it sit in the pipeline?  The input videos are all 240x240 px with a goal output frame size of 720x480 px.
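[Editorial sketch, not the poster's code.] Edward's videoscale/capsfilter suggestion belongs on the output side, between the composition's output and the encoder. A rough sketch under the 0.10 element set used in this thread; the helper name and wiring are illustrative, and `comp_output` stands for whatever element already receives the composition's dynamic pad:

```c
#include <gst/gst.h>

/* Sketch: enforce (or scale to) the 720x480 target size before encoding. */
static void
add_scaled_output (GstBin *pipeline, GstElement *comp_output)
{
    GstElement *scale = gst_element_factory_make ("videoscale", NULL);
    GstElement *capsf = gst_element_factory_make ("capsfilter", NULL);
    GstElement *color = gst_element_factory_make ("ffmpegcolorspace", NULL);
    GstElement *enc   = gst_element_factory_make ("ffenc_mpeg1video", NULL);
    GstElement *mux   = gst_element_factory_make ("ffmux_mpeg", NULL);
    GstElement *sink  = gst_element_factory_make ("filesink", NULL);

    g_object_set (capsf, "caps",
                  gst_caps_from_string ("video/x-raw-yuv,width=720,height=480"),
                  NULL);
    g_object_set (sink, "location", "output.mpg", NULL);

    gst_bin_add_many (pipeline, scale, capsf, color, enc, mux, sink, NULL);
    gst_element_link_many (comp_output, scale, capsf, color, enc, mux, sink,
                           NULL);
}
```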

  For a very raw test, I am using this to build my pipeline:

    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    GstCaps *caps = gst_caps_from_string("video/x-raw-yuv;video/x-raw-rgb");
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
    g_object_set(G_OBJECT(comp), "caps", caps, NULL);
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", "source1");
    // start/media-start are 64-bit properties, so pass 64-bit zeros
    // through the varargs.
    g_object_set(G_OBJECT(src1), "location", "loop1.mp4",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 30 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 30 * GST_SECOND,
                 "priority", 1,
                 "caps", caps,
                 NULL);
   
    GstElement *src2 = gst_element_factory_make("gnlfilesource", "source2");
    g_object_set(G_OBJECT(src2), "location", "loop2.mp4",
                 "start", G_GUINT64_CONSTANT(0),
                 "duration", 30 * GST_SECOND,
                 "media-start", G_GUINT64_CONSTANT(0),
                 "media-duration", 30 * GST_SECOND,
                 "priority", 2,
                 "caps", caps,
                 NULL);
   
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper), "caps", caps, "expandable", TRUE, NULL);
   
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    // listen for the input priority change signal
    // to update the video mixer pad
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *identity = gst_element_factory_make("identity", "ident");
    g_object_set(identity, "single-segment", TRUE, NULL);
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);
   
    gst_bin_add_many(GST_BIN(pipeline), comp, color, identity, enc, mux, queue, sink, NULL);
   
    /*
    g_object_connect (comp, "signal::pad-added",
                      onPad, mixer, NULL);
    */
   
    gst_element_link_many(mixer, queue, color, identity, enc, mux, sink, NULL);


It seems to link up fine, but fails to stream as it gets stuck in the paused state.  I've tried adding queues all around, but no luck.  Here is the debug output:

** Message: Creating run loop...
** Message: Building pipeline...
** Message: Attaching to bus...
** Message: Setting state to PLAYING...
0:00:00.141504000 91821 0x100609d30 WARN            gnlsource gnlsource.c:545:gnl_source_change_state:<source2> Couldn't find a valid source pad
0:00:00.162296000 91821 0x100609d30 WARN       GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.191513000 91821 0x100609d30 WARN            gnlsource gnlsource.c:545:gnl_source_change_state:<source1> Couldn't find a valid source pad
0:00:00.199956000 91821 0x100686b10 WARN              qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux0> unknown version 00000000
0:00:00.200693000 91821 0x100609d30 WARN       GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.210835000 91821 0x10064f030 WARN              qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux1> unknown version 00000000
0:00:00.244706000 91821 0x101879f60 WARN            gnlsource gnlsource.c:221:element_pad_added_cb:<source2> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
0:00:00.244852000 91821 0x1006501f0 WARN            gnlsource gnlsource.c:221:element_pad_added_cb:<source1> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_1(videomixer0::sink_1) - 2
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_0(videomixer0::sink_0) - 1

I've been battling with this for a bit now and can't seem to make any headway.  Any input is, again, much appreciated.

Best,
Tim


On Sun, Nov 28, 2010 at 3:58 AM, Edward Hervey <[hidden email]> wrote:
Hi,

On Sat, 2010-11-27 at 13:50 -0500, Timothy Braun wrote:
> Kapil,
>   Thanks for the suggestion, but with multifilesrc I would have to
> have the files static with incrementing named files.  A single box in
> the 3x2 grid may contain the same file multiple times so I'm afraid it
> won't be the best solution.
>
>   I guess, ultimately, there's multiple ways to attack this one from
> what I've been able to find.  Here are the two that I've been looking
> at, I'm just not sure which is the better solution at this point:
>       * A single gnonlin composition with a 4 gnlsources similar in
>         setup as the gst-launch text I have below.

 Using one composition would actually be the 'proper' way.

>       * 6 gnonlin compositions, each feeding to a single videomixer
>         which combines them into the final frame.
>               * This path I'm currently investigating.  I have a test
>                 written in C, but I'm having some difficulties with
>                 pad linkage as I still don't have a complete
>                 understanding of when certain things will exist and
>                 how to get them.
>               * Here's currently whats happening:
>                       * Create a new pipeline
>                       * Create a videomixer
>                       * Create 6 gnonlin compositions each with a
>                         pad-added signal callback to connect
>                         gnlcomposition pad to videomixer.
>                       * ... (this is were it's going wrong)
>               * In the pad-added callback I have:
>                       * static void onPad(GstElement *comp, GstPad
>                         *pad, GstElement *sink) {
>                             GstPad *v = gst_element_get_pad(sink,
>                         "sink");
>                             gst_pad_link(pad, v);
>                             gst_object_unref(v);
>                         }
>                       * gst_element_get_pad is not returning a pad
>                         from the video mixer (sink) which leads me to
>                         believe that I'm either not asking in the
>                         right manner or the pad doesn't exist.  (I'm
>                         aware that gst_element_get_pad is deprecated,
>                         I'm just looking to test at the moment)
>                       * I noticed in one of the repositories under a
>                         unit test, the videomixer was attached as a
>                         gnloperation?  Is this the better path to
>                         take?
>
>   This all leads me to a couple more questions as well:
>       * A video mixer pad has xpos and ypos properties.  This would
>         let me shift the video around without needing a video box
>         which I believe may be more efficient?

 Yes, it will be more efficient.

>       * If I use the xpos and ypos properties, is the video mixer
>         smart enough to change the frame size appropriately or will it
>         simply crop the frame to the size of the largest input frame?
>               * If so, would it be better to add a videobox to do the
>                 adjustments for me, or feed in a solid color
>                 background of the required output size?

 No, it won't change the size, but what you could do is mix the
original sizes with original offsets and then downconvert the video
later.

 Example for one 3x2 segment:

 Create a gnloperation with a videomixer in it with a gnl priority of
0.
 Create a gnlfilesource for each clip with increasing priorities (1->6)
going from left-right and then top to bottom:
    1  2  3
    4  5  6

 Connect to the gnloperation 'input-priority-changed' signal. When your
callback is called, you will know which priority is being connected to
which gnloperation ghostpad. You can get the videomixer sink pad by
using the gst_ghost_pad_get_target() method and then setting the proper
xpos/ypos property on that pad based on the priority of the feed being
provided.
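
[Editorial sketch.] The xpos/ypos bookkeeping in that callback is plain arithmetic; for the 3x2 grid of 240x240 tiles with priorities 1-6 (left to right, then top to bottom) it can be sketched as:

```c
#include <assert.h>

/* Map a gnl priority (1..6) to videomixer xpos/ypos for a 3x2 grid of
 * 240x240 tiles, numbered left to right, then top to bottom.  In the
 * input-priority-changed callback you would fetch the real mixer sink pad
 * via gst_ghost_pad_get_target() and g_object_set() these values on it. */
static void grid_position(int priority, int *xpos, int *ypos)
{
    int index = priority - 1;   /* gnl priorities here start at 1 */
    *xpos = (index % 3) * 240;  /* column: 0, 240 or 480 */
    *ypos = (index / 3) * 240;  /* row: 0 or 240 */
}
```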

 Set 'video/x-raw-yuv;video/x-raw-rgb' as the caps property on all your
sources.

 Set duration and media-duration of *all* gnlobjects to the same
duration.
 If you want to add another segment of 3x2 clips, you'll need to re-add
all those 7 objects with a modified 'start' property.

 First connect your composition to an imagesink to make sure the result
is what you want. When it is, insert a videoscale element followed with
a capsfilter with your target resolution.

 Hope this helps.

>
>   Thanks again for the time.  I know there's a lot of questions above,
> but any help of any kind is greatly appreciated.
>
>   All the best,
>   Tim
>
>
> On Fri, Nov 26, 2010 at 1:04 AM, Kapil Agrawal <[hidden email]>
> wrote:
>         Just a quick clue that might help, try using multifilesrc ?
>
>
>         On Thu, Nov 25, 2010 at 9:47 PM, Timothy Braun
>         <[hidden email]> wrote:
>
>
>                 Hello Everyone,
>                   I'm fairly new to G-Streamer so any input you can
>                 provide is much appreciated.  I'm working on a project
>                 where we need to generate a 2 minute video which is a
>                 composite of a total of 24 input videos.  The output
>                 video will have 4 different 30 second sections, each
>                 containing a 3x2 grid of the smaller input videos.
>                 The input videos are all naturally at 240x240 with the
>                 goal of having a final output frame size of 720x480.
>
>                   Using gst-launch, I've been able to construct a
>                 sample 30 second clip using a combination of inputs,
>                 videoboxes and a videomixer.  Here is what I've come
>                 up with so far:
>
>                 videomixer name=mix ! ffmpegcolorspace !
>                 ffenc_mpeg1video ! ffmux_mpeg name=mux ! queue !
>                 filesink location=output.mpg
>                 adder name=adder ! audioconvert ! ffenc_mp2 ! mux.
>                 filesrc location=loop1.mp4 ! decodebin name=decode1
>                 decode1. ! videobox border-alpha=0 top=-240 left=0 !
>                 queue ! mix.
>                 decode1. ! adder.
>                 filesrc location=loop2.mp4 ! decodebin name=decode2
>                 decode2. ! videobox border-alpha=0 top=-240
>                 left=-240 ! queue ! mix.
>                 decode2. ! adder.
>                 filesrc location=loop3.mp4 ! decodebin name=decode3
>                 decode3. ! videobox border-alpha=0 top=-240
>                 left=-480 ! queue ! mix.
>                 decode3. ! adder.
>                 filesrc location=loop4.mp4 ! decodebin name=decode4
>                 decode4. ! videobox border-alpha=0 top=0 left=0 !
>                 queue ! mix.
>                 decode4. ! adder.
>                 filesrc location=loop5.mp4 ! decodebin name=decode5
>                 decode5. ! videobox border-alpha=0 top=0 left=-240 !
>                 queue ! mix.
>                 decode5. ! adder.
>                 filesrc location=loop6.mp4 ! decodebin name=decode6
>                 decode6. ! videobox border-alpha=0 top=0 left=-480 !
>                 queue ! mix.
>                 decode6. ! adder.
>
>                   Now I need to do this 4 times, each time with a
>                 potentially different video in each box.  I've started
>                 looking into C interfaces as there's other pieces of
>                 the puzzle which need to be tied into this, and I am
>                 trying to determine the best way to tackle this.  I
>                 originally was looking at Gnonlin, but the
>                 documentation is lacking in regards to how
>                 gnloperations work.  I also recently stumbled upon the
>                 GES library by Edward Hervey, this looks promising as
>                 well, but I haven't been able to spend much time on
>                 it.
>
>                   If I go the Gnonlin route, I believe I would need 6
>                 compositions, one for each box.  At the 30 second
>                 marker, I would swap the filesource to a new one using
>                 dynamic pads and listening for messages on the
>                 pipeline bus.  Am I far off on this?  Any suggestions?
>
>                   As for the GES library, it looks very promising and
>                 powerful from the little I read on it.  Would this be
>                 the smarter route to take?  If so, does anyone have
>                 any suggestions for how the pipeline would be
>                 structured?
>
>                   Thank you in advance for your time on this and I
>                 truly appreciate any information you are willing to
>                 share with me.
>
>                   Happy Thanksgiving,
>                   Tim
>
>
>
>
>
>
>         --
>         www.mediamagictechnologies.com (Gstreamer, ffmpeg, Red5,
>         Streaming)
>         twitter handle: @gst_kaps
>         http://www.linkedin.com/in/kapilagrawal
>
>
>







Reply | Threaded
Open this post in threaded view
|

Re: Compositing and GStreamer

Timothy Braun
Hey Everyone,
  I just wanted to send out a very heartfelt thanks to those that helped me over the last few days.  Both via the mail list and irc.  Your assistance has been most appreciated. 

  Turns out the issue was that I should have RTFM.  During my tests I had overlooked one of the documents on the GStreamer website, which turned out to be most useful.  That's not to neglect the assistance from others; without it I wouldn't have known what to look for, and in the end pads were my nemesis.

  I have a test working: 6 videos in a single frame, all composed with a gnlcomposition using a videomixer in a gnloperation.  Very exciting.  I also have a second sequence of 6 videos (in a 3x2 grid) in the same composition.  Here's my next, and hopefully final, question.  The first sequence's gnlsources are configured like:
  • start: 0
  • duration: 30 * GST_SECOND
  • media-start: 0
  • media-duration: 30 * GST_SECOND
  • priority: incrementing by 1 depending on grid location
  This all works wonderfully.  When I add another sequence with the gnlsources configured like:
  • start: 30 * GST_SECOND
  • duration: 30 * GST_SECOND
  • media-start: 0
  • media-duration: 30 * GST_SECOND
  • priority: incrementing by 1 depending on grid location
  All the videos vanish and I end up with a blank frame with a duration of 60 seconds.  If I set the media-start property to 30 * GST_SECOND, it all works, but the second sequence of videos is off by 30 seconds.  The source files are the same, although the elements are unique to their place in the stream.  This leads me to believe that something is happening during a seek of some nature.  Would adding the second sequence of videos upon receiving the EOS signal fix this, or is there something else going on?  If it should happen on EOS, is there anything I should be aware of when adding elements to the pipeline while handling the signal?
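
[Editorial sketch, not the poster's code.] Edward's earlier note ("you'll need to re-add all those 7 objects with a modified 'start'") suggests the second sequence also needs its own gnloperation spanning the 30-60 s interval, not just new sources: an operation covering only 0-30 s leaves sources that start at 30 s with no mixer to land in. A hedged sketch under the gnonlin 0.10 API; the helper name is illustrative:

```c
#include <gst/gst.h>

/* Sketch: one gnloperation/videomixer per 30-second segment.
 * Call with segment_start = 0, 30 * GST_SECOND, 60 * GST_SECOND, ... */
static GstElement *
make_segment_mixer (guint64 segment_start)
{
    GstElement *oper  = gst_element_factory_make ("gnloperation", NULL);
    GstElement *mixer = gst_element_factory_make ("videomixer", NULL);

    gst_bin_add (GST_BIN (oper), mixer);
    g_object_set (oper,
                  "start", segment_start,
                  "duration", 30 * GST_SECOND,
                  "media-start", G_GUINT64_CONSTANT (0),
                  "media-duration", 30 * GST_SECOND,
                  "priority", 0,
                  NULL);
    return oper;
}
```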

  Again, most appreciated and when this is all said and done I'd be more than happy to write up a wiki document on what I've learned about gnonlin, pads and troubleshooting links between them.

  All the Best,
  Tim


On Tue, Nov 30, 2010 at 3:24 AM, Timothy Braun <[hidden email]> wrote:
Ok, making some headway.  I made a 720x480 input, in mpg form, for the composition which makes the video mixer output the proper size.  I've also dropped the mp4's in favor of mpeg 1 videos as it seemed to be causing an issue.  Using the priority change callback, I've been able to move the videos around in the frame.  Now onto my new issue :).

  This is my current bit of code:


    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src1), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop1.mpg",
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 1,
                 NULL);
    GstElement *src2 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src2), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop2.mpg",
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 2,
                 NULL);
    GstElement *src3 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src3), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop3.mpg",
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 3,
                 NULL);
    GstElement *src6 = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(src6), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/loop6.mpg",
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 6,
                 NULL);
    GstElement *bg = gst_element_factory_make("gnlfilesource", NULL);
    g_object_set(G_OBJECT(bg), "location", "/Users/tbraun/Documents/Mac Applications/ccvideorenderer/background.mpg",
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 7,

                 NULL);
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper),
                 "start", 0,
                 "duration", 5 * GST_SECOND,
                 "media-start", 0,
                 "media-duration", 5 * GST_SECOND,
                 "priority", 0,

                 NULL);
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), src3);
    gst_bin_add(GST_BIN(comp), src6);
    gst_bin_add(GST_BIN(comp), bg);

    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);

    gst_bin_add_many(GST_BIN(pipeline), comp, color, enc, mux, queue, sink, NULL);

   
    g_object_connect (comp, "signal::pad-added",
                      onPad, color, NULL);

    gst_element_link_many(color, enc, mux, queue, sink, NULL);
   
    return pipeline;

This works and outputs a 720x480 video containing the 4 input videos in the proper places.  If I try to add the two additional videos, 6 videos at 240x240 plus the bg video at 720x480, the process fails while spitting out a plethora of these:

0:00:06.174861000 [331m47454 [00m    0x1006dc460 [33;01mWARN   [00m [00m            mp3parse gstmpegaudioparse.c:1601:head_check:<mpegaudioparse4> [00m invalid sync

  It occurs if I add either 1 of the 2 missing videos.

  The other issue seems to be with sync.  The source videos for testing have an embedded timestamp in the frame image.  The initial frames are off for each of the source videos and they all end at a different time.  Any suggestions?

  Again, any input is greatly appreciated.  Even a nice "it's not possible" would be of great assistance.

  All the Best,
  Tim




On Mon, Nov 29, 2010 at 5:00 PM, Timothy Braun <[hidden email]> wrote:
You'll have to excuse my ignorance in my last email.  I had the mixer in the gnloperation, but I had the src pad of the mixer attached to the final output segment rather than the composition src pad.  I fixed that, but am still running into issues.

So, currently, I have a pipeline which resembles this:


+--------------------------------+
| gnlcomposition                 |
| +----------------------------+ |
| | Box 1 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |->| queue |->| filesink (plus encoder stuff) |
| | Box 2 (gnlfilesource)      | |  +-------+  +-------------------------------+
| +----------------------------+ |
| | Video Mixer (gnloperation) | |
| +----------------------------+ |
+--------------------------------+

  I hope that comes through OK.  Box 1, Box 2 and the video mixer all have the same duration, and the input-priority-changed callback is triggering the update to xpos and ypos on the video mixer pads.  Ultimately there will be 6 boxes per frame, but I'm trying to keep it simple at the moment.
 
  The issue I'm currently running into is with the video mixer.  With Box 1 and Box 2 being 240x240 px, the mixer's output frame is also 240x240 px.  Is there a way to feed a blank 720x480 frame into the mixer so it outputs the proper resolution, or am I looking at this the wrong way?
  I've tried to input a 720x480 frame from a videotestsrc, but am having issues (structurally I believe) getting my test source to exist in a gnlsource at a 720x480 resolution.
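One approach worth trying for this (a sketch in gst-launch syntax, untested here; the pattern and framerate values are assumptions) is to feed the mixer an extra black videotestsrc input pinned to the canvas size with a capsfilter:

```
videotestsrc pattern=black !
    video/x-raw-yuv,width=720,height=480,framerate=25/1 !
    queue ! mix.
```

Inside gnonlin, the same idea would mean wrapping that source in a gnlsource whose caps property carries the fixed width and height.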

  Again, any help is greatly, greatly appreciated as I'm running short on time to get this complete.

  Best,
  Tim


On Sun, Nov 28, 2010 at 9:06 PM, Timothy Braun <[hidden email]> wrote:
Hi Edward,
  Thanks for the quick reply.  It has been most informative.  A couple questions if I may.  For the video rescale/capsfilter part, what is this doing and how should it look in the pipeline?  The input videos are all 240x240 px with a goal output frame size of 720x480px.

  For a very raw test, I am using this to build my pipeline:

    GstPipeline *pipeline = GST_PIPELINE(gst_pipeline_new("pipeline"));
   
    GstCaps *caps = gst_caps_from_string("video/x-raw-yuv;video/x-raw-rgb");
   
    // Create our composition
    GstElement *comp = gst_element_factory_make("gnlcomposition", "composition");
    g_object_set(G_OBJECT(comp), "caps", caps, NULL);
   
    GstElement *src1 = gst_element_factory_make("gnlfilesource", "source1");
    g_object_set(G_OBJECT(src1), "location", "loop1.mp4",
                 "start", (guint64) 0,          // start/media-start are guint64 properties;
                 "duration", 30 * GST_SECOND,   // an uncast 0 in varargs is read as an int
                 "media-start", (guint64) 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 1,
                 "caps", caps,
                 NULL);
   
    GstElement *src2 = gst_element_factory_make("gnlfilesource", "source2");
    g_object_set(G_OBJECT(src2), "location", "loop2.mp4",
                 "start", (guint64) 0,
                 "duration", 30 * GST_SECOND,
                 "media-start", (guint64) 0,
                 "media-duration", 30 * GST_SECOND,
                 "priority", 2,
                 "caps", caps,
                 NULL);
   
    GstElement *oper = gst_element_factory_make("gnloperation", NULL);
    g_object_set(G_OBJECT(oper), "caps", caps, "expandable", TRUE, NULL);
   
    GstElement *mixer = gst_element_factory_make("videomixer", NULL);
    gst_bin_add(GST_BIN(oper), mixer);
   
    // listen for the input priority change signal
    // to update the video mixer pad
    g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );
   
    // add the sources to the composition
    gst_bin_add(GST_BIN(comp), src1);
    gst_bin_add(GST_BIN(comp), src2);
    gst_bin_add(GST_BIN(comp), oper);
   
    // build the output stream
    GstElement *color = gst_element_factory_make("ffmpegcolorspace", "colorspace");
   
    GstElement *identity = gst_element_factory_make("identity", "ident");
    g_object_set(identity, "single-segment", TRUE, NULL);
   
    GstElement *enc = gst_element_factory_make("ffenc_mpeg1video", "encoder");
    GstElement *mux = gst_element_factory_make("ffmux_mpeg", "mux");
    GstElement *queue = gst_element_factory_make("queue", "queue");
   
    GstElement *sink = gst_element_factory_make("filesink", "sink");
    g_object_set(sink, "location", "output.mpg", NULL);
   
    gst_bin_add_many(GST_BIN(pipeline), comp, color, identity, enc, mux, queue, sink, NULL);
   
    /*
    g_object_connect (comp, "signal::pad-added",
                      onPad, mixer, NULL);
    */
   
    gst_element_link_many(mixer, queue, color, identity, enc, mux, sink, NULL);


It seems to link up fine, but fails to stream as it gets stuck in the paused state.  I've tried adding queues all around, but no luck.  Here is the debug output:

** Message: Creating run loop...
** Message: Building pipeline...
** Message: Attaching to bus...
** Message: Setting state to PLAYING...
0:00:00.141504000 91821 0x100609d30 WARN gnlsource gnlsource.c:545:gnl_source_change_state:<source2> Couldn't find a valid source pad
0:00:00.162296000 91821 0x100609d30 WARN GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.191513000 91821 0x100609d30 WARN gnlsource gnlsource.c:545:gnl_source_change_state:<source1> Couldn't find a valid source pad
0:00:00.199956000 91821 0x100686b10 WARN qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux0> unknown version 00000000
0:00:00.200693000 91821 0x100609d30 WARN GST_SCHEDULING gstpad.c:4692:gst_pad_get_range:<source:src> getrange failed unexpected
0:00:00.210835000 91821 0x10064f030 WARN qtdemux qtdemux.c:5801:qtdemux_parse_trak:<qtdemux1> unknown version 00000000
0:00:00.244706000 91821 0x101879f60 WARN gnlsource gnlsource.c:221:element_pad_added_cb:<source2> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
0:00:00.244852000 91821 0x1006501f0 WARN gnlsource gnlsource.c:221:element_pad_added_cb:<source1> We already have (pending) ghost-ed a valid source pad (ghostpad:'':'', pendingblock:1
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_1(videomixer0::sink_1) - 2
** Message: Priority Changed: composition::gnloperation0->gnloperation0::sink_0(videomixer0::sink_0) - 1

I've been battling with this for a bit now, can't seem to make any headway.  Any input is, again, much appreciated.

Best,
Tim


On Sun, Nov 28, 2010 at 3:58 AM, Edward Hervey <[hidden email]> wrote:
Hi,

On Sat, 2010-11-27 at 13:50 -0500, Timothy Braun wrote:
> Kapil,
>   Thanks for the suggestion, but with multifilesrc I would have to
> have the files static with incrementing named files.  A single box in
> the 3x2 grid may contain the same file multiple times so I'm afraid it
> won't be the best solution.
>
>   I guess, ultimately, there's multiple ways to attack this one from
> what I've been able to find.  Here are the two that I've been looking
> at, I'm just not sure which is the better solution at this point:
>       * A single gnonlin composition with a 4 gnlsources similar in
>         setup as the gst-launch text I have below.

 Using one composition would actually be the 'proper' way.

>       * 6 gnonlin compositions, each feeding to a single videomixer
>         which combines them into the final frame.
>               * This path I'm currently investigating.  I have a test
>                 written in C, but I'm having some difficulties with
>                 pad linkage as I still don't have a complete
>                 understanding of when certain things will exist and
>                 how to get them.
>               * Here's currently what's happening:
>                       * Create a new pipeline
>                       * Create a videomixer
>                       * Create 6 gnonlin compositions each with a
>                         pad-added signal callback to connect
>                         gnlcomposition pad to videomixer.
>                       * ... (this is where it's going wrong)
>               * In the pad-added callback I have:
>                       * static void onPad(GstElement *comp, GstPad
>                         *pad, GstElement *sink) {
>                             GstPad *v = gst_element_get_pad(sink,
>                         "sink");
>                             gst_pad_link(pad, v);
>                             gst_object_unref(v);
>                         }
>                       * gst_element_get_pad is not returning a pad
>                         from the video mixer (sink) which leads me to
>                         believe that I'm either not asking in the
>                         right manner or the pad doesn't exist.  (I'm
>                         aware that gst_element_get_pad is deprecated,
>                         I'm just looking to test at the moment)
>                       * I noticed in one of the repositories under a
>                         unit test, the videomixer was attached as a
>                         gnloperation?  Is this the better path to
>                         take?
>
>   This all leads me to a couple more questions as well:
>       * A video mixer pad has xpos and ypos properties.  This would
>         let me shift the video around without needing a video box
>         which I believe may be more efficient?

 Yes, it will be more efficient.

>       * If I use the xpos and ypos properties, is the video mixer
>         smart enough to change the frame size appropriately or will it
>         simply crop the frame to the size of the largest input frame?
>               * If so, would it be better to add a videobox to do the
>                 adjustments for me, or feed in a solid color
>                 background of the required output size?

 No, it won't change the size, but what you could do is mix the
original sizes with original offsets and then downconvert the video
later.

 Example for one 3x2 segment:

 Create a gnloperation with a videomixer in it with a gnl priority of
0.
 Create a gnlfilesource for each clip with increasing priorities (1->6)
going from left-right and then top to bottom:
    1  2  3
    4  5  6

 Connect to the gnloperation 'input-priority-changed' signal. When your
callback is called, you will know which priority is being connected to
which gnloperation ghostpad. You can get the videomixer sink pad by
using the gst_ghost_pad_get_target() method and then setting the proper
xpos/ypos property on that pad based on the priority of the feed being
provided.

 Set 'video/x-raw-yuv;video/x-raw-rgb' as the caps property on all your
sources.

 Set duration and media-duration of *all* gnlobjects to the same
duration.
 If you want to add another segment of 3x2 clips, you'll need to re-add
all those 7 objects with a modified 'start' property.

 First connect your composition to an imagesink to make sure the result
is what you want. When it is, insert a videoscale element followed with
a capsfilter with your target resolution.
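In gst-launch terms (0.10 caps names; a sketch, not a tested pipeline, with ximagesink standing in for whichever image sink is used), that final stage would look something like:

```
... ! videoscale ! video/x-raw-yuv,width=720,height=480 ! ffmpegcolorspace ! ximagesink
```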

 Hope this helps.

>
>   Thanks again for the time.  I know there's a lot of questions above,
> but any help of any kind is greatly appreciated.
>
>   All the best,
>   Tim
>
>
> On Fri, Nov 26, 2010 at 1:04 AM, Kapil Agrawal <[hidden email]>
> wrote:
>         Just a quick clue that might help, try using multifilesrc ?
>
>
>         On Thu, Nov 25, 2010 at 9:47 PM, Timothy Braun
>         <[hidden email]> wrote:
>
>
>                 Hello Everyone,
>                   I'm fairly new to G-Streamer so any input you can
>                 provide is much appreciated.  I'm working on a project
>                 where we need to generate a 2 minute video which is a
>                 composite of a total of 24 input videos.  The output
>                 video will have 4 different 30 second sections, each
>                 containing a 3x2 grid of the smaller input videos.
>                 The input videos are all naturally at 240x240 with the
>                 goal of having a final output frame size of 720x480.
>
>                   Using gst-launch, I've been able to construct a
>                 sample 30 second clip using a combination of inputs,
>                 videoboxes and a videomixer.  Here is what I've come
>                 up with so far:
>
>                 videomixer name=mix ! ffmpegcolorspace !
>                 ffenc_mpeg1video ! ffmux_mpeg name=mux ! queue !
>                 filesink location=output.mpg
>                 adder name=adder ! audioconvert ! ffenc_mp2 ! mux.
>                 filesrc location=loop1.mp4 ! decodebin name=decode1
>                 decode1. ! videobox border-alpha=0 top=-240 left=0 !
>                 queue ! mix.
>                 decode1. ! adder.
>                 filesrc location=loop2.mp4 ! decodebin name=decode2
>                 decode2. ! videobox border-alpha=0 top=-240
>                 left=-240 ! queue ! mix.
>                 decode2. ! adder.
>                 filesrc location=loop3.mp4 ! decodebin name=decode3
>                 decode3. ! videobox border-alpha=0 top=-240
>                 left=-480 ! queue ! mix.
>                 decode3. ! adder.
>                 filesrc location=loop4.mp4 ! decodebin name=decode4
>                 decode4. ! videobox border-alpha=0 top=0 left=0 !
>                 queue ! mix.
>                 decode4. ! adder.
>                 filesrc location=loop5.mp4 ! decodebin name=decode5
>                 decode5. ! videobox border-alpha=0 top=0 left=-240 !
>                 queue ! mix.
>                 decode5. ! adder.
>                 filesrc location=loop6.mp4 ! decodebin name=decode6
>                 decode6. ! videobox border-alpha=0 top=0 left=-480 !
>                 queue ! mix.
>                 decode6. ! adder.
>
>                   Now I need to do this 4 times, each time with a
>                 potentially different video in each box.  I've started
>                 looking into C interfaces as there's other pieces of
>                 the puzzle which need to be tied into this, and I am
>                 trying to determine the best way to tackle this.  I
>                 originally was looking at Gnonlin, but the
>                 documentation is lacking in regards to how
>                 gnloperations work.  I also recently stumbled upon the
>                 GES library by Edward Hervey, this looks promising as
>                 well, but I haven't been able to spend much time on
>                 it.
>
>                   If I go the Gnonlin route, I believe I would need 6
>                 compositions, one for each box.  At the 30 second
>                 marker, I would swap the filesource to a new one using
>                 dynamic pads and listening for messages on the
>                 pipeline bus.  Am I far off on this?  Any suggestions?
>
>                   As for the GES library, it looks very promising and
>                 powerful from the little I read on it.  Would this be
>                 the smarter route to take?  If so, does anyone have
>                 any suggestions for how the pipeline would be
>                 structured?
>
>                   Thank you in advance for your time on this and I
>                 truly appreciate any information you are willing to
>                 share with me.
>
>                   Happy Thanksgiving,
>                   Tim
>
>
>                 ------------------------------------------------------------------------------
>                 Increase Visibility of Your 3D Game App & Earn a
>                 Chance To Win $500!
>                 Tap into the largest installed PC base & get more eyes
>                 on your game by
>                 optimizing for Intel(R) Graphics Technology. Get
>                 started today with the
>                 Intel(R) Software Partner Program. Five $500 cash
>                 prizes are up for grabs.
>                 http://p.sf.net/sfu/intelisp-dev2dev
>                 _______________________________________________
>                 gstreamer-devel mailing list
>                 [hidden email]
>                 https://lists.sourceforge.net/lists/listinfo/gstreamer-devel
>
>
>
>
>         --
>         www.mediamagictechnologies.com (Gstreamer, ffmpeg, Red5,
>         Streaming)
>         twitter handle: @gst_kaps
>         http://www.linkedin.com/in/kapilagrawal
>
>
>









Re: Compositing and GStreamer

amartin
Dear Timothy and Edward,

Please, would you mind also posting the code of the callback function onPriorityChange that includes the gst_ghost_pad_get_target() call?

g_object_connect(oper, "signal::input-priority-changed", onPriorityChange, mixer, NULL );

I am trying to implement the callback function, but I can't get it to work (it is never called).

I've tried several alternatives, based on the following code:
static void
onPriorityChange (GstElement *element,
                  GstPad *pad,
                  guint newpriority,   // based on the gst-inspect description of the gnloperation signal
                  gpointer data)
{
        GstPad *sinkpad;

        printf("PRIORITY %d\n", newpriority);

        // data is a program object with a lot of data, including the mixer
        GstManager *gstManager = (GstManager *) data;

        // the gnloperation ghost pad targets the videomixer sink pad
        sinkpad = gst_ghost_pad_get_target (GST_GHOST_PAD (pad));

        // Some code to retrieve the proper xpos and ypos from gstManager according to newpriority

        g_object_set(sinkpad, "zorder", newpriority, NULL);
        g_object_set(sinkpad, "xpos", gstManager->left, NULL);
        g_object_set(sinkpad, "ypos", gstManager->top, NULL);

        gst_object_unref (sinkpad);
}

Thank you in advance,

Angel

Re: Compositing and GStreamer

amartin
Dear all,

I've got the answer to my question.

gst_pad_get_name(gst_pad_get_parent_element(gst_pad_get_peer(pad))) returns the name of the parent element of the pad linked to the videomixer.

So, according to this information we can simply define:
static void
onPriorityChange (GstElement *element,
                  GstPad *pad,
                  guint newpriority,
                  gpointer data)
{
        GstPad *sinkpad = gst_ghost_pad_get_target (GST_GHOST_PAD (pad));

        GstManager *gstManager = (GstManager *) data; // app class with the information

        // The name identifies the linked source, so the proper xpos, ypos and zorder
        // values can be retrieved from gstManager according to the designed grid:
        // name = gst_pad_get_name(gst_pad_get_parent_element(gst_pad_get_peer(pad)));
        // Some code to do that

        g_object_set(sinkpad, "zorder", gstManager->priority, NULL);
        g_object_set(sinkpad, "xpos", gstManager->left, NULL);
        g_object_set(sinkpad, "ypos", gstManager->top, NULL);

        gst_object_unref (sinkpad);
}

One thing that I do not understand yet is why the value of newpriority is always 0 (the value defined for the gnloperation) instead of the priority of the linked gnlfilesource.

Best,

Angel