Using gnonlin to extract time segments from a video file

Using gnonlin to extract time segments from a video file

Rodrigo Manhães
Hi,

I need to build a small command-line tool that extracts a time
segment from a video file, creating another video file containing the
desired segment. I've tried to do this by creating a pipeline whose
source is a gnlfilesource and whose sink is a filesink, limiting
playback to the desired time window.

I got good results doing the same thing with audio files. For video
files, however, the gnlfilesource's start and duration properties are
ignored. Does gnonlin have some limitation here, or is the problem in
my code?

I've also tried to do the same thing with GStreamer alone, without
gnonlin, and had no success.

The language I'm using is Python.

Any help or pointers to further information would be appreciated.

[]'s
Rodrigo

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/gstreamer-devel

Re: Using gnonlin to extract time segments from a video file

Behdad Esfahbod
Hi Rodrigo,

Not relevant to your question/experience anymore, but this is my experience
trying to do the same thing three years ago:

  http://mces.blogspot.com/2006/03/cutting-vcds-or-how-i-learned-to-stop.html

:)

behdad



Re: Using gnonlin to extract time segments from a video file

Edward Hervey
Hi,

  For your use case, set the following properties to cut a segment of
duration D starting at offset A (GStreamer times are in nanoseconds):
  # outer time-realm (what the composition outputs)
  gnlsource.props.start = 0    # the output starts at 0 ...
  gnlsource.props.duration = D # ... and lasts D nanoseconds
  # inner time-realm (what you are extracting from the file)
  gnlsource.props.media_start = A
  gnlsource.props.media_duration = D

  One thing to take into account is that gnlfilesource will decode its
contents to raw audio/video (because it uses decodebin(2)). That means
you'll need an encoder and a muxer after your composition.
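[Editor's note: the arithmetic behind these four properties can be sketched in plain Python, with no GStreamer required. GST_SECOND mirrors the value of gst.SECOND, and segment_props is a hypothetical helper name used only for illustration.]

```python
GST_SECOND = 10 ** 9  # GStreamer expresses all times in nanoseconds

def segment_props(a_seconds, d_seconds):
    """Properties for a gnlfilesource that cuts D seconds starting at A."""
    return {
        'start': 0,                               # output timeline starts at 0 ...
        'duration': d_seconds * GST_SECOND,       # ... and lasts D seconds
        'media-start': a_seconds * GST_SECOND,    # read from offset A in the file
        'media-duration': d_seconds * GST_SECOND, # for D seconds of material
    }
```

For example, cutting 10 seconds starting at 0:30 yields media-start = 30 * 10**9 and media-duration = 10 * 10**9 nanoseconds.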

  As a side note (though you may not hit this in such a simple use
case), gnlfilesource has a few issues due to the queues it uses
internally. If the resulting file stops before the end, you will have
to use a queue-less bin within a regular gnlsource.
  Since you're using Python, you can reuse an element I created in
PiTiVi: SingleDecodeBin (git.pitivi.org,
pitivi/elements/singledecodebin.py) and put it in a regular
'gnlsource' along with the properties above.

  I hope this helps.

    Edward


Re: Using gnonlin to extract time segments from a video file

Rodrigo Manhães
Hi,

First of all, thanks for the replies!

I applied your instructions about the properties to some examples
found on the Internet, but it didn't work. So I created a video
pipeline myself, and I'm having trouble linking the dynamic pads from
gnlfilesource to both the video and audio streams. According to the
GStreamer documentation, decodebin emits one "new-decoded-pad" signal
for each decoded stream, but neither gnlcomposition nor gnlfilesource
accepts a new-decoded-pad signal. How can I register a callback for
these pads?

[]'s
Rodrigo



Re: Using gnonlin to extract time segments from a video file

Edward Hervey
Hi,

On Wed, 2009-02-11 at 23:52 -0200, Rodrigo Manhães wrote:

> Neither gnlcomposition nor gnlfilesource accept a
> new-decoded-pad signal. How can I register a callback for these
> signals?

  'new-decoded-pad' is a convenience signal specific to decodebin.
The generic signal for new pads being added to any element is 'pad-added'.
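[Editor's note: the caps-based routing a 'pad-added' callback typically performs can be sketched in plain Python, with no GStreamer needed. The target names below are hypothetical stand-ins for the encoder elements a real callback would link the pad to.]

```python
def route_by_caps(caps_name, video_target='video_encoder',
                  audio_target='audio_encoder'):
    """Pick a downstream target for a newly added pad, given the name of
    its caps structure (e.g. 'video/x-raw-yuv' or 'audio/x-raw-int')."""
    if caps_name.startswith('video/'):
        return video_target
    if caps_name.startswith('audio/'):
        return audio_target
    return None  # unknown stream type: ignore it
```

In a real callback the return value would be an element whose sink pad you link: pad.link(target.get_pad('sink')).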

   Edward


Re: Using gnonlin to extract time segments from a video file

Rodrigo Manhães
2009/2/12 Edward Hervey <[hidden email]>:
>  'new-decoded-pad' is a convenience signal for new pads on decodebin.
> The normal signal for new pads being added to an element is 'pad-added'.

Using the 'pad-added' signal, the callback is called only once, for
the video stream. The pad for the audio part is never added.

[]'s
Rodrigo


Re: Using gnonlin to extract time segments from a video file

Rodrigo Manhães
Hi,

I wrote two small scripts to illustrate my problem getting dynamic
pads with gnonlin.

The first, without gnonlin, uses a filesrc and prints:

video/x-raw-yuv
audio/x-raw-int

indicating that it found both pads.
The code is below.

###
import gst
import gtk

pipeline = gst.Pipeline()

source = gst.element_factory_make('filesrc')
source.set_property('location', 'creature.mpg')
pipeline.add(source)

decoder = gst.element_factory_make('decodebin')
pipeline.add(decoder)
source.link(decoder)

def on_new_pad(bin, pad):
    print pad.get_caps()[0].get_name()

decoder.connect('pad-added', on_new_pad)

pipeline.set_state(gst.STATE_PLAYING)
gtk.main()

####

The second, using gnonlin, prints:

video/x-raw-yuv

indicating that only the video stream was found. (If I connect the
signal to the gnlfilesource rather than the gnlcomposition, the
result is the same.)
The code is below:

###
import gst
import gtk

pipeline = gst.Pipeline()

composition = gst.element_factory_make('gnlcomposition')
pipeline.add(composition)

source = gst.element_factory_make('gnlfilesource')
source.set_property('location', 'creature.mpg')
source.set_property('start', 0)
source.set_property('media-start', 0)
source.set_property('duration', 10)
source.set_property('media-duration', 10)
composition.add(source)

def on_new_pad(bin, pad):
    print pad.get_caps()[0].get_name()

source.connect('pad-added', on_new_pad)

pipeline.set_state(gst.STATE_PLAYING)
gtk.main()
#####

What am I doing wrong?

[]'s
Rodrigo


Re: Using gnonlin to extract time segments from a video file

Edward Hervey
Hi,

  GNonLin elements output exactly one stream each. If you wish to
process audio AND video, you will have to create:
  * 2 compositions (one per media type)
  * 2 sources (one per media type); set the 'caps' property of each
source to the matching type (e.g.
gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).

  Then put one source in each composition and connect each composition
to the appropriate downstream elements (encoders, muxer, sinks,
etc.).

  Why only one stream? Because it is the lowest common denominator for
all types of non-linear operations. Tests have shown that the overhead
of duplicating the sources is minimal provided you code them properly
(the only duplicated processing is the file source and demuxer).
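[Editor's note: the two-branch layout described above can be sketched as plain-Python data, with no GStreamer required, to make each branch's elements and properties explicit. The video caps string follows the example above; the audio caps string and the encoder names (theoraenc, vorbisenc) are assumptions for illustration.]

```python
GST_SECOND = 10 ** 9  # GStreamer times are in nanoseconds

def branch(media, location, start_s, dur_s):
    """Describe one gnonlin branch: a gnlcomposition holding a single
    caps-restricted gnlfilesource, feeding a per-media encoder."""
    caps = {
        'video': 'video/x-raw-yuv;video/x-raw-rgb',    # from the example above
        'audio': 'audio/x-raw-int;audio/x-raw-float',  # assumed audio analogue
    }[media]
    return {
        'composition': 'gnlcomposition',
        'source': {
            'element': 'gnlfilesource',
            'location': location,
            'caps': caps,
            'start': 0,
            'duration': dur_s * GST_SECOND,
            'media-start': start_s * GST_SECOND,
            'media-duration': dur_s * GST_SECOND,
        },
        'encoder': {'video': 'theoraenc', 'audio': 'vorbisenc'}[media],
    }

# Both branches read the same file; only the filesrc/demuxer work is duplicated.
plan = [branch('video', 'input.mpg', 30, 10),
        branch('audio', 'input.mpg', 30, 10)]
```

Each branch's encoder output would then be linked to a shared muxer (e.g. oggmux) and a filesink.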

  Hope this helps,

    Edward


Re: Using gnonlin to extract time segments from a video file

Rodrigo Manhães
Hi,

Thanks for your help and patience.

I have two more doubts:

1) start, duration, media-start and media-duration

I tried what you said, but it didn't work. This first attempt was
intended to extract only the video slice from the original file. The
values assigned to the properties are ignored and the whole video is
generated. The code is below:

import gst
import gtk

pipeline = gst.Pipeline()

composition = gst.element_factory_make('gnlcomposition')
pipeline.add(composition)

source = gst.element_factory_make('gnlfilesource')
source.set_property('location', 'he-man.mpg')
source.set_property('start', 0)
source.set_property('duration', 10 * gst.SECOND)
source.set_property('media-start', 30 * gst.SECOND)
source.set_property('media-duration', 10 * gst.SECOND)
composition.add(source)

video_encoder = gst.element_factory_make('theoraenc')
pipeline.add(video_encoder)

muxer = gst.element_factory_make('oggmux')
pipeline.add(muxer)
video_encoder.get_pad('src').link(muxer.get_pad('sink_%d'))

sink = gst.element_factory_make('filesink')
sink.set_property('location', 'output.ogg')
pipeline.add(sink)
muxer.link(sink)

def on_new_pad(bin, pad):
    print pad.get_caps()[0].get_name()
    pad.link(video_encoder.get_pad('sink'))

def on_message(bus, message):
    if message.type == gst.MESSAGE_EOS:
        pipeline.set_state(gst.STATE_NULL)
        gtk.main_quit()

composition.connect('pad-added', on_new_pad)

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message', on_message)

pipeline.set_state(gst.STATE_PLAYING)
gtk.main()


2) Getting the audio pad in a gnonlin video pipeline

>  GNonLin elements are one-output-stream-only elements. If you wish to
> do processing on audio AND video, you will have to create:
>  * 2 compositions (one per media type)
>  * 2 sources (one per media type), you will have to set the 'caps'
> properties of each of the sources to the correct type (Ex:
> gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).
>
>  Then put one source in each composition and connect each of those
> composition to the adequate downstream elements (sinks, encoders,
> etc...).

I understood your explanation, but I don't know how to program it. The
example you gave creates a gst.Caps object, but how do I set the caps
on the source or composition? I tried

source.set_property('caps', gst.Caps("audio/x-raw-int"))

but it doesn't work either; I get the following error

gst.LinkError: <enum GST_PAD_LINK_NOFORMAT of type GstPadLinkReturn>

when linking the obtained pad to the audio encoder.

[]'s
Rodrigo


2009/2/16 Edward Hervey <[hidden email]>:

> Hi,
>
>  GNonLin elements are one-output-stream-only elements. If you wish to
> do processing on audio AND video, you will have to create:
>  * 2 compositions (one per media type)
>  * 2 sources (one per media type), you will have to set the 'caps'
> properties of each of the sources to the correct type (Ex:
> gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).
>
>  Then put one source in each composition and connect each of those
> composition to the adequate downstream elements (sinks, encoders,
> etc...).
>
>  Why only one stream ? Because it is the lowest common denominator for
> all types of non-linear operations. Various tests have proven the
> overhead of duplicating the sources is minimal provided you code them
> properly (the only duplicated processing part will be the filesource and
> demuxer).
>
>  Hope this helps,
>
>    Edward
>
> On Fri, 2009-02-13 at 15:53 -0200, Rodrigo Manhães wrote:
>> Hi,
>>
>> I wrote two small scripts to explain my problem on getting dynamic
>> pads with gnonlin.
>>
>> The first, with no gnonlin, uses a filesrc, and prints:
>>
>> video/x-raw-yuv
>> audio/x-raw-int
>>
>> indicating that it found the two pads.
>> The code is below.
>>
>> ###
>> import gst
>> import gtk
>>
>> pipeline = gst.Pipeline()
>>
>> source = gst.element_factory_make('filesrc')
>> source.set_property('location', 'creature.mpg')
>> pipeline.add(source)
>>
>> decoder = gst.element_factory_make('decodebin')
>> pipeline.add(decoder)
>> source.link(decoder)
>>
>> def on_new_pad(bin, pad):
>>     print pad.get_caps()[0].get_name()
>>
>> decoder.connect('pad-added', on_new_pad)
>>
>> pipeline.set_state(gst.STATE_PLAYING)
>> gtk.main()
>>
>> ####
>>
>> The second, using gnonlin, prints:
>>
>> video/x-raw-yuv
>>
>> indicating that only the video stream was found. (If I connect the
>> signal to gnlfilesource rather than gnlcomposition, the result is the
>> same)
>> The code is below:
>>
>> ###
>> import gst
>> import gtk
>>
>> pipeline = gst.Pipeline()
>>
>> composition = gst.element_factory_make('gnlcomposition')
>> pipeline.add(composition)
>>
>> source = gst.element_factory_make('gnlfilesource')
>> source.set_property('location', 'creature.mpg')
>> source.set_property('start', 0)
>> source.set_property('media-start', 0)
>> source.set_property('duration', 10)
>> source.set_property('media-duration', 10)
>> composition.add(source)
>>
>> def on_new_pad(bin, pad):
>>     print pad.get_caps()[0].get_name()
>>
>> source.connect('pad-added', on_new_pad)
>>
>> pipeline.set_state(gst.STATE_PLAYING)
>> gtk.main()
>> #####
>>
>> What am I doing wrong?
>>
>> []'s
>> Rodrigo
>>
>> 2009/2/12 Rodrigo Manhães <[hidden email]>:
>> > 2009/2/12 Edward Hervey <[hidden email]>:
>> >>  'new-decoded-pad' is a convenience signal for new pads on decodebin.
>> >> The normal signal for new pads being added to an element is 'pad-added'.
>> >
>> > Using 'pad-added' signal, the callback is called only once, for the
>> > video stream. The pad for audio part is not added.
>> >
>> > []'s
>> > Rodrigo
>> >
>>
>> ------------------------------------------------------------------------------
>> Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
>> -OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
>> -Strategies to boost innovation and cut costs with open source participation
>> -Receive a $600 discount off the registration fee with the source code: SFAD
>> http://p.sf.net/sfu/XcvMzF8H
>> _______________________________________________
>> gstreamer-devel mailing list
>> [hidden email]
>> https://lists.sourceforge.net/lists/listinfo/gstreamer-devel
>
>
>


Re: Using gnonlin to extract time segments from a video file

Edward Hervey
Administrator
On Tue, 2009-02-17 at 16:24 -0300, Rodrigo Manhães wrote:

> Hi,
>
> thanks for your help and patience.
>
> I have two more doubts:
>
> 1) start, duration, media-start and media-duration
>
> I've tried to do what you said, but it didn't work. This first attempt
> was intended to extract only a video slice from the original file. The
> values assigned to the properties are ignored and the whole video is
> generated. The code is below:
>
> import gst
> import gtk
>
> pipeline = gst.Pipeline()
>
> composition = gst.element_factory_make('gnlcomposition')
> pipeline.add(composition)
>
> source = gst.element_factory_make('gnlfilesource')

  Your code is fine, but the mpeg file is the culprit. Seeking in mpeg
files is not 100% functional (yes, it sucks, really), and therefore
gnonlin can't work effectively with those files :(
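One detail worth flagging for anyone copying the scripts quoted earlier in this thread: GStreamer times are expressed in nanoseconds, so a plain `duration=10` means 10 ns, not 10 seconds. The later script correctly multiplies by `gst.SECOND` (which equals 10**9). A minimal, gst-free sketch of that conversion (the helper name is invented for illustration):

```python
# GStreamer clock times are in nanoseconds; gst.SECOND == 10**9 in pygst.
GST_SECOND = 10 ** 9

def gnl_segment(media_start_s, duration_s):
    """Hypothetical helper: seconds -> the nanosecond values expected by
    gnlfilesource's start/duration/media-start/media-duration properties."""
    return {
        'start': 0,                                 # position in the composition
        'duration': duration_s * GST_SECOND,        # length of the slice
        'media-start': media_start_s * GST_SECOND,  # where to cut in the source
        'media-duration': duration_s * GST_SECOND,
    }

# Extract a 10 s slice starting 30 s into the source file:
props = gnl_segment(30, 10)
print(props['media-start'])  # 30000000000
```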

> source.set_property('location', 'he-man.mpg')
> source.set_property('start', 0)
> source.set_property('duration', 10 * gst.SECOND)
> source.set_property('media-start', 30 * gst.SECOND)
> source.set_property('media-duration', 10 * gst.SECOND)
> composition.add(source)
>
> video_encoder = gst.element_factory_make('theoraenc')
> pipeline.add(video_encoder)
>
> muxer = gst.element_factory_make('oggmux')
> pipeline.add(muxer)
> video_encoder.get_pad('src').link(muxer.get_pad('sink_%d'))
>
> sink = gst.element_factory_make('filesink')
> sink.set_property('location', 'output.ogg')
> pipeline.add(sink)
> muxer.link(sink)
>
> def on_new_pad(bin, pad):
>     print pad.get_caps()[0].get_name()
>     pad.link(video_encoder.get_pad('sink'))
>
> def on_message(bus, message):
>     if message.type == gst.MESSAGE_EOS:
>         pipeline.set_state(gst.STATE_NULL)
>         gtk.main_quit()
>
> composition.connect('pad-added', on_new_pad)
>
> bus = pipeline.get_bus()
> bus.add_signal_watch()
> bus.connect('message', on_message)
>
> pipeline.set_state(gst.STATE_PLAYING)
> gtk.main()
>
>
> 2) Doubts on getting the audio pad in a gnonlin video pipeline
>
> >  GNonLin elements are one-output-stream-only elements. If you wish to
> > do processing on audio AND video, you will have to create:
> >  * 2 compositions (one per media type)
> >  * 2 sources (one per media type), you will have to set the 'caps'
> > properties of each of the sources to the correct type (Ex:
> > gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).
> >
> >  Then put one source in each composition and connect each of those
> > composition to the adequate downstream elements (sinks, encoders,
> > etc...).
>
> I understood your explanation, but I don't know how to program it. The
> example you gave creates a gst.Caps object, but how do I set the caps
> to the source or composition? I tried to do
>
> source.set_property('caps', gst.Caps("audio/x-raw-int"))

  Make that "audio/x-raw-int;audio/x-raw-float" so that it allows any
kind of raw audio to come out.
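To make the two-branch setup concrete, here is a hedged sketch (untested, using the pygst-era API seen elsewhere in this thread) of one composition per media type; the caps strings are the ones suggested above, and `build_branch` is an invented helper name, not gnonlin API:

```python
# Caps strings restricting each gnlfilesource to a single media type,
# as suggested in this thread.
VIDEO_CAPS = "video/x-raw-yuv;video/x-raw-rgb"
AUDIO_CAPS = "audio/x-raw-int;audio/x-raw-float"

def build_branch(gst, pipeline, location, caps_string, media_start_s, duration_s):
    """Sketch: one gnlcomposition holding one caps-restricted gnlfilesource.
    Call once with VIDEO_CAPS and once with AUDIO_CAPS, then connect each
    composition's 'pad-added' signal to its own encoder/muxer branch."""
    comp = gst.element_factory_make('gnlcomposition')
    pipeline.add(comp)
    src = gst.element_factory_make('gnlfilesource')
    src.set_property('location', location)
    src.set_property('caps', gst.Caps(caps_string))   # restrict to one stream
    src.set_property('start', 0)
    src.set_property('duration', duration_s * gst.SECOND)
    src.set_property('media-start', media_start_s * gst.SECOND)
    src.set_property('media-duration', duration_s * gst.SECOND)
    comp.add(src)
    return comp
```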

>
> but it also doesn't work, getting the following error
>
> gst.LinkError: <enum GST_PAD_LINK_NOFORMAT of type GstPadLinkReturn>
>
> when connecting the obtained pad to the audio encoder.
>
> []'s
> Rodrigo
>
>
> 2009/2/16 Edward Hervey <[hidden email]>:
> > Hi,
> >
> >  GNonLin elements are one-output-stream-only elements. If you wish to
> > do processing on audio AND video, you will have to create:
> >  * 2 compositions (one per media type)
> >  * 2 sources (one per media type), you will have to set the 'caps'
> > properties of each of the sources to the correct type (Ex:
> > gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).
> >
> >  Then put one source in each composition and connect each of those
> > composition to the adequate downstream elements (sinks, encoders,
> > etc...).
> >
> >  Why only one stream ? Because it is the lowest common denominator for
> > all types of non-linear operations. Various tests have proven the
> > overhead of duplicating the sources is minimal provided you code them
> > properly (the only duplicated processing part will be the filesource and
> > demuxer).
> >
> >  Hope this helps,
> >
> >    Edward
> >
> > On Fri, 2009-02-13 at 15:53 -0200, Rodrigo Manhães wrote:
> >> Hi,
> >>
> >> I wrote two small scripts to explain my problem on getting dynamic
> >> pads with gnonlin.
> >>
> >> The first, with no gnonlin, uses a filesrc, and prints:
> >>
> >> video/x-raw-yuv
> >> audio/x-raw-int
> >>
> >> indicating that it found the two pads.
> >> The code is below.
> >>
> >> ###
> >> import gst
> >> import gtk
> >>
> >> pipeline = gst.Pipeline()
> >>
> >> source = gst.element_factory_make('filesrc')
> >> source.set_property('location', 'creature.mpg')
> >> pipeline.add(source)
> >>
> >> decoder = gst.element_factory_make('decodebin')
> >> pipeline.add(decoder)
> >> source.link(decoder)
> >>
> >> def on_new_pad(bin, pad):
> >>     print pad.get_caps()[0].get_name()
> >>
> >> decoder.connect('pad-added', on_new_pad)
> >>
> >> pipeline.set_state(gst.STATE_PLAYING)
> >> gtk.main()
> >>
> >> ####
> >>
> >> The second, using gnonlin, prints:
> >>
> >> video/x-raw-yuv
> >>
> >> indicating that only the video stream was found. (If I connect the
> >> signal to gnlfilesource rather than gnlcomposition, the result is the
> >> same)
> >> The code is below:
> >>
> >> ###
> >> import gst
> >> import gtk
> >>
> >> pipeline = gst.Pipeline()
> >>
> >> composition = gst.element_factory_make('gnlcomposition')
> >> pipeline.add(composition)
> >>
> >> source = gst.element_factory_make('gnlfilesource')
> >> source.set_property('location', 'creature.mpg')
> >> source.set_property('start', 0)
> >> source.set_property('media-start', 0)
> >> source.set_property('duration', 10)
> >> source.set_property('media-duration', 10)
> >> composition.add(source)
> >>
> >> def on_new_pad(bin, pad):
> >>     print pad.get_caps()[0].get_name()
> >>
> >> source.connect('pad-added', on_new_pad)
> >>
> >> pipeline.set_state(gst.STATE_PLAYING)
> >> gtk.main()
> >> #####
> >>
> >> What am I doing wrong?
> >>
> >> []'s
> >> Rodrigo
> >>
> >> 2009/2/12 Rodrigo Manhães <[hidden email]>:
> >> > 2009/2/12 Edward Hervey <[hidden email]>:
> >> >>  'new-decoded-pad' is a convenience signal for new pads on decodebin.
> >> >> The normal signal for new pads being added to an element is 'pad-added'.
> >> >
> >> > Using 'pad-added' signal, the callback is called only once, for the
> >> > video stream. The pad for audio part is not added.
> >> >
> >> > []'s
> >> > Rodrigo
> >> >
> >>
> >
> >
> >
>



Re: Using gnonlin to extract time segments from a video file

Rodrigo Manhães
Hi,

2009/2/18 Edward Hervey <[hidden email]>:
>  Your code is fine, but the mpeg file is the culprit. Seeking in mpeg
> files is not 100% functional (yes, it sucks, really), and therefore
> gnonlin can't work effectivelly with those files :(

Ok! I tested the same code with ogg video files and it worked fine. Thank you!

Which format is best supported by gnonlin? I'll establish a canonical
video format for my application, so this information is very useful.

>> I understood your explanation, but I don't know how to program it. The
>> example you gave creates a gst.Caps object, but how do I set the caps
>> to the source or composition? I tried to do
>>
>> source.set_property('caps', gst.Caps("audio/x-raw-int"))
>
>  Make that "audio/x-raw-int;audio/x-raw-float" so that it allows any
> kind of raw audio to come out.

I tried to do this but I received the same GST_PAD_LINK_NOFORMAT error.

Is the way I set the caps above correct? I haven't found any
references on doing this.

[]'s
Rodrigo


>
>>
>> but it also doesn't work, getting the following error
>>
>> gst.LinkError: <enum GST_PAD_LINK_NOFORMAT of type GstPadLinkReturn>
>>
>> when connecting the obtained pad to the audio encoder.
>>
>> []'s
>> Rodrigo
>>
>>
>> 2009/2/16 Edward Hervey <[hidden email]>:
>> > Hi,
>> >
>> >  GNonLin elements are one-output-stream-only elements. If you wish to
>> > do processing on audio AND video, you will have to create:
>> >  * 2 compositions (one per media type)
>> >  * 2 sources (one per media type), you will have to set the 'caps'
>> > properties of each of the sources to the correct type (Ex:
>> > gst.Caps("video/x-raw-yuv;video/x-raw-rgb") for the video one).
>> >
>> >  Then put one source in each composition and connect each of those
>> > composition to the adequate downstream elements (sinks, encoders,
>> > etc...).
>> >
>> >  Why only one stream ? Because it is the lowest common denominator for
>> > all types of non-linear operations. Various tests have proven the
>> > overhead of duplicating the sources is minimal provided you code them
>> > properly (the only duplicated processing part will be the filesource and
>> > demuxer).
>> >
>> >  Hope this helps,
>> >
>> >    Edward
>> >
>> > On Fri, 2009-02-13 at 15:53 -0200, Rodrigo Manhães wrote:
>> >> Hi,
>> >>
>> >> I wrote two small scripts to explain my problem on getting dynamic
>> >> pads with gnonlin.
>> >>
>> >> The first, with no gnonlin, uses a filesrc, and prints:
>> >>
>> >> video/x-raw-yuv
>> >> audio/x-raw-int
>> >>
>> >> indicating that it found the two pads.
>> >> The code is below.
>> >>
>> >> ###
>> >> import gst
>> >> import gtk
>> >>
>> >> pipeline = gst.Pipeline()
>> >>
>> >> source = gst.element_factory_make('filesrc')
>> >> source.set_property('location', 'creature.mpg')
>> >> pipeline.add(source)
>> >>
>> >> decoder = gst.element_factory_make('decodebin')
>> >> pipeline.add(decoder)
>> >> source.link(decoder)
>> >>
>> >> def on_new_pad(bin, pad):
>> >>     print pad.get_caps()[0].get_name()
>> >>
>> >> decoder.connect('pad-added', on_new_pad)
>> >>
>> >> pipeline.set_state(gst.STATE_PLAYING)
>> >> gtk.main()
>> >>
>> >> ####
>> >>
>> >> The second, using gnonlin, prints:
>> >>
>> >> video/x-raw-yuv
>> >>
>> >> indicating that only the video stream was found. (If I connect the
>> >> signal to gnlfilesource rather than gnlcomposition, the result is the
>> >> same)
>> >> The code is below:
>> >>
>> >> ###
>> >> import gst
>> >> import gtk
>> >>
>> >> pipeline = gst.Pipeline()
>> >>
>> >> composition = gst.element_factory_make('gnlcomposition')
>> >> pipeline.add(composition)
>> >>
>> >> source = gst.element_factory_make('gnlfilesource')
>> >> source.set_property('location', 'creature.mpg')
>> >> source.set_property('start', 0)
>> >> source.set_property('media-start', 0)
>> >> source.set_property('duration', 10)
>> >> source.set_property('media-duration', 10)
>> >> composition.add(source)
>> >>
>> >> def on_new_pad(bin, pad):
>> >>     print pad.get_caps()[0].get_name()
>> >>
>> >> source.connect('pad-added', on_new_pad)
>> >>
>> >> pipeline.set_state(gst.STATE_PLAYING)
>> >> gtk.main()
>> >> #####
>> >>
>> >> What am I doing wrong?
>> >>
>> >> []'s
>> >> Rodrigo
>> >>
>> >> 2009/2/12 Rodrigo Manhães <[hidden email]>:
>> >> > 2009/2/12 Edward Hervey <[hidden email]>:
>> >> >>  'new-decoded-pad' is a convenience signal for new pads on decodebin.
>> >> >> The normal signal for new pads being added to an element is 'pad-added'.
>> >> >
>> >> > Using 'pad-added' signal, the callback is called only once, for the
>> >> > video stream. The pad for audio part is not added.
>> >> >
>> >> > []'s
>> >> > Rodrigo
>> >> >
>> >>
>> >
>> >
>> >
>>
>
>
>
