Parse pipeline to Python


Parse pipeline to Python

Ormi
Hi guys!

I have made a working pipeline for playing video with audio. It looks like this:

# Video with audio
gst-launch-1.0 filesrc location=speaker.MTS ! decodebin name=demuxer demuxer. ! \
queue ! audioconvert ! audioresample ! autoaudiosink demuxer. ! \
queue ! videoscale ! videoconvert ! videobox ! videomixer ! xvimagesink

The video file contains audio.
I'm really struggling to translate this pipeline into Python.
With my best efforts so far I have the following
(source: https://brettviren.github.io/pygst-tutorial-org/pygst-tutorial.html):

#!/usr/bin/env python

import os
import gi
gi.require_version('Gst', '1.0')
gi.require_version('Gtk', '3.0')
from gi.repository import Gst, GObject, Gtk

class GTK_Main(object):

    def __init__(self):
        window = Gtk.Window(Gtk.WindowType.TOPLEVEL)
        window.set_title("Mpeg2-Player")
        window.set_default_size(500, 400)
        window.connect("destroy", Gtk.main_quit, "WM destroy")
        vbox = Gtk.VBox()
        window.add(vbox)
        hbox = Gtk.HBox()
        vbox.pack_start(hbox, False, False, 0)
        self.entry = Gtk.Entry()
        hbox.add(self.entry)
        self.button = Gtk.Button("Start")
        hbox.pack_start(self.button, False, False, 0)
        self.button.connect("clicked", self.start_stop)
        self.movie_window = Gtk.DrawingArea()
        vbox.add(self.movie_window)
        window.show_all()

        self.player = Gst.Pipeline.new("player")
        source = Gst.ElementFactory.make("filesrc", "file-source")
        demuxer = Gst.ElementFactory.make("mpegpsdemux", "demuxer")
        demuxer.connect("pad-added", self.demuxer_callback)
        self.video_decoder = Gst.ElementFactory.make("mpeg2dec", "video-decoder")
        self.audio_decoder = Gst.ElementFactory.make("mad", "audio-decoder")
        audioconv = Gst.ElementFactory.make("audioconvert", "converter")
        audiosink = Gst.ElementFactory.make("autoaudiosink", "audio-output")
        videosink = Gst.ElementFactory.make("autovideosink", "video-output")
        self.queuea = Gst.ElementFactory.make("queue", "queuea")
        self.queuev = Gst.ElementFactory.make("queue", "queuev")
        colorspace = Gst.ElementFactory.make("videoconvert", "colorspace")

        self.player.add(source) 
        self.player.add(demuxer) 
        self.player.add(self.video_decoder) 
        self.player.add(self.audio_decoder) 
        self.player.add(audioconv) 
        self.player.add(audiosink) 
        self.player.add(videosink) 
        self.player.add(self.queuea) 
        self.player.add(self.queuev) 
        self.player.add(colorspace)

        source.link(demuxer)

        self.queuev.link(self.video_decoder)
        self.video_decoder.link(colorspace)
        colorspace.link(videosink)

        self.queuea.link(self.audio_decoder)
        self.audio_decoder.link(audioconv)
        audioconv.link(audiosink)

        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def start_stop(self, w):
        if self.button.get_label() == "Start":
            filepath = self.entry.get_text().strip()
            if os.path.isfile(filepath):
                filepath = os.path.realpath(filepath)
                self.button.set_label("Stop")
                self.player.get_by_name("file-source").set_property("location", filepath)
                self.player.set_state(Gst.State.PLAYING)
            else:
                self.player.set_state(Gst.State.NULL)
                self.button.set_label("Start")

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.EOS:
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")
        elif t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print ("Error: %s" % err, debug)
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")

    def on_sync_message(self, bus, message):
        if message.get_structure().get_name() == 'prepare-window-handle':
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            xid = self.movie_window.get_property('window').get_xid()
            imagesink.set_window_handle(xid)

    def demuxer_callback(self, demuxer, pad):
        if pad.get_property("template").name_template == "video_%02x":
            qv_pad = self.queuev.get_static_pad("sink")
            pad.link(qv_pad)
        elif pad.get_property("template").name_template == "audio_%02x":
            qa_pad = self.queuea.get_static_pad("sink")
            pad.link(qa_pad)


Gst.init(None)
GTK_Main()
GObject.threads_init()
Gtk.main()

But I hit an error that I can't get past:
Error: gst-stream-error-quark: Internal data stream error. (1) gstmpegdemux.c(2945): gst_ps_demux_loop (): /GstPipeline:player/GstMpegPSDemux:demuxer:
streaming stopped, reason not-negotiated (-4)

Does anybody have an idea what I can improve, or do you have some other (more up-to-date) tutorials?
My further idea is to use this for making video streams from Python conferences in the Czech Republic: composing two video streams onto one screen, with sound from one of them and an info picture next to the videos.

Thanks a lot
Ormi

Re: Parse pipeline to Python

Nicolas Dufresne-5
On Tuesday, 28 March 2017 at 06:17 -0700, Ormi wrote:

> I have made a working pipeline for playing video with audio. It looks
> like.
>
> The video file is with audio
> I have real struggle to parse this to Python.
> So far with best effors I get.
> /(Source:
> https://brettviren.github.io/pygst-tutorial-org/pygst-tutorial.html)/
>
> But I have ultimate error, where I can't move further
Can you explain what errors you are having? Can you describe the pipeline and
share the part of the code that has problems? It's hard to help you from your
report, since you just point to a large tutorial with a lot of code.

>
> Does anybody have any idea what I can improve or do you have some other
> (more update) tutorials?
> My further idea is use this for making video streams from Python conferences
> in Czech Republic, I mean parse to one screen 2 video streams, with sound
> from one video and some picture next to videos with info.
_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel


Re: Parse pipeline to Python

Ormi
That's my whole output from terminal.

test4.py:6: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
  from gi.repository import Gst, GObject, Gtk
Error: gst-stream-error-quark: decoding error (7) gstmpeg2dec.c(1134): gst_mpeg2dec_handle_frame (): /GstPipeline:player/GstMpeg2dec:video-decoder:
Reached libmpeg2 invalid state

At the beginning I sent my pipeline prototype, which works in the terminal:

gst-launch-1.0 filesrc location=speaker.MTS ! decodebin name=demuxer demuxer. ! \
queue ! audioconvert ! audioresample ! autoaudiosink demuxer. ! \
queue ! videoscale ! videoconvert ! videobox ! videomixer ! xvimagesink

And I showed you my Python code, which is the best implementation of my pipeline I could get, because implementing the working pipeline prototype exactly in Python failed. So I used this tutorial.

If anybody can show me a little example of how to implement my pipeline in Python, I will be grateful.
I have a little mess with all the pads, sources, sinks and the separate video/audio processing, so I am not sure if I have the right approach.

Re: Parse pipeline to Python

Nicolas Dufresne-5
Did you forget to attach something? I really don't see anything of what you mention.

On 28 March 2017 at 6:14 PM, "Ormi" <[hidden email]> wrote:
> That's my whole output from the terminal.
>
> At the beginning I sent my pipeline prototype, which works in the terminal.
>
> And I showed you my Python code, which is the best implementation of my
> pipeline I could get, because implementing the working pipeline prototype
> exactly in Python failed. So I used this tutorial.
>
> If anybody can show me a little example of how to implement my pipeline in
> Python, I will be grateful.
> I have a little mess with all the pads, sources, sinks and the separate
> video/audio processing, so I am not sure if I have the right approach.





Re: Parse pipeline to Python

Ormi
I think there is a problem with raw-text formatting in this forum, or something/someone is blocking my raw-text code samples. Once again:

(Same terminal output, pipeline, and Python code as in my previous posts.)

Thanks in advance!
Ormi

Re: Parse pipeline to Python

Arjen Veenhuizen
The biggest difference between your command-line pipeline and your code is that in the former you rely on decodebin to do all the hard work of selecting the correct demuxer and decoders, while in the latter you try to do it yourself. It is quite possible that you selected the incorrect elements.

Follow these steps [1] to create a DOT file of your command-line pipeline and convert the DOT file to a PNG. This PNG gives you insight into what decodebin is actually doing under the hood. Check it to make sure you are using the correct demuxer and decoders.

[1] https://developer.ridgerun.com/wiki/index.php/How_to_generate_a_Gstreamer_pipeline_diagram_(graph)

Re: Parse pipeline to Python

Ormi
Thanks for your reply, Arjen!
I was looking for a way to make graphs from pipelines. Now I know how.

I tried to implement the pipeline as you can see it, the easy way, but without success.
So I tried some tutorials, which led me to this code. I will have to study how to do the hard work that decodebin does.

...or:
Is there a way to use GStreamer in Python as easily as a GStreamer pipeline in bash? Or exactly like that, leaving all the hard work to decodebin?

Thanks
Ormi.

Re: Parse pipeline to Python

Arjen Veenhuizen
For simple pipelines you can resort to using Gst.parse_launch():

pipeline = "filesrc location=speaker.MTS name=source ! decodebin name=demuxer demuxer. ! queue ! audioconvert ! audioresample ! autoaudiosink demuxer. ! queue ! videoscale ! videoconvert ! videobox ! videomixer ! xvimagesink"
self.pipeline = Gst.parse_launch(pipeline)

This way you don't have to go through the trouble of creating the pipeline yourself. You can access individual elements by their name, e.g.

filesrc = self.pipeline.get_by_name("source")

You can also get rid of your parser (which turns out to be missing from your code in the first place!) and the decoder elements after the demuxer, and replace mpegpsdemux with decodebin. The decodebin element also has a pad-added signal, which will be fired for each elementary stream (audio/video). Make sure to check the pad names in your callback, because they will likely differ from the demuxer's.