gstreamer appsrc works for xvimagesink but not in theoraenc ! oggmux

Bernardo Kyotoku
Asked on Stack Overflow (no answer).
Hi All,
I am trying to stream a computer-generated video using gstreamer
and icecast, but I cannot get gstreamer appsrc to work. My app works
as expected if I use xvimagesink as the sink (see the commented code
below), but once I pipe it through theoraenc it does not run.

I swapped shout2send for filesink to check whether the problem was
icecast; the result is that no data is written to the file.
Substituting appsrc with videotestsrc works as expected. Any
suggestions?
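
For reference, the working videotestsrc control experiment can be written
as a parse-launch one-liner. This is a sketch of that variant, not code
from the script below; the shout2send settings mirror the ones used there:

    import gobject
    import pygst
    pygst.require("0.10")
    import gst

    # Control pipeline: videotestsrc in place of appsrc, with the same
    # downstream chain (colorspace conversion, Theora, Ogg, Icecast).
    pipeline = gst.parse_launch(
        "videotestsrc "
        "! video/x-raw-yuv,width=320,height=240,framerate=10/1 "
        "! ffmpegcolorspace ! theoraenc ! oggmux "
        "! shout2send ip=localhost password=hackme mount=/stream")
    pipeline.set_state(gst.STATE_PLAYING)
    gobject.MainLoop().run()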

Bernardo

    #!/usr/bin/env python
    import sys, os, pygtk, gtk, gobject
    import pygst
    pygst.require("0.10")
    import gst
    import numpy as np

    class GTK_Main:
        def __init__(self):
            # Minimal GTK window with a single Start/Stop button.
            window = gtk.Window(gtk.WINDOW_TOPLEVEL)
            window.connect("destroy", gtk.main_quit, "WM destroy")
            vbox = gtk.VBox()
            window.add(vbox)
            self.button = gtk.Button("Start")
            self.button.connect("clicked", self.start_stop)
            vbox.add(self.button)
            window.show_all()

            # appsrc produces 16-bit grayscale frames, 320x240 at 10 fps.
            self.player = gst.Pipeline("player")
            source = gst.element_factory_make("appsrc", "source")
            caps = gst.Caps("video/x-raw-gray,bpp=16,endianness=1234,width=320,height=240,framerate=(fraction)10/1")
            source.set_property('caps', caps)
            source.set_property('blocksize', 320 * 240 * 2)
            source.connect('need-data', self.needdata)
            colorspace = gst.element_factory_make('ffmpegcolorspace')
            enc = gst.element_factory_make('theoraenc')
            mux = gst.element_factory_make('oggmux')
            shout = gst.element_factory_make('shout2send')
            shout.set_property("ip", "localhost")
            shout.set_property("password", "hackme")
            shout.set_property("mount", "/stream")
            # Intended caps for the encoder and display paths.
            caps = gst.Caps("video/x-raw-yuv,width=320,height=240,framerate=(fraction)10/1,format=(fourcc)I420")
            enc.caps = caps
            videosink = gst.element_factory_make('xvimagesink')
            videosink.caps = caps

            # Encoding branch (does not run):
            self.player.add(source, colorspace, enc, mux, shout)
            gst.element_link_many(source, colorspace, enc, mux, shout)
            # Display branch (works):
            #self.player.add(source, colorspace, videosink)
            #gst.element_link_many(source, colorspace, videosink)

        def start_stop(self, w):
            if self.button.get_label() == "Start":
                self.button.set_label("Stop")
                self.player.set_state(gst.STATE_PLAYING)
            else:
                self.player.set_state(gst.STATE_NULL)
                self.button.set_label("Start")

        def needdata(self, src, length):
            # Fill each buffer with random 16-bit noise and push it into appsrc.
            bytes = np.int16(np.random.rand(length / 2) * 30000).data
            src.emit('push-buffer', gst.Buffer(bytes))

    GTK_Main()
    gtk.gdk.threads_init()
    gtk.main()
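
One difference between the two branches that may matter (an assumption on
my part, not something established in this thread): xvimagesink will
happily render buffers that carry no timestamps, while theoraenc and
oggmux generally expect timestamped input. A hypothetical variant of
needdata that stamps each buffer, assuming self.frame_count is
initialized to 0 in __init__:

        def needdata(self, src, length):
            # Hypothetical: attach a timestamp and duration to each buffer;
            # 10 fps matches the framerate declared in the appsrc caps.
            data = np.int16(np.random.rand(length / 2) * 30000).data
            buf = gst.Buffer(data)
            buf.timestamp = self.frame_count * gst.SECOND / 10
            buf.duration = gst.SECOND / 10
            self.frame_count += 1
            src.emit('push-buffer', buf)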

Newbie Pipeline design question

Paul Stuart

Hi,
My system needs to preview captured video while encoding it in up to two different formats to two different files, while the user may asynchronously capture JPEG stills.

So, is it better to implement this as a single, fancy pipeline where I dynamically add and remove encoder elements, or to spin each task off into a separate pipeline and run them in parallel, like:
Preview Pipeline: Capture->Display
H264 Encode Pipeline: Capture->Enc->Mux->Filesink
JPEG Encode Pipeline: Capture->Enc->Mux->Filesink

And so on. Is it a fair question to even ask which is better?
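
For what it's worth, the single-pipeline layout usually hinges on tee with
a queue at the head of each branch, so a slow encoder cannot stall the
preview. A rough 0.10 sketch, where the concrete elements (v4l2src,
x264enc, matroskamux, and so on) are placeholders rather than anything
specified in this post:

    import gobject
    import pygst
    pygst.require("0.10")
    import gst

    # One capture source fanned out to preview, an H.264 file, and JPEG stills.
    pipeline = gst.parse_launch(
        "v4l2src ! tee name=t "
        "t. ! queue ! ffmpegcolorspace ! xvimagesink "
        "t. ! queue ! ffmpegcolorspace ! x264enc ! matroskamux "
        "! filesink location=capture.mkv "
        "t. ! queue ! ffmpegcolorspace ! jpegenc "
        "! multifilesink location=still-%05d.jpg")
    pipeline.set_state(gst.STATE_PLAYING)
    gobject.MainLoop().run()

The separate-pipelines layout avoids that fan-out and any dynamic
relinking, but each pipeline then needs its own access to the capture
device (or some inter-pipeline plumbing), so neither answer is
unconditionally better.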

Thanks,
Paul