Really bad code to critique


Really bad code to critique

Dave Mateer
I'm brand new to gstreamer, but am pretty sure it is going to be the right library for an application we are working on. I am very impressed with the feature set that the library implements. I have a proof-of-concept application that reads a video file and then plays the audio on the computer and displays the video in a Qt widget. I was wondering if some of the more experienced devs would mind offering a critique--I'm sure I'm doing a bunch of things The Wrong Way, but there are a lot of new things for me here: the gstreamer API itself, C (I'm more of a C++ guy), the GObject system, etc.

Among other things, the fact that I'm testing for audio vs. video pad by comparing a string prefix seems fragile and smelly. That can't be right. Also, it seems like I'm leaking memory in the call to gst_bin_get_by_name because the docs say that transfers ownership (not just a reference count?). Not sure about that one. I'm sure there are many other things as well.

I appreciate any help anyone could offer. It's my first attempt, so I know it's bad, so be honest! I stripped out some of the more Qt-specific stuff.

static void on_pad_added(GstElement *element,
                         GstPad *new_pad,
                         gpointer user_data) {
  GstCaps *caps = gst_pad_get_caps_reffed(new_pad);
  gchar *name = gst_caps_to_string(caps);
  GstBin *pipeline = (GstBin*)user_data;

  if (g_str_has_prefix(name, "audio")) {
    g_print("audio\n");

    // Leaking memory?? gst_bin_get_by_name transfers ownership?
    GstElement *audio_sink = gst_bin_get_by_name(pipeline, "audio-sink");
    if (!audio_sink) {
      throw std::runtime_error("Could not extract audio sink from pipeline");
    }

    GstPad *sink_pad = gst_element_get_static_pad(audio_sink, "sink");
    if (!sink_pad) {
      throw std::runtime_error("Could not get sink pad from audio element.");
    }

    if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
      throw std::runtime_error("Failed to link audio src and sink");
    }

    gst_object_unref(sink_pad);
  } else {
    g_print("video\n");

    // Leaking memory?? gst_bin_get_by_name transfers ownership?
    GstElement *video_sink = gst_bin_get_by_name(pipeline, "video-sink");
    if (!video_sink) {
      throw std::runtime_error("Could not extract video sink from pipeline");
    }

    GstPad *sink_pad = gst_element_get_static_pad(video_sink, "sink");
    if (!sink_pad) {
      throw std::runtime_error("Could not get sink pad from video element.");
    }

    if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
      throw std::runtime_error("Failed to link video src and sink");
    }

    gst_object_unref(sink_pad);
  }

  g_free(name);
  gst_caps_unref(caps);
}

MainWindow::MainWindow(QWidget *parent)
  : QMainWindow(parent),
    ui(new Ui::MainWindow) {
  ui->setupUi(this);

  // Create the top-level pipeline.
  pipeline_ = gst_pipeline_new(NULL);
  if (!pipeline_) {
    throw std::runtime_error("Could not create pipeline");
  }

  // Create the decode bin for our video source.
  GstElement *src = gst_element_factory_make("uridecodebin", NULL);
  if (!src) {
    throw std::runtime_error("Could not create uridecodebin");
  }
  g_object_set(src, "uri", "file:///path/to/media.ogg", NULL);

  // Add the decode bin to the pipeline.
  if (!gst_bin_add(GST_BIN(pipeline_), src)) {
    throw std::runtime_error("Could not add uridecodebin to pipeline");
  }

  // Create the video sink. This will be linked in the pad-added signal handler.
  GstElement *video_sink = gst_element_factory_make("xvimagesink",
                                                    "video-sink");
  if (!video_sink) {
    throw std::runtime_error("Could not create xvimagesink");
  }
  if (!gst_bin_add(GST_BIN(pipeline_), video_sink)) {
    throw std::runtime_error("Could not add video sink to pipeline");
  }
  WId id = ui->video_widget->winId();
  gst_x_overlay_set_window_handle(GST_X_OVERLAY(video_sink), id);
  qApp->syncX();

  // Create the audio sink. This will be linked in the pad-added signal handler.
  GstElement *audio_sink = gst_element_factory_make("autoaudiosink",
                                                    "audio-sink");
  if (!audio_sink) {
    throw std::runtime_error("Could not create autoaudiosink");
  }
  if (!gst_bin_add(GST_BIN(pipeline_), audio_sink)) {
    throw std::runtime_error("Could not add audio sink to pipeline");
  }

  // Register our interest in the pad-added signal so we can connect our sinks.
  g_signal_connect(src, "pad-added", G_CALLBACK(on_pad_added), pipeline_);

  // Start the playback.
  gst_element_set_state(pipeline_, GST_STATE_PLAYING);
}

MainWindow::~MainWindow() {
  gst_element_set_state (pipeline_, GST_STATE_NULL);
  gst_object_unref(pipeline_);
  delete ui;
}


_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: Really bad code to critique

Luis de Bethencourt
On 1 May 2012 21:55, Dave Mateer <[hidden email]> wrote:

> [original message and code snipped]

I would suggest getting the GstBus of the pipeline, and then reading the GstMessages you receive in it. It will make things easier.

http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstBus.html
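A minimal sketch of that suggestion (0.10-era API, not compiled here; `on_bus_message` is a name I made up, and `pipeline_` is assumed to be the member from Dave's MainWindow):

```c
/* Sketch: attach a watch to the pipeline's bus so errors and EOS arrive
 * as GstMessages in the application's GLib/Qt main loop. */
static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data) {
  switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
      GError *err = NULL;
      gchar *dbg = NULL;
      gst_message_parse_error(msg, &err, &dbg);
      g_printerr("Error: %s\n", err->message);
      g_error_free(err);
      g_free(dbg);
      break;
    }
    case GST_MESSAGE_EOS:
      g_print("End of stream\n");
      break;
    default:
      break;
  }
  return TRUE;  /* keep the watch installed */
}

/* In MainWindow's constructor, after creating pipeline_: */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline_));
gst_bus_add_watch(bus, on_bus_message, NULL);
gst_object_unref(bus);  /* the watch keeps its own reference to the bus */
```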

Enjoy,
Luis

Re: Really bad code to critique

hcpwll
In reply to this post by Dave Mateer
About the GObject system: why GObject and not C++? It was designed for cross-language programming.
Reading "Inside the C++ Object Model" will help you understand how object-oriented languages work under the hood.
For the rest, set up a KDE environment, step through the GStreamer samples in a debugger, and look at the call stack.

At 2012-05-02 04:55:59,"Dave Mateer" <[hidden email]> wrote:
> [original message and code snipped]




Re: Really bad code to critique

Tim-Philipp Müller-2
In reply to this post by Dave Mateer
On Tue, 2012-05-01 at 16:55 -0400, Dave Mateer wrote:

Hi Dave,

> Among other things, the fact that I'm testing for audio vs. video pad
> by comparing a string prefix seems fragile and smelly. That can't be
> right.

Well, in this context ([uri]decodebin[2]) it's quite okay. You know the
output is going to be either "raw" audio or "raw" video.

If you want to do it "more" correctly, you could check for "audio/x-raw"
or "video/x-raw" prefixes, or you can check for the full media types
explicitly, but then you need at least four checks: audio/x-raw-int or
audio/x-raw-float for audio (in 0.10; in 0.11/1.0 this is unified as
audio/x-raw) and video/x-raw-rgb or video/x-raw-yuv or possibly even
video/x-raw-gray for video (in 0.10; in 0.11/1.0 this is unified as
video/x-raw).

Checking for media type strings is quite kosher. It's how it's supposed
to be done in this case.

> Also, it seems like I'm leaking memory in the call to
> gst_bin_get_by_name because the docs say that transfers ownership (not
> just a reference count?). Not sure about that one. I'm sure there are
> many other things as well.

My docs say "Caller owns returned reference", which isn't quite the same
as "transfers ownership" IMHO, but it could be worded better I guess.

The refcount of the element/bin/object returned will be increased, and
you are responsible for calling gst_object_unref() on it when you are
done with it. This is how most element/bin/object related API works in
GStreamer, for thread-safety reasons.


> static void on_pad_added(GstElement *element,
>                          GstPad *new_pad,
>                          gpointer user_data) {
>   GstCaps *caps = gst_pad_get_caps_reffed(new_pad);
>   gchar *name = gst_caps_to_string(caps);

This is fine, and easy to do. Something like this would be using the API
more explicitly:

GstStructure *s = gst_caps_get_structure (caps, 0);
const gchar *media_type = gst_structure_get_name (s);

or:

if (gst_structure_has_name (s, "audio/x-raw-int")
  || gst_structure_has_name (s, "audio/x-raw-float")) {
  g_print ("Audio!\n");
} else if (...) {
  ..
} else {
}

>   GstBin *pipeline = (GstBin*)user_data;
>
>   if (g_str_has_prefix(name, "audio")) {
>     g_print("audio\n");
>
>     // Leaking memory?? gst_bin_get_by_name transfers ownership?
>     GstElement *audio_sink = gst_bin_get_by_name(pipeline, "audio-sink");
>     if (!audio_sink) {
>       throw std::runtime_error("Could not extract audio sink from pipeline");
>     }

Yes, you'll need to gst_object_unref (audio_sink) later;

>     GstPad *sink_pad = gst_element_get_static_pad(audio_sink, "sink");
>     if (!sink_pad) {
>       throw std::runtime_error("Could not get sink pad from audio element.");
>     }
>     if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
>       throw std::runtime_error("Failed to link audio src and sink");
>     }
>
>     gst_object_unref(sink_pad);
>   } else {
>     g_print("video\n");
>
>     // Leaking memory?? gst_bin_get_by_name transfers ownership?
>     GstElement *video_sink = gst_bin_get_by_name(pipeline, "video-sink");
>     if (!video_sink) {
>       throw std::runtime_error("Could not extract video sink from pipeline");
>     }

unref video_sink later too.

>     GstPad *sink_pad = gst_element_get_static_pad(video_sink, "sink");
>     if (!sink_pad) {
>       throw std::runtime_error("Could not get sink pad from video element.");
>     }
>
>     if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
>       throw std::runtime_error("Failed to link video src and sink");
>     }
>
>     gst_object_unref(sink_pad);
>   }
>
>   g_free(name);
>   gst_caps_unref(caps);
> }
>
> MainWindow::MainWindow(QWidget *parent)
>   : QMainWindow(parent),
>     ui(new Ui::MainWindow) {
>   ui->setupUi(this);
>
>   // Create the top-level pipeline.
>   pipeline_ = gst_pipeline_new(NULL);
>   if (!pipeline_) {
>     throw std::runtime_error("Could not create pipeline");
>   }
>
>   // Create the decode bin for our video source.
>   GstElement *src = gst_element_factory_make("uridecodebin", NULL);
>   if (!src) {
>     throw std::runtime_error("Could not create uridecodebin");
>   }
>   g_object_set(src, "uri", "file:///path/to/media.ogg", NULL);
>
>   // Add the decode bin to the pipeline.
>   if (!gst_bin_add(GST_BIN(pipeline_), src)) {
>     throw std::runtime_error("Could not add uridecodebin to pipeline");
>   }
>
>   // Create the video sink. This will be linked in the pad-added signal handler.
>   GstElement *video_sink = gst_element_factory_make("xvimagesink",
>                                                     "video-sink");
>   if (!video_sink) {
>     throw std::runtime_error("Could not create xvimagesink");
>   }
>   if (!gst_bin_add(GST_BIN(pipeline_), video_sink)) {
>     throw std::runtime_error("Could not add video sink to pipeline");
>   }
>   WId id = ui->video_widget->winId();
>   gst_x_overlay_set_window_handle(GST_X_OVERLAY(video_sink), id);
>   qApp->syncX();

Out of curiosity, why is the syncX() call required? Just for comparison,
in Gtk, you'd need to realize the video widget / application window
before being able to get the xid from it, and you would also need to
tell Gtk before that that you want it to create a "native" window for
the video widget (instead of drawing on the parent widget's). I guess
that's not required here. If you do run into problems, there are some Qt
overlay examples in gst-plugins-base/tests/examples/.

>   // Create the audio sink. This will be linked in the pad-added signal handler.
>   GstElement *audio_sink = gst_element_factory_make("autoaudiosink",
>                                                     "audio-sink");
>   if (!audio_sink) {
>     throw std::runtime_error("Could not create autoaudiosink");
>   }
>   if (!gst_bin_add(GST_BIN(pipeline_), audio_sink)) {
>     throw std::runtime_error("Could not add audio sink to pipeline");
>   }
>
>   // Register our interest in the pad-added signal so we can connect our sinks.
>   g_signal_connect(src, "pad-added", G_CALLBACK(on_pad_added), pipeline_);
>
>   // Start the playback.
>   gst_element_set_state(pipeline_, GST_STATE_PLAYING);
> }
>
> MainWindow::~MainWindow() {
>   gst_element_set_state (pipeline_, GST_STATE_NULL);
>   gst_object_unref(pipeline_);
>   delete ui;
> }


Cheers
 -Tim

_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

RE: Really bad code to critique

Dave Mateer
Thanks, Tim; VERY helpful.

About the syncX(), I'm not quite sure at this point *why* it is necessary. But if I don't include it, the video only shows up on the widget about 1/10 of the time. With that call, it always works. Again, a lot of this is a black box to me at this point ... I'm trying to get up to speed as quickly as I can. Your comments were very helpful.


