Building a FUSE fs.


Building a FUSE fs.

Stef Bon
Hi,

I'm building an audio-format-decoding fs, and would like to use
Gstreamer for that. Hurray!

(I've found some other FUSE fs's using Gstreamer....)

Now, I haven't programmed with Gstreamer before, and I have some questions.

How can I make a program determine which audio conversions are
supported? Or are there too many?

And how do I program a "chain" from one format to another?
I've read some of the documentation, and it looks like it's a chain of
various steps. How do I know which steps,
and in what order?

Thanks in advance,

Stef
_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: Building a FUSE fs.

Sean McNamara
Hi,

On Sat, Jul 2, 2011 at 1:48 PM, Stef Bon <[hidden email]> wrote:

> How can I make a program determine which audio conversions are
> supported? Or are there too many?
>
> And how do I program a "chain" from one format to another?
> I've read some of the documentation, and it looks like it's a chain of
> various steps. How do I know which steps,
> and in what order?

It's called a pipeline. Video works in a similar way, but I'm just
going to talk about audio here.

The catch-all format of an audio conversion pipeline is something like this:

<src> ! decodebin2 ! audioconvert ! <encoder> ! <sink>

where <src> is a source element, <encoder> is an encoder element,
<sink> is a sink element, and the other two are the names of actual
element classes that you'll need.
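A minimal sketch of such a pipeline in C, built with gst_parse_launch() from
the same string syntax. This assumes a GStreamer 0.10 installation with the
relevant plugins; the file names and the vorbisenc/oggmux choice are just
illustrative, not part of the original post:

```c
/* Sketch: mp3 -> Ogg/Vorbis conversion via a parsed pipeline string.
 * "in.mp3" and "out.ogg" are placeholder names. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=in.mp3 ! decodebin2 ! audioconvert "
        "! vorbisenc ! oggmux ! filesink location=out.ogg", &err);
    if (pipeline == NULL) {
        g_printerr("parse failed: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until EOS or an error is posted on the bus. */
    GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    GstMessage *msg =
        gst_bus_poll(bus, GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
    gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

gst_parse_launch() handles the delayed linking of decodebin2's dynamic pads
for you, the same way gst-launch does.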

The formats that can be decoded and/or demuxed by decodebin2 depend
on which plugins your gstreamer installation provides.

The formats that can be encoded and/or muxed depend on which encoder
element you choose.

Keep in mind that encodebin, a fairly new element in gst-plugins-base,
can basically encapsulate the entire "guts" of the pipeline except for
the source and sink as stated above:

<src> ! encodebin ! <sink>

All you have to do then is specify the desired profile to the
encodebin by setting the profile property.
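A sketch of what building and setting such a profile might look like,
assuming a gst-plugins-base new enough to ship encodebin and the
encoding-profile library; the profile name and the Ogg/Vorbis caps here are
illustrative, not taken from the original post:

```c
/* Sketch: hand-build an Ogg/Vorbis encoding profile and hand it to
 * encodebin via its "profile" property. */
#include <gst/gst.h>
#include <gst/pbutils/encoding-profile.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Ogg container wrapping one Vorbis audio stream. */
    GstEncodingContainerProfile *profile =
        gst_encoding_container_profile_new("ogg-vorbis", "Ogg/Vorbis audio",
            gst_caps_new_simple("application/ogg", NULL), NULL);
    gst_encoding_container_profile_add_profile(profile,
        (GstEncodingProfile *) gst_encoding_audio_profile_new(
            gst_caps_new_simple("audio/x-vorbis", NULL), NULL, NULL, 0));

    GstElement *encodebin = gst_element_factory_make("encodebin", NULL);

    /* encodebin creates its request pads to match the profile, so set
     * the profile before linking anything to it. */
    g_object_set(encodebin, "profile", profile, NULL);

    /* ... link <src> ! decodebin2 ! audioconvert ! encodebin ! <sink>
     * here, then set the pipeline to PLAYING as usual ... */

    gst_object_unref(encodebin);
    return 0;
}
```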

There's an example of getting all the existing profiles on the system
in the docs: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-encoding-profile.html#GstEncodingProfile

This would make it easy to discover all the possible output formats in
a user-friendly way, rather than trying to determine all the formats
manually from the caps supported by encoders on the system.

And it's nice that encodebin supports automatic passthrough if your
input data is already in the output format; so for example if you
wanted to "convert" from MP3 to MP3, it would just pass it through
instead of re-encoding (causing loss of data).

The only downside of encodebin is that it's relatively new, and thus has
seen less thorough testing in real-world applications. But I think it
should be, at least, interesting to try it out and see if it works for
you, and if not, report a bug.

Last thing I would suggest is to download DevHelp (if you're
developing on GNU/Linux, BSD, or another platform where Gnome is
supported) and grab the gstreamer docs. It's nice to be able to access
the gstreamer docs from within devhelp, and many of the element
classes and base classes have example C code in the docs.

If you're confused by the notation I used above, with an exclamation
point between elements, that's what's called "gst-launch syntax",
because you can pass that kind of string to the console command
gst-launch-0.10. For example:

$ gst-launch-0.10 audiotestsrc ! volume volume=0.10 ! audioconvert ! autoaudiosink

The pipeline must always contain at least one source (abbreviated src)
and at least one sink. A source is where data originates from outside
the gstreamer environment (a file, the network, etc), and a sink is
where data exits the gstreamer environment (a file, the network, sound
card, etc).

All of that is covered in the app developer docs:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html

HTH,

Sean


Re: Building a FUSE fs.

Stef Bon
Hi,

thanks a lot for the detailed information, and for the quick response!

I will report any relevant progress back to the list.

Stef

Re: Building a FUSE fs.

Stef Bon
Well, I have some more questions.

In gstfs (see http://bobcopeland.com/gstfs/), an Ogg file is
transcoded to mp3 on the fly. A pipeline is given on the command line,
like:

pipeline="filesrc name=\"_source\" ! oggdemux ! vorbisdec !
audioconvert ! lame bitrate=160 ! fdsink name=\"_dest\" sync=false"

In the program, some additional parameters are set:

    thread_params.fd = pipefds[0];
    thread_params.add_data_cb = add_data_cb;
    thread_params.user_data = user_data;

    pthread_create(&thread, NULL, send_pipe, (void *) &thread_params);

    g_object_set(G_OBJECT(source), "location", filename, NULL);
    g_object_set(G_OBJECT(dest), "fd", pipefds[1], NULL);

    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_signal_watch(bus);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GstMessage *message = gst_bus_poll(bus, GST_MESSAGE_EOS |
        GST_MESSAGE_ERROR, -1);
    gst_message_unref(message);

    // close the write side so the reader sees EOF and the pipe terminates
    close(pipefds[1]);
    pthread_join(thread, thread_status);


Here very important parameters are set: in the first place an fd, the
read side of a pipe, and a callback, which is performed when data is
received.

If I read it correctly, a thread is created to read the data, which
receives data sent over the pipe by another thread, the gstreamer
thread. This gstreamer thread sends the data via the fd, set with

g_object_set(G_OBJECT(dest), "fd", pipefds[1], NULL);

When the receiving thread reads data from the pipe, it calls the read
callback, which writes the data to a buffer in memory (and grows this
buffer on the fly...).

Am I correct?
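That pattern can be sketched without any GStreamer at all. In the stand-in
below, a plain write() plays the role of fdsink writing into pipefds[1],
while a reader thread grows a buffer the way the callback does; all names
here are hypothetical, not from gstfs:

```c
/* Minimal sketch of the reader-thread pattern: one thread writes into a
 * pipe, another drains it into a growing buffer, and closing the write
 * side makes read() return 0 so the reader stops. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct sink_buf {
    int fd;        /* read side of the pipe */
    char *data;    /* accumulated output, grown on the fly */
    size_t len;
};

static void *drain_pipe(void *arg)
{
    struct sink_buf *b = arg;
    char chunk[4096];
    ssize_t n;

    /* read() returns 0 once every write end is closed (EOF). */
    while ((n = read(b->fd, chunk, sizeof chunk)) > 0) {
        b->data = realloc(b->data, b->len + n);
        memcpy(b->data + b->len, chunk, n);
        b->len += n;
    }
    return NULL;
}

/* Pushes `src` through a pipe and returns the copy drained by the thread. */
static struct sink_buf transcode_demo(const char *src)
{
    int fds[2];
    pthread_t thread;
    struct sink_buf buf = { 0 };

    pipe(fds);
    buf.fd = fds[0];
    pthread_create(&thread, NULL, drain_pipe, &buf);

    write(fds[1], src, strlen(src));  /* stand-in for fdsink's output */
    close(fds[1]);                    /* EOF: lets the reader finish */
    pthread_join(thread, NULL);
    close(fds[0]);
    return buf;
}
```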

How can I set the fd? g_object_set() does not look like a gst call, more like a glib call.

Stef


(My intention is to not use a pipe per file, but one write fd for all
the conversions, with the read side watched by an epoll instance. Is
it possible to add "user data" to the data sent by the decoding
process?)
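On the "user data" question: epoll itself can attach user data to each
registered fd (epoll_event.data.ptr), so the watcher can learn which
conversion a readable pipe belongs to, even though the bytes in the pipe
carry no tag. A minimal Linux-only sketch, with hypothetical names:

```c
/* Sketch: tag each conversion's pipe with a pointer to its own struct,
 * and get that pointer back from epoll_wait() when data arrives. */
#include <string.h>
#include <sys/epoll.h>
#include <unistd.h>

struct conversion {
    int read_fd;               /* read side of this conversion's pipe */
    const char *source_name;   /* e.g. the file being transcoded */
};

/* Registers one conversion's pipe, waits until it is readable, and
 * returns the conversion that produced data (NULL on error). */
static struct conversion *wait_for_data(int epfd, struct conversion *c)
{
    struct epoll_event ev = { 0 }, out;

    ev.events = EPOLLIN;
    ev.data.ptr = c;                 /* the "user data": our struct */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, c->read_fd, &ev) < 0)
        return NULL;
    if (epoll_wait(epfd, &out, 1, -1) != 1)
        return NULL;
    return out.data.ptr;             /* comes back with the event */
}
```

Note this tags the fd, not the data itself: if several conversions must
share one write fd, the decoder would have to frame its output with its
own headers, which gstreamer's fdsink will not do for you.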