One Stream API to Rule them all - Could GStreamer be it ?


One Stream API to Rule them all - Could GStreamer be it ?

Maxim Mueller (maxim_m@gmx.net)
I am writing to this list because it is the closest fit I am aware of. Please be so kind as to direct me to the right forums, mailing lists, etc., so I can expose and discuss this idea with the right people.

When thinking about the diverse, uncoordinated and code-duplicating mess that is all the different encoding/compression/processing libraries and programs out there, I see the need for a solution that brings order to the chaos once and for all: one that goes further than what GStreamer's scope currently seems to be, and that possibly extends the way we think about operating system services.

It seems the following use cases have enough in common to be handled through a unified, system-wide API. Please correct me if I am wrong.

(ordered by decreasing existing use of GStreamer)

1) A media file should be viewed on the screen.

2) A media stream should be captured to a file.

3) A 7-Zip/LZMA archive should be created from a bunch of file handles.

4) A file system needs to handle transparent compression and/or encryption of inodes, or whatever low-level objects it uses.

5) A hash needs to be generated from a file stored on a different server, accessed through name-your-favourite-esoteric-protocol-here.

6) SETI@home needs to crunch some data with its Fast Fourier Transform algorithm to find our future alien overlords, and the user has a custom DSP specifically designed to speed up this process. Using it would be easy - if he could just install the right FFT plugin.

7) A user of Photoshop, nay GIMP/Krita, would like to use the SVG filter from the WebKit implementation and stream the picture, as a change-as-you-work stream, to a guy on the other end of the world who does some video processing with it.

8) Complex: high-level objects work the same way - downloading and installing an RPM.

Since many file mirrors (the respectable ones, anyway) publish a hash alongside the file, a la

SomeFile.rpm
SomeFile.MD5

handing the HTTP or FTP location and the "install" command to the respective HTTP source and RPM sink (or both to a handle-archive bin?), the autoplugger could go so far as to auto-insert any installed
- Gnutella/BitTorrent/whatever,
- hashing,
- SSL/TLS,
- security handling,
- decompression,
- caching,
- DB management,
- and optional ask-the-user-for-input plugins
into the pipeline, bringing transparent P2P swarming to even the simplest downloading application.
(The HTTP source could dynamically create an a-hash-is-available pad to facilitate that, or the bin could open another connection. Failing that, it instantiates a plugin listening for user- or system-supplied hashes. Such a plugin should probably be preloaded by default, and it could maybe create a D-Bus service for runtime/pre-run configuration of pipelines.) A rough sketch of such a pipeline follows below.
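
To make the idea concrete, here is a rough, purely hypothetical sketch in C against the current 0.10-era GStreamer API: souphttpsrc is a real element, but "hashcheck", "rpmsink" and their properties are made-up stand-ins for the plugins imagined above, and the mirror URLs are placeholders.

/* Hypothetical sketch only (GStreamer 0.10-era C API). souphttpsrc exists;
 * "hashcheck" and "rpmsink" and their properties are invented stand-ins
 * for the plugins imagined above, as are the URLs. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* The application names only the source location and the final action;
   * the imagined autoplugger would insert hashing, decompression, caching
   * and P2P elements between the two on its own. */
  pipeline = gst_parse_launch (
      "souphttpsrc location=http://mirror.example.org/SomeFile.rpm ! "
      "hashcheck digest-uri=http://mirror.example.org/SomeFile.MD5 ! "
      "rpmsink action=install",
      &error);
  if (pipeline == NULL) {
    g_printerr ("could not build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... wait on the bus for EOS or an error, then shut down ... */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}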

-----------------------------------------------

All of these involve some form of more or less complex algorithm being run on a buffer of data, be it a static buffer of known type or one that changes over time, which necessitates renegotiation of the processing chain.

Automatic threading, scheduling and detection/plugging services, together with a generic interface for processing data, would benefit the programming ecosystem greatly, imho.
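
As a minimal illustration of that common pattern (again 0.10-era C API): fakesrc just generates dummy buffers, and the stock identity element hands each buffer to a callback, which is where the hashing/compression/FFT work of the use cases above would live.

#include <gst/gst.h>

/* called once per buffer flowing through the pipeline */
static void
handoff_cb (GstElement *identity, GstBuffer *buf, gpointer user_data)
{
  /* run whatever algorithm you like on the raw bytes of 'buf' here */
  g_print ("processing a buffer of %u bytes\n", GST_BUFFER_SIZE (buf));
}

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *work;
  GstBus *bus;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* sizetype=2 means fixed-size buffers of sizemax bytes */
  pipeline = gst_parse_launch (
      "fakesrc num-buffers=10 sizetype=2 sizemax=4096 ! "
      "identity name=work signal-handoffs=true ! fakesink",
      &error);
  if (pipeline == NULL) {
    g_printerr ("could not build pipeline: %s\n", error->message);
    return 1;
  }

  work = gst_bin_get_by_name (GST_BIN (pipeline), "work");
  g_signal_connect (work, "handoff", G_CALLBACK (handoff_cb), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* wait until the ten buffers have been pushed and EOS arrives */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_object_unref (work);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}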

-----------------------------------------------

Benefits of one globally used stream processing API:
- greatly reduced development effort, bug rate and maintenance burden

- a reduced learning curve when adding functionality to software; furthermore, a more structured way of thinking about processing data is bound to produce cleaner code

- systems theory: standardizing component interfaces tends to lead to the discovery of novel ways to combine them, which leads to innovative applications

- improved functionality of existing applications simply by installing a new plugin (to a certain degree)

- users decide which implementation they want to use; all in all, it is their decision to make.
No longer having to install a multitude of protocol (HTTP etc.), virtual file system (KIO slaves, GIO, PHP5's stream API ...), encryption (OpenSSL/GnuTLS/PGP ...), compression and other implementations just because the respective application developers had their favourites would be a great step forward in the fight for truly free software and in putting users back in control of their own systems.

- by moving up one abstraction layer, only one wrapper is needed to support other programming languages, in contrast to per-library wrappers.
This last point essentially realizes most of the promise of .NET, but in a somewhat less invasive fashion: many of the different installed (class) library implementations of similar functionality (from .NET to Z-machine) could be unified into one, mostly user-chosen set of plugins. The registry sketch below shows why a single wrapper can cover them all.
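
A small sketch of why one wrapper can be enough, at least in GStreamer's case: the application (or a language binding) only ever calls the generic core API, and the concrete implementation behind a name is resolved from the plugin registry at run time ("lame" below is just an example element name; 0.10-era C API).

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElementFactory *factory;

  gst_init (&argc, &argv);

  /* ask the registry whether anything provides the "lame" MP3 encoder;
   * the answer depends on installed plugins, not on linked libraries */
  factory = gst_element_factory_find ("lame");
  g_print ("mp3 encoder is %s\n",
      factory != NULL ? "available" : "not installed");

  if (factory != NULL)
    gst_object_unref (factory);
  return 0;
}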

---------------------------------------

Speaking from a system architect's point of view:

I would like to move most, if not all, of the data<->data processing of the Controller part of MVC into a high(er)-level, unified and generic interface.

Only the [knowledge<->]information<->data parts (most of the "business logic") should remain in the main application, as they are much more task-specific.

This should create a comprehensive view of the complex processing algorithms available on a system (part of the system services view, which could be advertised over a network via e.g. ZeroConf), and the partial automation should make customized deployment and management a lot better.

The plugin->library separation could still be kept in selected cases to allow for ultra-low-level operation and alternative stream processing APIs, although for this class of algorithms it might be overkill.

-----------------------------------------

Q: Where does GStreamer stand in terms of being able to serve as the root of that API?
- Is it sufficiently generic?
- Is it simple enough?
- Is the core sufficiently lightweight and modular?
- Are plugins sufficiently simple to write, and do they add only negligible overhead by default?
- Is the type system powerful enough, or can it be made to be? (See the caps sketch below.)
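
On the type-system question, a data point rather than an answer: GstCaps are just named structures of typed fields, so nothing in the API itself restricts them to audio or video. The media type string and fields below are invented purely for illustration (0.10-era C API).

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstCaps *caps;
  gchar *str;

  gst_init (&argc, &argv);

  /* an invented, non-multimedia type description */
  caps = gst_caps_new_simple ("application/x-compressed-stream",
      "algorithm", G_TYPE_STRING, "lzma",
      "block-size", G_TYPE_INT, 65536,
      NULL);

  str = gst_caps_to_string (caps);
  g_print ("%s\n", str);

  g_free (str);
  gst_caps_unref (caps);
  return 0;
}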

Q: Should GStreamer be split into a generic stream processing/plugin management layer and a multimedia handling API layer for this?
Everybody would use the lower layer and compete on the higher API level (xine, MPlayer, DirectShow ... I am not sure whether Phonon sits at the same level).

Q: Should the core go into the Linux kernel, so that it is universally available and can serve the kernel's own needs (hashing, the TCP/IP layer, crypto, the VFS, etc.)?
Could it even go there with the plugins staying in userland?

Q: Are the identification algorithms used for autoplugging themselves plugins, so that they can be arranged into a pipeline that produces type info? (A small sketch follows below.)

It should be possible to load the data recognition/autoplugging parts separately, since low-level users do not want the extra overhead (e.g. for the CRC on an inode in a Linux filesystem).

Also, some users might prefer a unique autoplugging strategy on their systems.
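
As far as I understand it (please correct me), the recognition routines already are plugin-provided typefind functions, and the stock typefind element runs them from inside a pipeline and reports the result via a signal. A sketch (0.10-era C API; the file path is a placeholder):

#include <gst/gst.h>

/* called by the typefind element once the stream type has been guessed */
static void
have_type_cb (GstElement *typefind, guint probability, GstCaps *caps,
    gpointer user_data)
{
  gchar *str = gst_caps_to_string (caps);
  g_print ("detected %s (probability %u)\n", str, probability);
  g_free (str);
}

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *tf;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "filesrc location=/path/to/unknown.bin ! typefind name=tf ! fakesink",
      &error);
  if (pipeline == NULL) {
    g_printerr ("could not build pipeline: %s\n", error->message);
    return 1;
  }

  tf = gst_bin_get_by_name (GST_BIN (pipeline), "tf");
  g_signal_connect (tf, "have-type", G_CALLBACK (have_type_cb), NULL);

  /* prerolling to PAUSED is enough to trigger type detection */
  gst_element_set_state (pipeline, GST_STATE_PAUSED);
  gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (tf);
  gst_object_unref (pipeline);
  return 0;
}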

Q: What has to be done to make this vision a reality?
Who should I talk to in order to hammer out a strategy?
What skills and effort are needed to implement it?
Where should it be evangelized?

Q: Since much of the work on GStreamer is sponsored by a specific company, what is their take on this?

Q: And most important of all: does all of this even make sense?

Thanks for reading; I am looking forward to your feedback and pointers.


Best regards

MaxxCorp


--
MaxxCorp Knowledge Management Solutions GmbH
Berlin - GuangZhou
Maxim Mueller - Founder and Managing Partner
GZ Cell:13416104615
EMail: [hidden email]



Re: One Stream API to Rule them all - Could GStreamer be it ?

Brian Crowell
On Wed, Dec 3, 2008 at 3:18 AM, Maxim Mueller <[hidden email]> wrote:
> automatic threading, scheduling, detection/plugging services and a generic
> interface to processing data would benefit the programming ecosystem
> greatly, imho

All right, I'm a Linux and GStreamer newb, but I most certainly have
an opinion on this. Caveat lector.

You're basically asking that there be some sort of generic streaming
API available, potentially in the kernel, that auto-does everything.

First, you have a streaming API in the kernel, and it does do
automatic threading and scheduling, and just about every command line
tool uses it. It's stdio: files and sockets. It does not do stream
typing and autoplugging, but UNIX has made it this far without those
features. Not that they would help you anyhow-- you still have to
specify parameters to your filters and commands to get them to do what
you want.
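
To make that concrete: the whole Unix filter model is just a read loop on stdin and writes to stdout, and the pipe machinery takes care of buffering and scheduling between processes. A trivial sketch:

/* Minimal Unix stream filter: read bytes from stdin, write them to stdout.
 * Composed as "cat file | ./filter | gzip > out.gz", the kernel schedules
 * the processes and buffers the data between them. */
#include <stdio.h>

int
main (void)
{
  char buf[4096];
  size_t n;

  while ((n = fread (buf, 1, sizeof buf, stdin)) > 0) {
    /* a real filter would transform the bytes here */
    if (fwrite (buf, 1, n, stdout) != n)
      return 1;
  }
  return 0;
}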


> benefits of one globally used stream processing api:
> - greatly reduced development effort, rate of bugs and maintenance

The GNU people would agree.


> - reduced learning curve when adding functionality to software, furthermore
> a more structured way of thinking about processing data is also bound to
> produce cleaner code

Over the current method? I don't think so. Adding more complexity to
the interface makes it easier to screw up. I'm still getting segfaults
from the videomixer plugin, and I'm not at all sure why, and I know
the person who authored it knows way more about GStreamer than I do.

C++ was supposed to produce cleaner code, but I've seen many obscure
bugs happen just because a copy constructor was written wrong.


> - systems theory: standardizing components interfaces  tends to lead to the
> discovery of novel ways to combine those, leading to innovative applications
>
> - improve the functionality of existing applications simply installing a new
> plugin(to a certain degree)

Again, the GNU people would agree. In fact, typing the stream would
reduce some of your flexibility. Think of some of the novel ways
people have used grep and sed.


> - users decide which implementation they want to use, all in all, it's
> his/her decision to make.
> the ability to stop having to install multitudes of http/etc protocol,
> VirtualFileSystems(KIO-slaves,GIO, PHP5's stream API ...),
> encoding(OpenSSL/GnuTLS/PGP ...), compression etc implementations just
> because the respective application developers had their favourites, would be
> a great step forward in the fight for truly free software and putting the
> user back in control of his own system.

All of these choices would still exist, and you'd have the extra
effort of having to write brand-new GStreamer plugins to support them
all.


> - by moving up one abstraction layer, just one wrapper to support other
> programming languages is required-in contrast to per library wrappers
> .

I'm not sure what you mean by this. GPLers have the power to
consolidate their libraries at any time, but they don't. Every library
has a different purpose, and not all of it makes sense together, and
not all of it can be described in terms of streaming.


> the plugin->library separation could still be kept in selected cases to
> allow for ultra-low-level operation and alternative stream processing apis,
> although for this class of algorithms it might be overkill.

Abstraction is great, right up until you run across a problem the
abstracter didn't think about. Inevitably, trying to consolidate
everything leaves you with the problem of having to separate
everything out again (or worse, copying out just the part you need).
Look at the flak Phonon is taking for trying to abstract a little too
much.

On a related note, I just discovered liboil the other day. That rocks.


> Q: where do we stand with GStreamer in terms of being able to be the root of
> that api ?
> - is it sufficiently generic ?
> - is it simple enough ?
> - is the core sufficiently light-weight and modular ?
> - are plugins sufficiently simple  to write and do they only add a
> negligible overhead by default ?
> - is the type-system powerful enough or can it be made to be ?

What you're asking to do is probably technically feasible, but I
wouldn't count on it happening. A simpler API for GStreamer would be
awesome, but judging from its popularity, I think the design pretty
much nailed the requirements.


> Q:Should gstreamer be split into a generic stream processing/plugin mgmt
> layer and a multimedia handling api layer for this ?
> everybody should use the lower layer, and be competing on the higher API
> level(xine,mplayer,directshow ... not sure if phonon is in the same level)

It kind of already is. The core plugins don't know a darned thing
about multimedia, and the API actually makes very few assumptions in
that regard.
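
For what it's worth, a minimal sketch of that (0.10-era API, placeholder file names): a pipeline built purely from core elements shuffles opaque bytes from one file to another, with no notion of audio or video anywhere; basically cp with GStreamer doing the threading and buffering.

/* A media-agnostic pipeline: filesrc, queue and filesink ship with the
 * core and know nothing about multimedia. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "filesrc location=/tmp/input.bin ! queue ! "
      "filesink location=/tmp/copy.bin",
      &error);
  if (pipeline == NULL) {
    g_printerr ("could not build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* wait for the copy to finish (EOS) or fail (ERROR) */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}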


> Q: should the core go into the linux kernel to be universally available and
> be able to serve the kernel's needs(hashing, TCP/IP layer, crypto, VFS
> etc...) ?
> can it even go there with the plugins staying in userland ?

No. Using GStreamer for kernel tasks would only slow the kernel down.
Just the work that goes into stream type negotiation would be a waste
in many places, especially since most (if not all) of the types are
known in advance.

On the other hand, when you look at sockets and pipes, you've already
got that clean separation with all the plugins living in userland. :P


> Q: and most important of all: does all that even make sense ?

That question should have come first :P

I don't think so: you're better off with ordinary kernel streams
(stdio, sockets, pipes, etc.).

--Brian
