Theory behind audio source block sizes


Theory behind audio source block sizes

Casey Waldren
Given an audio source that produces audio in N-byte chunks, and a downstream element that consumes chunks of N*4 bytes, what is the best way to link these elements?

I have encountered this problem with a plugin that transforms audio from 4 to 2 channels. GStreamer seems to lack documentation on the topic of block sizes, so I feel like I have missed some important knowledge about how audio sources are supposed to operate.

I could change the audio source to produce these fixed-size blocks, or change the downstream element to accept smaller blocks and accumulate them, e.g. with a GstAdapter. This seems brittle, and it sounds like the kind of problem a general-purpose plugin should solve. Is that audiobuffersplit, or is there some other technique?

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: Theory behind audio source block sizes

Sebastian Dröge-3
On Tue, 2019-02-19 at 00:36 -0800, Casey Waldren wrote:

> Given an audio source that produces audio in N-byte chunks, and a
> downstream element that consumes chunks of N*4 bytes, what is the
> best way to link these elements?
>
> I have encountered this problem with a plugin that transforms audio
> from 4 to 2 channels. GStreamer seems to lack documentation on the
> topic of block sizes, so I feel like I have missed some important
> knowledge about how audio sources are supposed to operate.
>
> I could change the audio source to produce these fixed-size blocks,
> or change the downstream element to accept smaller blocks and
> accumulate them, e.g. with a GstAdapter. This seems brittle, and it
> sounds like the kind of problem a general-purpose plugin should
> solve. Is that audiobuffersplit, or is there some other technique?

The correct solution here is to use e.g. a GstAdapter in the downstream
element so that it can accept any block size. audio/x-raw caps place no
requirements on the block size of buffers (other than that each buffer
contain the same number of samples for every channel), and the buffer
size can also vary from buffer to buffer.
Generally, it's a bug in an element if it can't handle raw audio
buffers of arbitrary size.


An alternative would be to define separate caps that allow negotiating a
specific block size; elements using those caps would then ensure they
only produce and consume buffers of that size.

--
Sebastian Dröge, Centricular Ltd · https://www.centricular.com




Re: Theory behind audio source block sizes

Casey Waldren
Enlightening. In that case, would audiobuffersplit be regarded as more of a debugging tool or a hack to test whether a pipeline works?


Re: Theory behind audio source block sizes

Sebastian Dröge-3
On Tue, 2019-02-19 at 06:24 -0800, Casey Waldren wrote:
> Enlightening. In that case, would audiobuffersplit be regarded as
> more of a debugging tool or a hack to test whether a pipeline works?

No, it's useful for cases where you want to ensure that buffer sizes are
fixed at some point, e.g. to send fixed 20 ms buffers over the network.

--
Sebastian Dröge, Centricular Ltd · https://www.centricular.com

