Question regarding use of interleaved and deinterleaved elements in processing chain

Question regarding use of interleaved and deinterleaved elements in processing chain

hk-iks
Hi everyone,

I have a processing chain in which interleaved audio channels are split
into mono streams and afterwards combined into a single interleaved
stream with a different number of channels.


I understood that using a deinterleaver followed by an interleaver is
the way to go for this.


audiosrc --> deinterleaver --[channel0]--> interleaver --> audiosink
                            --[channel1]-->
                            --[channel2]-->
                            --[channel3]-->


Hence, I add both interleaver and deinterleaver to the pipeline. When
trying to link the pads, however, there are no src pads on the
deinterleaver and no sink pads on the interleaver, since those are of
type "sometimes". By playing around, I found that the pipeline must be
set to the PLAYING state to make the deinterleaver actually allocate
its src pads (one for each channel, the number of channels being
specified by the caps on its sink pad).
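
For reference, the caps part of my setup looks roughly like this (only a
sketch; the variable names are made up, and the capsfilter sits directly
in front of the deinterleaver):

#include <gst/gst.h>

/* Sketch: fix the channel count on the capsfilter feeding the
 * deinterleaver, so it knows how many src pads to create. */
static void
set_input_channels (GstElement *capsfilter, gint channels)
{
  GstCaps *caps = gst_caps_new_simple ("audio/x-raw",
      "channels", G_TYPE_INT, channels, NULL);

  g_object_set (capsfilter, "caps", caps, NULL);
  gst_caps_unref (caps);
}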

So, if I keep deinterleaver and interleaver disconnected when switching
to the PLAYING state, the pipeline reports an "internal data flow
error" and is switched back to PAUSED since there is a gap in the
connections. At that point the newly created pads of the deinterleaver
are reported, and I could connect them in the matching callback and set
the state to PLAYING again. The interleaver, however, never exposes its
"sometimes" sink pads.

Here is the question: is that really the way it is supposed to work, or
can I somehow trigger the interleaver and deinterleaver to produce src
and sink pads before actually switching to the PLAYING state? And if
that is the intended way, how can I achieve the generation of the
interleaver sink pads? The processing chain is always created from src
to sink, which implies that the interleaver's sink pads must be
connected first, but they are not available.

Thank you and best regards

Hauke

Re: Question regarding use of interleaved and deinterleaved elements in processing chain

Sebastian Dröge-3
On Sun, 2016-08-21 at 18:45 +0200, Hauke Krüger wrote:

>
> Here is the question: is that really the way it is supposed to work or 
> may I somehow trigger interleaver and deinterleaver
> to produce source and sink pads before actually switching to PLAYING 
> state? And if that is the way, how may I achieve the
> generation of the interleaver sink pads? The creation of the processing 
> chain is aways from src to sink which implies that the
> interleaver must have the sink pads connected first which are not available.

That's all intended, yes. deinterleave does not know about the number
of channels before it receives data.

You'll have to connect to the "pad-added" signal of it, and from there
link the pads further. See also the first part in chapter 8 of the
manual (application developer's manual).
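
In plain C, that bare pad-added/link pattern looks roughly like this (a
sketch only; variable names are made up, and the interleave request-pad
template name "sink_%u" is an assumption):

#include <gst/gst.h>

/* Sketch: link each deinterleave src pad to a freshly requested
 * interleave sink pad as soon as it appears. */
static void
on_pad_added (GstElement *deinterleave, GstPad *srcpad, gpointer user_data)
{
  GstElement *interleave = GST_ELEMENT (user_data);
  GstPad *sinkpad = gst_element_get_request_pad (interleave, "sink_%u");

  if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK)
    g_printerr ("failed to link %s\n", GST_PAD_NAME (srcpad));
  gst_object_unref (sinkpad);
}

/* while building the pipeline:
 *   g_signal_connect (deinterleave, "pad-added",
 *       G_CALLBACK (on_pad_added), interleave);
 */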

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Question regarding use of interleaved and deinterleaved elements in processing chain

hk-iks
Hi,

thank you for your response.

On 08/22/2016 08:05 AM, Sebastian Dröge wrote:

> On Sun, 2016-08-21 at 18:45 +0200, Hauke Krüger wrote:
>>  
>>
>> Here is the question: is that really the way it is supposed to work or
>> may I somehow trigger interleaver and deinterleaver
>> to produce source and sink pads before actually switching to PLAYING
>> state? And if that is the way, how may I achieve the
>> generation of the interleaver sink pads? The creation of the processing
>> chain is aways from src to sink which implies that the
>> interleaver must have the sink pads connected first which are not available.
> That's all intended, yes. deinterleave does not know about the number
> of channels before it receives data.
>
> You'll have to connect to the "pad-added" signal of it, and from there
> link the pads further. See also the first part in chapter 8 of the
> manual (application developer's manual).

Yes, that worked out as you said. However, it seems to be impossible to
set the pipeline to the PLAYING state afterwards: whenever I try to set
the state of the pipeline to PLAYING, the pipeline element blocks
because it expects an ASYNC state change which unfortunately never
completes.

How is that part supposed to work? Is there any way to observe the
ASYNC state change by catching a message in the main loop?

Thank you and best regards

Hauke



Re: Question regarding use of interleaved and deinterleaved elements in processing chain

Sebastian Dröge-3
On Mon, 2016-08-22 at 21:15 +0200, Hauke Krüger wrote:

>
> > You'll have to connect to the "pad-added" signal of it, and from there
> > link the pads further. See also the first part in chapter 8 of the
> > manual (application developer's manual).
>
> Yes, that worked out as you said. However, it seems to be impossible to 
> set the pipeline into the
> PLAYING state afterwards: Whenever trying to set the state of the 
> pipeline to PLAYING, the pipeline element
> is blocked since it expects an ASYNC state change which unfortunately 
> never really is solved.
>
> How is that part supposed to work? Is there any way to somehow observe 
> the ASYNC state change by catching
> a message in the main loop?
That all depends on your actual pipeline. What does it look like?

Most commonly this means that you have one or more sinks in your
pipeline that never receive any buffer and are async=true (which is the
default). You should check where data flows in your pipeline and why it
doesn't reach all sinks. Also check if you get any error messages.
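
A sketch of watching for those messages from the GLib main loop (only an
illustration; variable names are made up):

#include <gst/gst.h>

/* Sketch: report ERROR and ASYNC_DONE messages from the pipeline bus. */
static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_ERROR: {
      GError *err = NULL;
      gchar *dbg = NULL;

      gst_message_parse_error (msg, &err, &dbg);
      g_printerr ("error from %s: %s\n",
          GST_OBJECT_NAME (msg->src), err->message);
      g_clear_error (&err);
      g_free (dbg);
      break;
    }
    case GST_MESSAGE_ASYNC_DONE:
      g_print ("async state change finished\n");
      break;
    default:
      break;
  }
  return TRUE;  /* keep the watch installed */
}

/* after creating the pipeline:
 *   gst_bus_add_watch (gst_element_get_bus (pipeline), bus_cb, NULL);
 */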

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Question regarding use of interleaved and deinterleaved elements in processing chain

hk-iks


On 08/23/2016 10:24 AM, Sebastian Dröge wrote:

> On Mon, 2016-08-22 at 21:15 +0200, Hauke Krüger wrote:
>>> You'll have to connect to the "pad-added" signal of it, and from there
>>> link the pads further. See also the first part in chapter 8 of the
>>> manual (application developer's manual).
>> Yes, that worked out as you said. However, it seems to be impossible to
>> set the pipeline into the
>> PLAYING state afterwards: Whenever trying to set the state of the
>> pipeline to PLAYING, the pipeline element
>> is blocked since it expects an ASYNC state change which unfortunately
>> never really is solved.
>>
>> How is that part supposed to work? Is there any way to somehow observe
>> the ASYNC state change by catching
>> a message in the main loop?
> That all depends on your actual pipeline. How does it look like?
>
> Most commonly this means that you have one or more sinks in your
> pipeline that never receive any buffer and are async=true (which is the
> default). You should check where data flows in your pipeline and why it
> doesn't reach all sinks. Also check if you get any error messages.
>
>

My pipeline is rather simple:

alsasrc -> capsfilter -> deinterleaver -<channel0>-> interleaver -> capsfilter -> alsasink
                                       -<channel1>->

alsasrc and alsasink are configured to allow 2 channels; no rate
conversion has to be done.

I tracked the problem with my debugger: once all pads are connected, I
can see the deinterleaver working. It has 2 src pads, and I can see the
code in which the deinterleaver loops over them (deinterleave.c, line
896, current master of the gstreamer 1.0 project). Within that loop,
"gst_pad_push" is used to pass the individual buffers to the connected
interleaver sink pads.

When stepping into this call to "gst_pad_push" for the first buffer to
be transferred, I end up in "gstcollectpads.c", since the interleaver
uses the collect-pads helper. The buffers arrive in the function
"gst_collect_pads_chain", which waits for all buffers before
interleaving takes place.

The debugger shows that the first buffer is accepted, but the program
flow ends in a wait condition that never returns, at line 2257 of
"gstcollectpads.c":

/* wait to be collected, this must happen from another thread triggered
 * by the _chain function of another pad. We release the lock so we
 * can get stopped or flushed as well. We can however not get EOS
 * because we still hold the STREAM_LOCK.
 */
GST_COLLECT_PADS_STREAM_UNLOCK (pads);
GST_COLLECT_PADS_EVT_WAIT (pads, cookie); <---- HERE
GST_COLLECT_PADS_STREAM_LOCK (pads);

It seems that the wait persists until the chain function is called from
the context of another thread.

This behavior would make sense if the deinterleaver dispatched the
individual buffers to be processed in different thread contexts. This,
however, does not seem to be the case.

Am I missing a property that tells the interleaver not to wait, or is
there another trick I need to employ?

Thank you and best regards

Hauke


Re: Question regarding use of interleaved and deinterleaved elements in processing chain

Sebastian Dröge-3
On Tue, 2016-08-23 at 15:23 +0200, Hauke Krüger wrote:

> [...]
>
> This behavior would make sense if the deinterleaver dispatched the
> individual buffers to be processed in different thread contexts. This,
> however, does not seem to be the case.
That's exactly the problem: you have to put a queue element after each
deinterleave pad for adding this new thread.
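
A sketch of what the pad-added callback then does, with a queue in
between (variable names are illustrative, and the interleave request-pad
template name "sink_%u" is an assumption):

#include <gst/gst.h>

/* Sketch: for every new deinterleave src pad, add a queue (one streaming
 * thread per channel), then link srcpad -> queue -> interleave sink pad. */
static void
on_deinterleave_pad_added (GstElement *deinterleave, GstPad *srcpad,
    gpointer user_data)
{
  GstElement *interleave = GST_ELEMENT (user_data);
  GstElement *bin = GST_ELEMENT (gst_object_get_parent (GST_OBJECT (deinterleave)));
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstPad *q_sink, *q_src, *i_sink;

  gst_bin_add (GST_BIN (bin), queue);
  gst_element_sync_state_with_parent (queue);

  q_sink = gst_element_get_static_pad (queue, "sink");
  q_src = gst_element_get_static_pad (queue, "src");
  i_sink = gst_element_get_request_pad (interleave, "sink_%u");

  gst_pad_link (srcpad, q_sink);
  gst_pad_link (q_src, i_sink);

  gst_object_unref (q_sink);
  gst_object_unref (q_src);
  gst_object_unref (i_sink);
  gst_object_unref (bin);
}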

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Question regarding use of interleaved and deinterleaved elements in processing chain

hk-iks

On 23.08.2016 at 15:29, Sebastian Dröge wrote:
> On Tue, 2016-08-23 at 15:23 +0200, Hauke Krüger wrote:
> > [...]
> >
> > This behavior would make sense if the deinterleaver dispatched the
> > individual buffers to be processed in different thread contexts. This,
> > however, does not seem to be the case.
> That's exactly the problem: you have to put a queue element after each
> deinterleave pad for adding this new thread.

Thank you, that was the trick I was looking for ;-)

For those who run into the same problem: if you connect the interleaver
to alsasink, you additionally need to provide a channel mask by setting
the "channel-positions" property properly; otherwise the interleaver
will not connect to alsasink. And if you set that property, you need to
link against the gstreamer-audio library in order to use the correct
GValue type (GST_TYPE_AUDIO_CHANNEL_POSITION).
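
Filling that property could look roughly like this (only a sketch; the
stereo positions are just an example, and GValueArray is deprecated in
GLib but is what the property expects):

#include <gst/gst.h>
#include <gst/audio/audio.h>

/* Sketch: set "channel-positions" on interleave to FRONT_LEFT/FRONT_RIGHT. */
static void
set_stereo_positions (GstElement *interleave)
{
  GValueArray *positions = g_value_array_new (2);
  GValue v = G_VALUE_INIT;

  g_value_init (&v, GST_TYPE_AUDIO_CHANNEL_POSITION);
  g_value_set_enum (&v, GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT);
  g_value_array_append (positions, &v);
  g_value_set_enum (&v, GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT);
  g_value_array_append (positions, &v);
  g_value_unset (&v);

  g_object_set (interleave, "channel-positions", positions, NULL);
  g_value_array_free (positions);
}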

And finally: we have a soundcard running which has 196 channels. The
interleave gstreamer plugin has a limitation of a maximum of 64
channels: in line 63 of interleave.h, the limitation is given as

gint default_channels_ordering_map[64];

With 196 channels connected, this limit does not seem to stop the
interleaver from returning requested pads, but it causes a segmentation
fault later. Changing the 64 to 196 solved this problem. The same
modification may be done in line 275 of interleave.c. But I have to
admit that 196 channels is a rather unexpected hardware setup.

Thank you and best regards

Hauke




-- 
Dr.-Ing. Hauke Krüger
Institute of Communication Systems (IKS)
RWTH Aachen University
Muffeter Weg 3a, 52074 Aachen, Germany
+49 241 80 26963
[hidden email]
http://www.iks.rwth-aachen.de


Re: Question regarding use of interleaved and deinterleaved elements in processing chain

Sebastian Dröge-3
On Wed, 2016-08-24 at 22:03 +0200, [hidden email] wrote:

>
> > That's exactly the problem: you have to put a queue element after
> > each deinterleave pad for adding this new thread.
>  
> Thank you, that was the trick I was looking for ;-)
>
> For those who will run into the same problem: If you connect the
> interleaver to alsasink, you will
> in addition need to provide a channel mask by setting the property
> "channel-positions" properly.
> Otherwise, the interleaver will not connect to alsasink.
> And if you set that property, you will need to link against the
> gstreamer-audio library in order to set the
> correct GValue type (type GST_TYPE_AUDIO_CHANNEL_POSITION).
You can also let it use the channel position of the input btw.

Instead of interleave you might also want to use audiointerleave, which
is the new version of the element and generally works better.

> And finally: we have a soundcard running which has 196 channels. The
> interleave gstreamer plugin has a limitation
> to expect a maximum of 64 channels only: In line 63 of interleaver.h,
> the limitatin is given as
>
> gint default_channels_ordering_map[64];
>
> By connecting 196 channels, this limit does not really stop the
> interleaver from returning requested pads as it seems.
> But it will cause a segmentatin fault later. Setting the 64 to 196
> solved this problem. The same modification maybe done in line 275 in
> file interleave.c. But I have to admit those 196 channels are really
> an unexpected hardware setting.
We currently only support 64 positioned channels, but supporting more
unpositioned channels should be useful and relatively easy to do.

Do you want to provide your patch? Instead of making the array bigger,
for > 64 channels you would use an unpositioned layout
(GST_AUDIO_CHANNEL_POSITION_UNPOSITIONED) and can leave the array
empty.

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Question regarding use of interleaved and deinterleaved elements in processing chain

hk-iks

On 25.08.2016 at 08:58, Sebastian Dröge wrote:

> On Wed, 2016-08-24 at 22:03 +0200, [hidden email] wrote:
>>> That's exactly the problem: you have to put a queue element after
>>> each deinterleave pad for adding this new thread.
>>  
>> Thank you, that was the trick I was looking for ;-)
>>
>> For those who will run into the same problem: If you connect the
>> interleaver to alsasink, you will
>> in addition need to provide a channel mask by setting the property
>> "channel-positions" properly.
>> Otherwise, the interleaver will not connect to alsasink.
>> And if you set that property, you will need to link against the
>> gstreamer-audio library in order to set the
>> correct GValue type (type GST_TYPE_AUDIO_CHANNEL_POSITION).
> You can also let it use the channel position of the input btw.
>
> Instead of interleave you might also want to use audiointerleave, which
> is the new version of the element and generally works better.

From what I observed with the debugger, the following seems to be the
strategy:

If the channel positions are left empty and the option
"channel-positions-from-input" is set to true (the default), the module
uses a NULL/empty channel configuration. The channel-mask that is
filled into the caps automatically on caps fixation is 0x00000000 in
this case. The alsasink, however, specifies the channel-mask as
0x00000003 for, e.g., a stereo output. That prevents the interleaver
from connecting to the alsasink in my case.

Depending on the number of channels, different channel-positions must
be specified, since this is what alsasink expects.

If more than 8 channels are used, the alsasink also sets the
channel-mask to 0x00000000, since channel positions are not defined for
that number of channels. In that case there is no need to set the
channel positions, and the interleaver connects to alsasink.
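
For illustration, the 0x00000003 mask for stereo is just the bits for
the two front positions; a sketch of deriving it with gstreamer-audio
(which we link against anyway):

#include <gst/audio/audio.h>

/* Sketch: compute the channel-mask alsasink expects for plain stereo. */
static guint64
stereo_channel_mask (void)
{
  GstAudioChannelPosition pos[2] = {
    GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT,
    GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT
  };
  guint64 mask = 0;

  gst_audio_channel_positions_to_mask (pos, 2, FALSE, &mask);
  return mask;  /* 0x3 */
}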

>
>> And finally: we have a soundcard running which has 196 channels. The
>> interleave gstreamer plugin has a limitation
>> to expect a maximum of 64 channels only: In line 63 of interleaver.h,
>> the limitatin is given as
>>
>> gint default_channels_ordering_map[64];
>>
>> By connecting 196 channels, this limit does not really stop the
>> interleaver from returning requested pads as it seems.
>> But it will cause a segmentatin fault later. Setting the 64 to 196
>> solved this problem. The same modification maybe done in line 275 in
>> file interleave.c. But I have to admit those 196 channels are really
>> an unexpected hardware setting.
> We currently only have support for 64 positioned channels, but more
> unpositioned should be useful and relatively easy to support.
>
> Do you want to provide your patch? Instead of making the array bigger,
> for > 64 channels you would use an unpositioned layout
> (GST_AUDIO_CHANNEL_POSITION_UNPOSITIONED) and can leave the array
> empty.
>
>

It might be cleaner to introduce a global configuration variable for
gstreamer that sets the maximum number of allowed channels in one
central place, since the alsa plugin also limits the number of channels
to 8 when users open a plughw device. For my 196 channels, I had to
increase the channel limit in multiple places. With a global variable
(e.g. a global define), this aspect would be clear from the beginning
and could be adapted when installing the gstreamer lib manually. If I
modify the max value from 64 to 256 (my current solution), the next
soundcard may have 384 channels ;-)

Best regards

Hauke



Re: Question regarding use of interleaved and deinterleaved elements in processing chain

diegoavila
Hello, I am facing the same problem. Could you help me?


