Use of Queue in constructing Gstreamer Pipeline for Playback


Use of Queue in constructing Gstreamer Pipeline for Playback

DeepakRohan
Hi,

        I have a question regarding the audio-queue and video-queue elements in a GStreamer pipeline:

        For audio-video playback, when we create the pipeline we have an "audio-queue" for the audio part of the pipeline and a "video-queue" for the video part.
1.  What if I remove the "audio-queue" and keep only the "video-queue"? Will it affect the behavior or performance of the pipeline in any way? If yes, what are the cases?

I know that the video-queue is required, since we may need to buffer a few frames (perhaps because decoding order differs from display order).

But can case 1 be valid at all times? If yes, then we need not buffer audio at all.

Thank You In Advance.

Re: Use of Queue in constructing Gstreamer Pipeline for Playback

Sebastian Dröge
On Wed, 2016-09-28 at 05:24 -0700, DeepakRohan wrote:

The answers to that all depend on the exact pipeline you created.
Generally a queue is needed to decouple threads from each other, and
e.g. after a 1-to-N element (like a tee or a demuxer) you will always
need some kind of queue after each srcpad. Other cases depend on
pipeline details.
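As a concrete sketch of that rule (the file name, codecs, and element choices here are illustrative assumptions, not taken from the thread): qtdemux is a 1-to-N element, so each of its source pads gets its own queue, giving each branch a dedicated streaming thread.

```shell
# Hypothetical example: an MP4 file with H.264 video and AAC audio.
# The demuxer (qtdemux) is 1-to-N, so a queue follows each srcpad,
# decoupling the demuxer's streaming thread from both sink branches.
gst-launch-1.0 filesrc location=movie.mp4 ! qtdemux name=d \
    d.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink \
    d.audio_0 ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink
```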

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com
_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel


Re: Use of Queue in constructing Gstreamer Pipeline for Playback

DeepakRohan
Hi Sebastian,

      Again, thanks a lot for the quick reply.

My application will always do playback-related operations; it will never be used for capture.
The pipeline looks very similar to the one below:

                                   +-----> Audio elements (except audio queue) ....... AudioSink
                                   |
source --> typefind --> Demuxer ---+
                                   |
                                   +-----> Video elements (with video queue) ......... VideoSink


With the above pipeline, is there any chance, even a slight one, that I may face issues later on?

From my gst-launch-1.0 testing it has worked so far, but I have not tested all possible cases (different audio, video and subtitle codecs, with varying properties: for audio - sample rate, bit rate, channels; for video - resolution, framerate, level and profile).

I am not sure of the consequences of removing the audio-queue, because it worked for me both on the command line and in the application. My application creates exactly the pipeline shown in the diagram above.

Could you please mention the cases where removing the audio-queue may cause issues for this way of creating the pipeline?

Thank You in Advance

Re: Use of Queue in constructing Gstreamer Pipeline for Playback

Sebastian Dröge
On Fri, 2016-09-30 at 07:41 -0700, DeepakRohan wrote:

Without the queue, the demuxer will push directly from its own thread
to the audio sink. By default all sinks block at preroll until every
sink has a buffer, so if your container happens to have audio first and
only then video, the demuxer will push audio, the audio sink will wait
for the video sink and block the demuxer, and the demuxer then has no
way to push video to the video sink.
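A minimal sketch of that risky shape (file name and elements are illustrative assumptions): only the video branch has a queue, so the demuxer's single streaming thread can get stuck in the audio branch before it ever reaches the video pad.

```shell
# Hypothetical: no queue on the audio branch. If the container interleaves
# audio before video, the audio sink's preroll wait can block the demuxer's
# streaming thread, and the video branch never receives data (preroll hang).
gst-launch-1.0 filesrc location=movie.mp4 ! qtdemux name=d \
    d.audio_0 ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink \
    d.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink
```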

Another case where this is problematic is if your sinks are
synchronising to the clock and the container does not have perfect
interleaving. Consider the case where there is always 1s of audio, then
1s of video, then 1s of audio again. What will happen is that the audio
sink first consumes the 1s of audio while the video sink starves for
1s. Then the video queue fills up with 1s of video, the video sink can
play 1s (which is all too late now), and the audio sink again makes the
demuxer output 1s of audio while the video sink starves.


There are more possible scenarios like this. Generally, use queues
after each demuxer source pad to prevent this. Even better, in your
case, use a single multiqueue with one pad per demuxer source pad. Or,
better still, use uridecodebin or decodebin for your pipeline, which
will automatically insert queues/multiqueues as needed.
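For reference, a sketch of the decodebin-based approach suggested above (the URI and sink choices are illustrative assumptions): uridecodebin handles demuxing and decoding and inserts the needed multiqueue internally, so the application only attaches converters and sinks to its dynamically created pads.

```shell
# uridecodebin adds queues/multiqueue itself; the small queues before the
# sinks decouple the sink threads from the decoder output.
gst-launch-1.0 uridecodebin uri=file:///path/to/movie.mp4 name=d \
    d. ! queue ! videoconvert ! autovideosink \
    d. ! queue ! audioconvert ! autoaudiosink
```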

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Use of Queue in constructing Gstreamer Pipeline for Playback

DeepakRohan
Hi Sebastian,

So if I have the below pipeline, will it be fine?
That way we are buffering somewhere (in this case, in the audio multiqueue), so as to make sure
the outgoing audio and video samples stay synchronized.

                                   +-----> Audio MultiQueue --> InputSelector --> Audio elements (no audio queue) ...... AudioSink
                                   |
source --> typefind --> Demuxer ---+
                                   |
                                   +-----> Video elements (with video queue) ......... VideoSink

Thank You In Advance.