how to pull buffers from two appsinks in two different pipelines and push them to the audiomixer element of a new pipeline using appsrc elements

how to pull buffers from two appsinks in two different pipelines and push them to the audiomixer element of a new pipeline using appsrc elements

YogaV
Hi... I have succeeded in pulling the data from two live streamer pipelines using appsinks. Now I want to push the data into a newly created pipeline to mix these two streaming buffers and store them in a WAV file. The new pipeline, called recordpipeline, contains the elements below:
appsrc1 -> audiomixer -> audioconvert -> wavenc -> filesink
I did that successfully by pushing the buffers to appsrc1, but appsrc2 always fails, and I am only able to store the audio coming from appsrc1. The resulting WAV file therefore contains only the buffers given by appsrc1, but I also want to merge the buffers that I am pushing into appsrc2. Please refer to the attached record pipeline screenshot (RecordPipeline).
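For reference, the kind of wiring described above can be sketched as below. This is a minimal sketch, not the poster's actual code: the callback and helper names are illustrative, and it assumes the same callback is registered on both appsinks, with user_data pointing at appsrc1 and appsrc2 respectively.

```c
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

/* Sketch: forward samples from one live pipeline's appsink to the
 * matching appsrc in the record pipeline. Register this callback once
 * per source pipeline, passing the corresponding appsrc as user_data. */
static GstFlowReturn
on_new_sample (GstAppSink *appsink, gpointer user_data)
{
  GstAppSrc *appsrc = GST_APP_SRC (user_data);
  GstSample *sample = gst_app_sink_pull_sample (appsink);
  GstFlowReturn ret = GST_FLOW_ERROR;

  if (sample != NULL) {
    /* gst_app_src_push_sample() does not take ownership of the sample,
     * so we must unref it ourselves after pushing. */
    ret = gst_app_src_push_sample (appsrc, sample);
    gst_sample_unref (sample);
  }
  return ret;
}

/* Hypothetical helper: hook one appsink up to one appsrc. */
static void
connect_sink_to_src (GstElement *appsink, GstElement *appsrc)
{
  /* Callback order in GstAppSinkCallbacks: eos, new_preroll, new_sample */
  GstAppSinkCallbacks callbacks = { NULL, NULL, on_new_sample };

  g_object_set (appsink, "emit-signals", FALSE, NULL);
  gst_app_sink_set_callbacks (GST_APP_SINK (appsink), &callbacks,
                              appsrc, NULL);
}
```

With this shape, `connect_sink_to_src (appsink1, appsrc1)` and `connect_sink_to_src (appsink2, appsrc2)` would set up both forwarding paths symmetrically.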
Re: how to pull buffers from two appsinks in two different pipelines and push them to the audiomixer element of a new pipeline using appsrc elements

Arjen Veenhuizen
Did you make sure that both buffers have (approximately) the same timestamp before pushing them into their respective appsrc elements?
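One quick way to check this (a hypothetical sketch, not from the thread; the helper name and tag argument are made up) is to log each sample's presentation timestamp just before it is pushed, so the two appsrc feeds can be compared side by side:

```c
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Hypothetical helper: print a sample's PTS before pushing it, tagged
 * with which feed it belongs to ("appsrc1" / "appsrc2"). */
static GstFlowReturn
push_with_log (GstAppSrc *appsrc, GstSample *sample, const gchar *tag)
{
  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstClockTime pts = GST_BUFFER_PTS (buf);

  g_print ("%s: pts=%" GST_TIME_FORMAT "\n", tag, GST_TIME_ARGS (pts));
  return gst_app_src_push_sample (appsrc, sample);
}
```

If the two feeds print wildly different (or invalid) timestamps, audiomixer will only be able to use one of them.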
Re: how to pull buffers from two appsinks in two different pipelines and push them to the audiomixer element of a new pipeline using appsrc elements

YogaV
Hi Arjen, thanks for your reply. I didn't set any timestamps on the buffers that I am extracting from the two appsink elements in the two different pipelines. I pull the buffers in the registered appsink callback functions and push them as samples to the appsrcs (which are in another pipeline, called the record pipeline) using gst_app_src_push_sample(); I am not setting any timestamp details on the samples while doing this. I came across one GStreamer-devel thread and saw a reply like the one below:

a) use the same clock on both pipelines (gst_pipeline_use_clock())
b) set the same base time on both pipelines
(gst_element_set_base_time()) and set start time to GST_CLOCK_TIME_NONE
(gst_element_set_start_time())
c) configure the latency correctly, that is: query the latency (with
the latency query) on the appsink once it is PLAYING, and configure
exactly those values on the corresponding appsrc as min/max latency.

Additionally you need to ensure that the segment event that comes out
of appsrc is the same as the one that went into appsink, but in your
pipeline that is most likely the case. But better double check and if
not you need to ensure that appsrc is producing the same segment event.

If you use a single, connected pipeline, GStreamer will ensure these
things by itself already. If you use multiple, you will need to do that
yourself.
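Steps a) to c) above could be sketched roughly as follows for one source pipeline and its matching appsrc; the same would be repeated for the second pair. This is only an assumption-laden sketch (all element and variable names are placeholders), and the latency query in step c) must run once the source pipeline is PLAYING:

```c
#include <gst/gst.h>

/* Sketch of a)-c): shared clock, shared base time, and latency
 * propagation from an appsink to its matching appsrc. */
static void
sync_pipelines (GstPipeline *src_pipeline, GstPipeline *rec_pipeline,
                GstElement *appsink, GstElement *appsrc)
{
  /* a) use the same clock on both pipelines */
  GstClock *clock = gst_system_clock_obtain ();
  gst_pipeline_use_clock (src_pipeline, clock);
  gst_pipeline_use_clock (rec_pipeline, clock);

  /* b) same base time on both, start time set to GST_CLOCK_TIME_NONE */
  GstClockTime base = gst_clock_get_time (clock);
  gst_element_set_base_time (GST_ELEMENT (src_pipeline), base);
  gst_element_set_base_time (GST_ELEMENT (rec_pipeline), base);
  gst_element_set_start_time (GST_ELEMENT (src_pipeline),
                              GST_CLOCK_TIME_NONE);
  gst_element_set_start_time (GST_ELEMENT (rec_pipeline),
                              GST_CLOCK_TIME_NONE);
  gst_object_unref (clock);

  /* c) query the latency on the appsink (once it is PLAYING) and set
   * exactly those values as min/max latency on the appsrc */
  GstQuery *query = gst_query_new_latency ();
  if (gst_element_query (appsink, query)) {
    gboolean live;
    GstClockTime min, max;

    gst_query_parse_latency (query, &live, &min, &max);
    g_object_set (appsrc, "min-latency", (gint64) min,
                          "max-latency", (gint64) max, NULL);
  }
  gst_query_unref (query);
}
```

Both pipelines should be synchronized this way before data starts flowing; otherwise the record pipeline will interpret the two feeds' timestamps against different time bases.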

Currently I am assigned to some other requirement; I will post my reply as soon as I have tested this. Thanks again.