possible gstreamer scenario validation


possible gstreamer scenario validation

tasos
Hello.
I would like to ask an easy question (I think).
My scenario is the following:
I receive a stream (x264) which I edit via OpenCV.
I can send those edited images to FFmpeg and create a video which I can also stream. The problem is that I can't have audio.
As for the possible solutions...
Writing a program using FFmpeg's API is not something I could do right now, and I think I would face more problems during development.
I think that GStreamer could help me.
Of course, I have read that sync is very important and not very easy.
I was reading about appsink here:
> https://stackoverflow.com/questions/46219454/how-to-open-a-gstreamer-pipeline-from-opencv-with-videowriter
So appsink can take my edited images as if they were added to the pipeline, and then do whatever I want.
Possibly adding the audio that I want?
So my question is whether something like this is possible by only adding some audio elements to the pipeline.
Could you point me in some direction, or at least let me know if this is impossible via GStreamer?
Thank you very much!

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: possible gstreamer scenario validation

Nicolas Dufresne-5
On Thursday, 12 July 2018 at 00:27 +0300, tasos wrote:

> [...]
> I was reading about appsink here
> > https://stackoverflow.com/questions/46219454/how-to-open-a-gstreamer-pipeline-from-opencv-with-videowriter

This is a pretty neat approach. If streaming with RTP is what you are
looking for, you will likely stream the audio over a different port.
So you could add an audio branch to your appsrc ... ! udpsink pipeline.

To reuse the examples, assuming Linux:

writer.open("appsrc ! videoconvert ! jpegenc ! jpegparse ! rtpjpegpay pt=96 ! udpsink host=192.168.1.25 port=5000 "
            "pulsesrc ! opusenc ! rtpopuspay ! udpsink host=192.168.1.25 port=5002", 0, (double)30, cv::Size(640, 360), true);
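To make the string handling explicit (a minimal sketch in Python for illustration; the snippet above uses C++ and cv::VideoWriter), the video and audio branches combine into one pipeline description for GStreamer's parse-launch syntax. The whitespace between the two branches is easy to lose when concatenating adjacent string literals in C++; the element names and host/port values are the example values from the reply:

```python
HOST = "192.168.1.25"

# Video branch: OpenCV pushes frames into appsrc, which are JPEG-encoded
# and sent as RTP over UDP on port 5000.
video_branch = (
    "appsrc ! videoconvert ! jpegenc ! jpegparse "
    f"! rtpjpegpay pt=96 ! udpsink host={HOST} port=5000"
)

# Audio branch: capture from PulseAudio, Opus-encode, and send as RTP
# over UDP on a different port (5002).
audio_branch = f"pulsesrc ! opusenc ! rtpopuspay ! udpsink host={HOST} port=5002"

# Both unconnected branches live in one pipeline description; they must
# be separated by whitespace, not run together.
pipeline = video_branch + " " + audio_branch

print(pipeline)
```

With OpenCV built with GStreamer support, this string would be passed as the filename argument of the VideoWriter, as in the C++ snippet above; OpenCV feeds frames into the appsrc element while the pulsesrc branch captures and streams audio independently on its own port.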



Re: possible gstreamer scenario validation

tasos
Hello and thank you very much for replying.
I will try it asap and let you know.
Thanks again!

On 7/12/2018 2:29 AM, Nicolas Dufresne wrote:

> [...]
_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel