Hi,
I'm Ghizlane, and I just discovered the gstcef module, open source on GitHub.
It's really great, because it will help me a lot to capture both the audio and the video that I want to send to an RTSP server.
In RTSP, the audio and the video are not muxed together: an SDP description declares the audio and video streams in an initial exchange (still within RTSP), and each stream is then transmitted separately over RTP.
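To make the exchange concrete, here is a hypothetical SDP fragment of the kind the server would return in the DESCRIBE response; the payload types, clock rates, and control attributes below are illustrative, not taken from my actual server:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=CEF capture
t=0 0
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=control:streamid=0
m=audio 0 RTP/AVP 97
a=rtpmap:97 MPEG4-GENERIC/48000/1
a=control:streamid=1
```

Each m= line announces one media stream; the client then issues a SETUP per stream and each one flows over its own RTP session.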
Starting from the pipeline proposed in the project's tests, I tried several variants and arrived at the following pipeline:
gst-launch-1.0 cefsrc url=file:///..... ! queue max-size-buffers=500 ! cefdemux name=d d.video ! video/x-raw,format=BGRA,width=1280,height=720,framerate=30/1 ! queue max-size-buffers=500 ! videoconvert ! videorate max-rate=2500 ! videoscale ! x264enc bitrate=2500 tune=zerolatency speed-preset=superfast key-int-max=60 ! mux. d.audio ! audio/x-raw,channels=1,rate=48000 ! queue max-size-buffers=500 ! audioconvert ! audiorate ! voaacenc bitrate=96000 ! rtspclientsink debug=1 latency=0 stream-name="...." location=rtsp://....
You will recognize the beginning, but the end is split into two branches, one per stream: I use the cefdemux element because I have to encode the video to H.264 and the audio to AAC-LC separately.
For each stream, a separate RTSP request is made, but I do not think this approach is ideal.
In your opinion, is it possible to feed both converted A/V streams into a single rtspclientsink element, so that only one RTSP session is established?
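For reference, here is the kind of topology I am aiming for: a single rtspclientsink receiving both encoded branches on its request pads. This is an untested sketch, the `sink_0`/`sink_1` pad names and the placeholder URLs are my assumptions, and the parse elements are added only because I believe the sink prefers parsed streams:

```shell
# Untested sketch: one rtspclientsink, two request pads (assumed names sink_0/sink_1).
# URLs are placeholders; encoder settings are trimmed for readability.
gst-launch-1.0 \
  rtspclientsink name=sink latency=0 location=rtsp://... \
  cefsrc url=file:///... ! cefdemux name=d \
  d.video ! queue ! videoconvert ! x264enc tune=zerolatency ! h264parse ! sink.sink_0 \
  d.audio ! queue ! audioconvert ! voaacenc ! aacparse ! sink.sink_1
```

If that works, the sink should be able to announce both streams in a single SDP and negotiate everything over one RTSP session.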
Thank you very much for your reply.
Best regards
Ghizlane
_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel