Add audio and record RTP stream.

Add audio and record RTP stream.

Andrew Borntrager
Hi all! I have a working pipeline running on a Raspberry Pi:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! video/x-h264, width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue ! rtph264pay pt=127 config-interval=4 ! udpsink host=***********.ddns.net port=5000

I have a Windows laptop with this:

gst-launch-1.0 udpsrc caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sampling=(string)YCbCr-4:4:4, depth=(string)8, width=(string)320, height=(string)240, payload=(int)96, clock-base=(uint)4068866987, seqnum-base=(uint)24582" port=5000 ! rtph264depay ! decodebin ! queue ! autovideosink

This works very well. However, I would like to add audio (I'm using a Logitech C920 webcam). I would also like to record the incoming stream directly on the laptop (possibly with the tee element?). Any latency optimizations would be greatly appreciated, even if they mean momentary degradation of video. I'm something of a newbie, so copy/paste snippets that I can insert into my pipeline are also greatly appreciated!

Re: Add audio and record RTP stream.

Nicolas Dufresne-4
On Thursday, 7 July 2016 at 08:31 -0400, Andrew Borntrager wrote:

> Hi all! I have a working pipeline running on a Raspberry Pi:
>
> gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! video/x-h264,
> width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue !
> rtph264pay pt=127 config-interval=4 ! udpsink
> host=***********.ddns.net port=5000
>
> I have a Windows laptop with this:
>
> gst-launch-1.0 udpsrc caps="application/x-rtp, media=(string)video,
> clock-rate=(int)90000, encoding-name=(string)H264,
> sampling=(string)YCbCr-4:4:4, depth=(string)8, width=(string)320,
> height=(string)240, payload=(int)96, clock-base=(uint)4068866987,
> seqnum-base=(uint)24582" port=5000 ! rtph264depay ! decodebin !
> queue ! autovideosink
You should use rtpjitterbuffer and set its latency property, right
after each udpsrc. It will remove the burst effect and ensure sync
between audio and video.
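
For example, the receive side could then look roughly like this (only a sketch: the latency value, the payload type in the caps and the file name are placeholders to adjust, and matroskamux is just one convenient container for writing the H.264 stream to disk; -e makes Ctrl-C finalize the file). The tee at the end also covers the recording question:

  # display and record at the same time
  gst-launch-1.0 -e udpsrc port=5000 \
      caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)127" ! \
    rtpjitterbuffer latency=100 ! rtph264depay ! h264parse ! tee name=t \
    t. ! queue ! decodebin ! videoconvert ! autovideosink \
    t. ! queue ! matroskamux ! filesink location=recording.mkv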

>
> This works very well. However, I would like to add audio (I'm using a
> Logitech C920 webcam). I would also like to record the incoming stream
> directly on the laptop (possibly with the tee element?). Any latency
> optimizations would be greatly appreciated, even if they mean momentary
> degradation of video. I'm something of a newbie, so copy/paste snippets
> that I can insert into my pipeline are also greatly appreciated!

For the audio part, it's pretty much the same method. It depends on
your Raspberry Pi setup. There are two possible audio sources: alsasrc,
for which you need to set the device property based on the output of
"arecord -L", or pulsesrc, which also has a device property; the list
of sources can be obtained with "pactl list short sources". Most
people use Opus to encode the audio, and the clock rate is always 48000.
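
For example (again only a sketch; the device string is a placeholder you would replace with a name from the listing, and the port is just a second one you pick for the audio stream):

  # list capture devices first
  arecord -L                  # ALSA devices, for alsasrc
  pactl list short sources    # PulseAudio sources, for pulsesrc

  # audio-only send sketch with Opus
  gst-launch-1.0 -v pulsesrc device="..." ! audioconvert ! audioresample ! \
    opusenc ! rtpopuspay pt=96 ! udpsink host=***********.ddns.net port=5002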

cheers,
Nicolas

Re: Add audio and record RTP stream.

Andrew Borntrager
Thank you for the rapid response. I am trying to use the built-in microphone of the webcam. I obtained the device property using "pactl list short sources" (thanks for the great tip) and inserted it as follows:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! -e pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo" ! queue ! video/x-h264, width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue ! rtph264pay pt=127 config-interval=4 ! udpsink host=***********.ddns.net port=5000

Does anyone know where I went wrong?

Re: Add audio and record RTP stream.

Nicolas Dufresne-5
On Thursday, 7 July 2016 at 11:51 -0400, Andrew Borntrager wrote:
> gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! -e pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo" ! queue ! video/x-h264, width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue ! rtph264pay pt=127 config-interval=4 ! udpsink host=***********.ddns.net port=5000

You need to properly split the two graphs, and you'll need to use two
UDP ports (one per stream). An example:

gst-launch-1.0 -v \
  v4l2src device=/dev/video0 ! video/x-h264,width=1280,height=720,framerate=15/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink port=5001 \
  pulsesrc device="..." ! opusenc ! rtpopuspay pt=96 ! udpsink port=5002
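
A possible receive side on the laptop to go with that (again just a sketch; the ports, payload types and jitterbuffer latency are assumptions you would match to your sender):

  gst-launch-1.0 -v \
    udpsrc port=5001 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)127" ! \
      rtpjitterbuffer latency=100 ! rtph264depay ! decodebin ! videoconvert ! autovideosink \
    udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)OPUS, payload=(int)96" ! \
      rtpjitterbuffer latency=100 ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! autoaudiosink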

Re: Add audio and record RTP stream.

Marc Leeman

> You need to properly split the two graphs, and you'll need to use two
> UDP port (one per stream). An example:

Do not use odd ports for your data stream; see RFC 3550.

For smooth feedback, you will need to enable RTCP too; however, I do not
know of a gst-launch pipeline that you can build that is RFC compliant.

The examples that you can find in the rtpbin documentation use a port
combination that is not quite correct (this has to do with sockets
being reused for RTCP).

We do have convenience bins (rtpsrc/rtpsink) that can help you in that
respect; they are in Bugzilla, and I can send you an update from our
git if it would help you out.
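
For reference, wiring RTCP on the send side with rtpbin could look roughly like this (a sketch only, not guaranteed to be RFC compliant: RTP goes out on an even port with RTCP on the next odd one, and the port the sender listens on for RTCP receiver reports, 5005 here, is an arbitrary placeholder):

  # RTP on 5000, outgoing RTCP on 5001, incoming RTCP reports on 5005
  gst-launch-1.0 -v rtpbin name=rtpbin \
    v4l2src device=/dev/video0 ! video/x-h264,width=1280,height=720,framerate=15/1 ! \
      h264parse ! rtph264pay pt=127 config-interval=4 ! rtpbin.send_rtp_sink_0 \
    rtpbin.send_rtp_src_0  ! udpsink host=***********.ddns.net port=5000 \
    rtpbin.send_rtcp_src_0 ! udpsink host=***********.ddns.net port=5001 sync=false async=false \
    udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0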



Re: Add audio and record RTP stream.

Sebastian Dröge-3
On Fri, 2016-07-08 at 11:12 +0200, Marc Leeman wrote:

> > You need to properly split the two graphs, and you'll need to use two
> > UDP ports (one per stream). An example:
>
> Do not use odd ports for your data stream; see RFC 3550.
>
> For smooth feedback, you will need to enable RTCP too; however, I do
> not know of a gst-launch pipeline that you can build that is RFC
> compliant.
>
> The examples that you can find in the rtpbin documentation use a port
> combination that is not quite correct (this has to do with sockets
> being reused for RTCP).
There is no socket reuse in these examples, but that is done inside
rtspsrc and gst-rtsp-server to make NATs a bit happier.

What other than the ports is wrong in which examples? Can you provide
patches? :)

--

Sebastian Dröge, Centricular Ltd · http://www.centricular.com

Re: Add audio and record RTP stream.

Marc Leeman

> What other than the ports is wrong in which examples? Can you provide
> patches? :)

Only the ports! Only the ports!