Receiving RTP without udpsrc

Receiving RTP without udpsrc

wsnark
Hi,

I'm trying to build a GStreamer-based RTP receiver with a sandboxed architecture:
- A "listener" process listens on a UDP port and redirects the stream to the receiver.
- A "receiver" process runs in a sandbox without network access and gets the data from the "listener" over a pipe.

The reason for this architecture is increased security: the vulnerable parsing code runs with minimal privileges.

At the prototyping phase I'm trying to create a PoC using "gst-launch-1.0", but I cannot find a way to create a working pipeline that plays an RTP stream from a pipe instead of udpsrc.

For example, the usual udpsrc receiving pipeline, which works:
gst-launch-1.0 udpsrc  port=3000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! rtppcmudepay ! mulawdec ! pulsesink

The corresponding sending part is:
gst-launch-1.0 filesrc location="test.wav" ! wavparse ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=127.0.0.1 port=3000

Changing udpsrc to filesrc doesn't work:
gst-launch-1.0 filesrc location="/tmp/pipe" !  "application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! rtppcmudepay ! mulawdec ! pulsesink

Sending part:
gst-launch-1.0 filesrc location="test.wav" ! wavparse ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! filesink location=/tmp/pipe

The stream actually plays, but the sound is garbled. Error output:
WARNING: from element /GstPipeline:pipeline0/GstRtpPcmuDepay:rtppcmudepay0: Could not decode stream.
Additional debug info:
gstrtpbasedepayload.c(503): gst_rtp_base_depayload_handle_buffer (): /GstPipeline:pipeline0/GstRtpPcmuDepay:rtppcmudepay0:
Received invalid RTP payload, dropping

If I capture the incoming stream to a file, I'm unable to play it either (same behavior). If I remove the RTP elements from the pipeline, raw PCMU plays fine.

So my questions are:
1. Is it possible to play an RTP stream without udpsrc using gst-launch-1.0?
2. Is it possible to implement this in code, in my own application?

Thanks,
Wire Snark


_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: Receiving RTP without udpsrc

Peter Maersk-Moller-2
Hi

Using a named pipe is notoriously difficult to get working. The following pipeline using an unnamed pipe works:

gst-launch-1.0 audiotestsrc is-live=1 ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! fdsink fd=3 3>&1 1>&2 | gst-launch-1.0 fdsrc fd=0 !  "application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! queue ! rtpjitterbuffer ! rtppcmudepay ! mulawdec ! queue ! pulsesink 

That won't get you all the way to where you say you want to go. However, it is worth noting that fdsrc and fdsink are easy to get working.

I use xinetd (inetd's replacement). It listens for UDP packets on a port, starts a process, and forwards the packets to the stdin of that process. The process can then use GStreamer's fdsrc to receive the packets. Your pipeline can be sandboxed with chroot if set up correctly, and xinetd can be configured to accept packets only from the IP addresses you define.

I use this xinetd configuration (place it in /etc/xinetd.d/something-audio):

service snowmix-audio-feed-udp-1-1
{
        disable         = no
        type            = UNLISTED
        id              = snowmix-audio-feed-udp-1-1-stream
        socket_type     = dgram
        protocol        = udp
        user            = stream
        wait            = yes
        instances       = 1
        server          = /home/stream/Projects/Snowmix-0.5.2/xinetd/xinetd-audio-feed
        #server_args    = port format rate channels
        server_args     = 9100 vorbis/rtp 44100 1
        port            = 9100
        only_from       = 127.0.0.1/32 192.168.0.0/16
}

When xinetd receives a UDP packet on port 9100, it executes the script /home/stream/Projects/Snowmix-0.5.2/xinetd/xinetd-audio-feed with the four arguments "9100 vorbis/rtp 44100 1".

In my case, the script uses the arguments to set up a pipeline suitable for decoding and then executes the pipeline with "fdsrc fd=0" as the source of the stream.
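
A minimal Python sketch (hypothetical helper name, not Peter's actual script, which launches a GStreamer pipeline with "fdsrc fd=0") of what reading the inherited fd 0 amounts to. The key property is that a read() on a datagram socket returns exactly one packet per call, so the RTP packetisation is preserved:

```python
import os

def receive_datagrams(fd, max_dgram=65536):
    """Yield one UDP packet per read() from an inherited datagram socket.

    Unlike a byte-stream pipe, a datagram socket hands back exactly one
    packet per read, so packet boundaries survive.
    """
    while True:
        pkt = os.read(fd, max_dgram)
        if not pkt:  # zero-length read: treat as end of input
            break
        yield pkt
```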

I usually use an audio encoding and a packet format that are suitable and robust for streaming. You use ulaw. Some may say it is not the obvious first choice for robust streaming, but if it works for you and you are happy with it ...





Re: Receiving RTP without udpsrc

Tim Müller
On Sun, 2016-12-25 at 12:11 +0100, [hidden email] wrote:

Hi,

> At prototyping phase I'm trying to create a PoC using "gst-launch-
> 1.0", but I cannot find a way to create a working pipeline to play
> RTP stream from a pipe instead of udpsrc.
>  (...)
>
> Changing udpsrc to filesrc doesn't work:
> gst-launch-1.0 filesrc location="/tmp/pipe" !  "application/x-rtp,
> media=(string)audio, clock-rate=(int)8000, encoding-
> name=(string)PCMU" ! rtppcmudepay ! mulawdec ! pulsesink
>
> Sending part: 
> gst-launch-1.0 filesrc location="test.wav" ! wavparse ! audioconvert
> ! audioresample ! mulawenc ! rtppcmupay ! filesink location=/tmp/pipe
>
> (...)
> If I capture incoming stream to file, then I'm unable to play it
> either (same behavior). If I remove RTP elements from the pipeline,
> raw PCMU is played fine.
>
> So my questions are:
> 1. Is it possible to play RTP stream without udpsrc using gst-launch-
> 1.0?
> 2. Is it possible to implement this in code, in own application?

It is definitely possible to do this in code in a proper app.

With gst-launch-1.0 it's a bit more involved.

The reason replacing udpsrc with a filesrc or fdsrc doesn't work is
that with RTP/UDP the packetisation of the data matters. The RTP
elements need to know where a packet starts and ends, and by convention
one buffer represents one UDP/RTP packet received.

If you just dump the RTP data into a pipe or file, then you turn it
into a stream and the packetisation is lost.

You can use the rtpstreampay/rtpstreamdepay elements to maintain the
packetisation, but there are more things to look out for.
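
For background, rtpstreampay frames each RTP packet with a 2-byte big-endian length prefix (RFC 4571-style framing), which is what lets packet boundaries be recovered from a byte stream. A rough Python sketch of that framing (illustrative helper names, not the element's actual code):

```python
import struct

def frame(packet: bytes) -> bytes:
    # Prepend a 16-bit big-endian length, as in RFC 4571 framing.
    return struct.pack(">H", len(packet)) + packet

def deframe(stream: bytes) -> list:
    # Recover the original packet boundaries from a framed byte stream.
    packets, off = [], 0
    while off < len(stream):
        (length,) = struct.unpack_from(">H", stream, off)
        packets.append(stream[off + 2 : off + 2 + length])
        off += 2 + length
    return packets
```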

The RTP elements (esp. rtpjitterbuffer) expect that packets are
timestamped with the capture time, so that it can calculate clock drift
between sender and receiver. So even if you use rtpstreampay/depay that
won't give you timestamps. It'll still work for testing purposes to
some extent, but it won't work nicely in real-world scenarios.

In an app you could write some code to timestamp buffers coming out of
rtpstreamdepay with the clock running time. (ignoring the delay/jitter
between actual capture and sending it through the pipe).
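
The arithmetic behind that suggestion is simply "timestamp = current clock time minus the pipeline base time". A small, testable Python illustration with an injected clock (all names are hypothetical; a real app would read the pipeline clock and set the buffer PTS, e.g. from a pad probe):

```python
def make_timestamper(clock, base_time):
    """Return a function stamping each buffer with its running time.

    clock: a callable returning the current clock time (e.g. nanoseconds);
    base_time: the clock value when the pipeline started playing.
    """
    def stamp(buf):
        running_time = clock() - base_time  # approximates the capture time
        return (running_time, buf)
    return stamp
```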

In case it doesn't have to be a pipe, there's also e.g. shmsink/src:

gst-launch-1.0 audiotestsrc ! mulawenc ! rtppcmupay ! shmsink socket-path=/tmp/foo shm-size=1000000 -v

gst-launch-1.0 shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true ! 'application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU' ! rtpjitterbuffer latency=50 ! queue ! decodebin ! alsasink

If you want to handle the transport bits (pipe etc.) all yourself you
could also just inject buffers into a pipeline using an appsrc element.

Cheers
 -Tim

--
Tim Müller, Centricular Ltd - http://www.centricular.com

Re: Receiving RTP without udpsrc

Peter Maersk-Moller-2
Hi Tim

On Sun, Dec 25, 2016 at 1:48 PM, Tim Müller <[hidden email]> wrote:
> The reason replacing udpsrc with a filesrc or fdsrc doesn't work is
> that with RTP/UDP the packetisation of the data matters. The RTP
> elements need to know where a packet starts and ends, and by convention
> one buffer represents one UDP/RTP packet received.
>
> If you just dump the RTP data into a pipe or file, then you turn it
> into a stream and the packetisation is lost.

What you write is true. However, on Linux and OS X it appears that when you create two processes connected with an unnamed pipe, packetization seems to be maintained. It does not work with named pipes, where packetization is lost. Using xinetd/fdsrc or fdsink/fdsrc, and subsequently an unnamed pipe from stdout to stdin, whenever the sending process sends a packet, the reading process appears to read the data in the chunk size it was sent. Admittedly, I cannot find this substantiated in the original Unix specs, but it seems to work. It might, however, be a good idea for the writing process to flush its output after each packet. Would that be an idea for an option on fdsink?

> You can use the rtpstreampay/rtpstreamdepay elements to maintain the
> packetisation, but there are more things to look out for.
>
> The RTP elements (esp. rtpjitterbuffer) expect that packets are
> timestamped with the capture time, so that it can calculate clock drift
> between sender and receiver. So even if you use rtpstreampay/depay that
> won't give you timestamps. It'll still work for testing purposes to
> some extent, but it won't work nicely in real-world scenarios.

If the sender is sending live data (and sets the sync option), then on a non-overloaded receiver system, setting do-timestamp to true seems to make it work, although this is an approximation. But wouldn't audiorate fix this for raw data?

 
> In an app you could write some code to timestamp buffers coming out of
> rtpstreamdepay with the clock running time. (ignoring the delay/jitter
> between actual capture and sending it through the pipe).
>
> In case it doesn't have to be a pipe, there's also e.g. shmsink/src:

Can you sandbox the reader with chroot if you use shmsink/src? Doesn't it have to be able to read /dev/shm, or is that just something I imagine?
 
--Peter


Re: Receiving RTP without udpsrc

Sebastian Dröge-3
On Sun, 2016-12-25 at 14:52 +0100, Peter Maersk-Moller wrote:
> What you write is true, however on Linux and OS X, it appear that
> when you can create two processes connected with an unnamed pipe,
> packetization seems to be maintained

Not really: only if you're lucky enough that the scheduler gives both applications CPU time often enough, and evenly enough, that each write() on one side is paired with one read() on the other. It is definitely not guaranteed over pipes or any kind of stream socket/fd, and it will fail sooner or later if you just wait long enough. Don't write any application that depends on this behaviour.
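
This is easy to demonstrate in a single process: once two write()s have landed in the pipe buffer, a single read() returns both payloads merged, and the packet boundary is gone. A small Python sketch:

```python
import os

# Two separate "packets" written to a pipe...
r, w = os.pipe()
os.write(w, b"packet-one")
os.write(w, b"packet-two")
os.close(w)

# ...come back as one undifferentiated byte blob on the reading side.
data = os.read(r, 4096)
os.close(r)
```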

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com


Re: Receiving RTP without udpsrc

wsnark

Peter, Tim, Sebastian - thank you very much for the answers. It is clear to me now how to handle RTP from a non-udpsrc source.

I have played a bit with gst-launch-1.0 and rtpstreampay/rtpstreamdepay (without timestamps), and as Tim said, they work almost fine together with UDP on localhost. I'm going to do a few more tests over a Wi-Fi network in a demo environment.

About socket activation: thanks for pointing at xinetd, Peter. So far I had been considering only my own "proxy", but I actually need to consider xinetd and systemd for this purpose [1]. Though it looks like a custom proxy will be more flexible: it allows timestamps and custom transport.

About shmsink/src: I'm not good at shared-memory stuff. Is there any documentation on this plugin besides the very brief [2]?

About PCMU: it was used just as an example, though I suppose G.711 should be fairly good for a Wi-Fi/LAN voice path. BTW, what is recommended as best for sending/receiving voice on embedded Linux - Opus?

[1] http://0pointer.de/blog/projects/inetd.html
[2] https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-bad-plugins-shmsink.html

Thanks,
Wire Snark
