Hello everyone!
Within one project I've developed a GStreamer FEC plugin to boost multimedia app performance while streaming within an unreliable medium, e.g. a Wi-Fi network. In an unreliable medium there are packet losses, so the stream is produced with redundant packets (generated with Reed-Solomon) that are used to recover the original lost packets on the client side. FEC works on the RTP layer, so it is payload- and protocol-agnostic in the general case.
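Conceptually, the recovery on the receiver side works like erasure coding. Here is a minimal sketch in Python, using a single XOR parity packet as a simplified stand-in for Reed-Solomon (so only one loss per group is recoverable, whereas real Reed-Solomon with r parity packets tolerates up to r losses); the function names and k=4 group size are illustrative only:

```python
def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of k equal-size packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """received: dict {index: packet} with exactly one index missing out of
    range(k). XOR-ing the parity with all surviving packets rebuilds the
    missing one; returns the full ordered list of k packets."""
    k = len(received) + 1
    missing = next(i for i in range(k) if i not in received)
    rebuilt = bytearray(parity)
    for p in received.values():
        for i, b in enumerate(p):
            rebuilt[i] ^= b
    out = dict(received)
    out[missing] = bytes(rebuilt)
    return [out[i] for i in range(k)]

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]          # k = 4 data packets
fec = xor_parity(data)                               # 1 redundant packet
lost = {i: p for i, p in enumerate(data) if i != 2}  # packet 2 dropped
assert recover(lost, fec) == data                    # original stream restored
```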
For a better understanding, take a look at typical pipelines for streaming H.264 video:
Sender:
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! fecencoder name=fe k=16 r=8 ek=6 er=6 fe.src ! udpsink async=false fe.fec ! udpsink async=false port=5005
Receiver:
gst-launch-1.0 -v udpsrc ! .sink_src fecdecoder name=fd ! application/x-rtp ! rtpjitterbuffer latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink udpsrc port=5005 ! fd.sink_fec
Here there are two sockets between the sender and the receiver (SRC + FEC), so redundant packets are simply transmitted out-of-band. In this example 50% redundancy is applied and 50% packet loss is emulated. While a non-FEC-aware client becomes completely unusable under these conditions, a FEC-aware one recovers all lost packets, reproducing the original video.
Now I'm considering packing several elementary video and audio streams into an MPEG-2 transport stream or an MP4 container for streaming with FEC in the same manner. Due to the nature of the FEC implementation, it is desirable:
1. To have all elementary stream units be approximately the same size, or, alternatively, to force a constant bit rate.
2. To multiplex the elementary stream units in interleaved mode, i.e. A1 V1 A2 V2 ...
3. To insert header information periodically throughout the stream at a given frequency (like 'fragment-duration'), so a client can connect at any time.
4. To minimize latency wherever possible, for real-time purposes.
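To make point 2 concrete, the desired interleaving is a round-robin multiplex of the per-stream unit queues. A tiny illustrative sketch (not an actual muxer API, just the ordering I'm after):

```python
from itertools import chain, zip_longest

def interleave(*streams):
    """Round-robin multiplex several unit queues: A1 V1 A2 V2 ...
    Shorter queues simply stop contributing (None padding is dropped)."""
    return [u for u in chain.from_iterable(zip_longest(*streams))
            if u is not None]

audio = ["A1", "A2", "A3"]
video = ["V1", "V2", "V3"]
print(interleave(audio, video))  # ['A1', 'V1', 'A2', 'V2', 'A3', 'V3']
```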
How can these be achieved, e.g. for the MPEG-2 TS and MP4 formats?
Thanks in advance!