Decklink embedded audio via OPUS MPEG-TS and SRT

Fuzzy86
Hello GStreamer community,

I've been working on a gst-launch pipeline for a couple of days now. It's currently working in the form shown below; the basic gist is that an SDI input on a Decklink card is presented as an SRT listener, and an SRT caller at the other end receives that stream and pipes it to an SDI output on another Decklink card. It's a pretty basic encoder/decoder scenario. At the moment it works with 2 channels of audio and uses the Nvidia GPU encode/decode path in an MPEG-TS container.

# Testing SRT Sink with video and audio Decklink **works**
gst-launch-1.0 `
decklinkvideosrc device-number=0  mode=1080p50 ! deinterlace ! autovideoconvert ! nvh265enc preset=low-latency-hp rc-mode=cbr-ld-hq bitrate=5000  ! h265parse config-interval=-1 ! queue ! mux. `
decklinkaudiosrc device-number=0 channels=2 ! opusenc ! opusparse ! queue ! mux. `
mpegtsmux name=mux alignment=7 ! srtsink wait-for-connection=false mode=listener localport=1234 latency=0

# Testing SRT Source with video and audio GPU **works**
gst-launch-1.0  `
srtsrc uri=srt://127.0.0.1:1234 wait-for-connection=false latency=0 ! tsparse ! tsdemux latency=0 name=ts `
ts. ! queue ! h265parse config-interval=-1 ! nvh265dec ! autovideoconvert ! queue ! decklinkvideosink device-number=1 mode=1080p50 `
ts. ! queue ! opusparse ! opusdec ! audioconvert ! audioresample ! queue ! decklinkaudiosink device-number=1

As I mentioned, the above is working well. If there's anything the community finds sensible to add, I'm all ears.

Where I'm falling short is working out a way to enable at least 8, and ideally all 16, channels of audio from the SDI input, pass them through via Opus, and output them at the other end. I've been reading that this could be an issue with the Decklink presenting unordered PCM audio, and that I should use the "rawaudioparse" element. I'm completely lost as to where to start debugging this, how to see the relevant debug output, and how to formulate a plan from there.
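
For context, here's roughly the direction I've been imagining for the send-side audio branch. The channels=8 value comes from the decklinkaudiosrc property, but the audioconvert placement and the explicit 7.1 channel-mask capsfilter are guesses on my part to get opusenc to accept the unpositioned SDI channels (and I gather opusenc tops out at 8 channels, so 16 would presumably need a second Opus stream):

# Rough, untested sketch of an 8-channel audio branch for the sender,
# as a drop-in replacement for the decklinkaudiosrc line above.
# The channel-mask (standard 7.1 layout) is an assumption on my part.
decklinkaudiosrc device-number=0 channels=8 ! audioconvert ! `
"audio/x-raw,channels=8,channel-mask=(bitmask)0xc3f" ! `
opusenc ! opusparse ! queue ! mux. `

On the receive side I assume the only change would be forcing 8 channels into the sink, e.g. opusdec ! audioconvert ! "audio/x-raw,channels=8" ! decklinkaudiosink device-number=1, but again that's a guess.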

The pipeline above is working well and shows about 16 frames of end-to-end latency in an ideal loopback scenario, so I'm happy with that. But getting more than 2 channels of audio to flow has me really scratching my head.

I'm running GStreamer 1.18.4 on Windows 10 in PowerShell, with the latest Decklink and Nvidia drivers. As I said, that part seems to be working well.
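
For what it's worth, the only way I've found to dig into this so far is raising GST_DEBUG for the audio elements from PowerShell and running with -v to see the negotiated caps; if there's a better way to see why the multichannel negotiation fails, I'd love to hear it:

# Raise the log level for the audio elements (I'm assuming the debug
# category names match the element names) and send the log to a file,
# then run the sender pipeline above with -v added.
$env:GST_DEBUG = "2,decklinkaudiosrc:6,opusenc:6,audioconvert:6"
$env:GST_DEBUG_FILE = "gst-audio-debug.log"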

Thanks in advance!