So I'm struggling with this... Imagine a network of devices with microphones and speakers. Each device is running GStreamer. Devices can come and go over time, and the number of devices is not known in advance. Each device transmits its audio on 224.1.1.1:5000. Each device also receives multiple streams on 224.1.1.1:5000, uses gstrtpbin to demux them based on SSRC, and wants to feed them into an adder and sink them on an alsasink.

I understand there's a problem with 'adder' in that it wants to collect data from all pads before adding anything, which violates my usage model. I see also that 'liveadder' can accomplish this in theory.

So as a test, my RTP sources look like this:

  gst-launch audiotestsrc is-live=true freq=660 ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=8000 ! rtpL16pay ! udpsink host=224.1.1.1 port=5000

  gst-launch audiotestsrc is-live=true freq=330 ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=8000 ! rtpL16pay ! udpsink host=224.1.1.1 port=5000

And I'm using 'adder' in this manner:

  gst-launch-0.10 udpsrc multicast-group=224.1.1.1 port=5000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)L16, channels=(int)1" ! gstrtpbin ! rtpL16depay ! audioconvert ! adder ! alsasink

My problem is that it only 'works' if I start one of the sources first and then launch the adder pipeline; then I get a tone. If I launch the adder pipeline first and then launch a single audiotestsrc pipeline, I get a short 50 ms 'bleep' and then silence, presumably because adder is confused about the sources. I can't actually seem to add/remove RTP streams dynamically.

The same sort of thing happens with liveadder, except that it doesn't matter in which order I start the audiotestsrc stream. But as soon as I add the second stream, I get the following:

  $ gst-launch-0.10 udpsrc multicast-group=224.1.1.1 port=5000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)L16, channels=(int)1" ! gstrtpbin ! rtpL16depay ! audioconvert ! adder ! alsasink
  Setting pipeline to PAUSED ...
  Pipeline is live and does not need PREROLL ...
  Setting pipeline to PLAYING ...
  New clock: GstSystemClock
  ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
  Additional debug info:
  gstbasesrc.c(2507): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
  streaming task paused, reason not-linked (-1)
  Execution ended after 245538038 ns.
  Setting pipeline to PAUSED ...
  Setting pipeline to READY ...
  Setting pipeline to NULL ...
  Freeing pipeline ...

or more precisely:

  0:00:09.701670986 23790 0xb4d11818 LOG GST_SCHEDULING gstpad.c:4408:gst_pad_push_data:<rtpbin0:recv_rtp_src_0_4055336445_96> pushing, but it was not linked

So I speculate that what I need to do is build a new container that can catch on-new-ssrc and payload-type-change, which would allow me to construct the recv_rtp_src_%d_%d_%d pad and link it to something... but I don't know what. Anyway, I feel like I'm barking up the wrong tree, but it's a big forest and I can't make sense of the map. Does anyone have a better pipeline or approach to solving my problem?

_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/gstreamer-devel
Hi,
On Thu, 2010-10-14 at 17:25 -0400, Herb Peyerl wrote:
> $ gst-launch-0.10 udpsrc multicast-group=224.1.1.1 port=5000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)L16, channels=(int)1" ! gstrtpbin ! rtpL16depay ! audioconvert ! adder ! alsasink

You can't do that using gst-launch. You need to write a small program that first creates the first part of your pipeline (up to gstrtpbin) and then hooks up to the "pad-added" signal on gstrtpbin; when a new recv_rtp_src_* pad appears, you need to add the depayloader/audioconvert/liveadder/alsasink. And when a second one appears, you need to add a new depayloader/audioconvert and request a new pad from liveadder.

To remove timed-out sources, set the "autoremove" property on gstrtpbin to TRUE, then listen for the "pad-removed" signal on gstrtpbin and remove the following elements in that handler (you can also hook up to the "unlinked" signal on those created pads; it might be easier to handle).

You also want to use liveadder; I wrote it exactly for this case.

--
Olivier Crête
[hidden email]
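For readers following along, the recipe above can be sketched in C against the GStreamer 0.10 API. This is an untested sketch, not a verified program: element and signal names ("gstrtpbin", "liveadder", the "sink%d" request-pad template, the "autoremove" property) are taken from the 0.10-era documentation, error checking is omitted, and the "pad-removed" cleanup described above is left out for brevity.

```c
/* Untested sketch: dynamic multicast RTP receiver mixing all SSRCs
 * into one alsasink via liveadder (GStreamer 0.10 API). */
#include <gst/gst.h>

static GstElement *pipeline, *liveadder;

/* Called whenever gstrtpbin exposes a new pad; we only care about
 * the recv_rtp_src_%d_%d_%d pads it creates per SSRC. */
static void
on_pad_added (GstElement *rtpbin, GstPad *new_pad, gpointer user_data)
{
  GstElement *depay, *conv;
  GstPad *sinkpad, *srcpad, *mixpad;
  gchar *name = gst_pad_get_name (new_pad);
  gboolean is_rtp_src = g_str_has_prefix (name, "recv_rtp_src_");

  g_free (name);
  if (!is_rtp_src)
    return;

  /* Build a per-stream branch: depayloader -> audioconvert ... */
  depay = gst_element_factory_make ("rtpL16depay", NULL);
  conv  = gst_element_factory_make ("audioconvert", NULL);
  gst_bin_add_many (GST_BIN (pipeline), depay, conv, NULL);
  gst_element_link (depay, conv);

  /* ... and plug it into a freshly requested liveadder sink pad. */
  mixpad = gst_element_get_request_pad (liveadder, "sink%d");
  srcpad = gst_element_get_static_pad (conv, "src");
  gst_pad_link (srcpad, mixpad);
  gst_object_unref (srcpad);

  sinkpad = gst_element_get_static_pad (depay, "sink");
  gst_pad_link (new_pad, sinkpad);
  gst_object_unref (sinkpad);

  /* The pipeline is already PLAYING, so the new elements must be
   * brought up to its state or they will silently pass no data. */
  gst_element_sync_state_with_parent (depay);
  gst_element_sync_state_with_parent (conv);
}

int
main (int argc, char *argv[])
{
  GstElement *udpsrc, *rtpbin, *sink;
  GstCaps *caps;

  gst_init (&argc, &argv);
  pipeline  = gst_pipeline_new ("rx");
  udpsrc    = gst_element_factory_make ("udpsrc", NULL);
  rtpbin    = gst_element_factory_make ("gstrtpbin", NULL);
  liveadder = gst_element_factory_make ("liveadder", NULL);
  sink      = gst_element_factory_make ("alsasink", NULL);

  caps = gst_caps_from_string ("application/x-rtp, media=(string)audio, "
      "clock-rate=(int)8000, encoding-name=(string)L16, channels=(int)1");
  g_object_set (udpsrc, "multicast-group", "224.1.1.1", "port", 5000,
      "caps", caps, NULL);
  gst_caps_unref (caps);

  /* Remove pads for SSRCs that time out, per the advice above. */
  g_object_set (rtpbin, "autoremove", TRUE, NULL);

  gst_bin_add_many (GST_BIN (pipeline), udpsrc, rtpbin, liveadder, sink, NULL);
  gst_element_link_pads (udpsrc, "src", rtpbin, "recv_rtp_sink_0");
  gst_element_link (liveadder, sink);

  g_signal_connect (rtpbin, "pad-added", G_CALLBACK (on_pad_added), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```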
Hello Olivier,
I'm trying to write the small program you described in your reply, but I'm not succeeding. Perhaps you can help me. The elements in my pipeline are: udpsrc, gstrtpbin, rtpL16depay, audioconvert, audioresample, liveadder, alsasink sync=false.

I have tried two options; one works but is not what I need, and the other doesn't work.

Option 1: I create all elements in the main program and link them all except gstrtpbin to rtpL16depay. Then I set the pipeline to playing. In the "pad-added" handler I link the pads of gstrtpbin and rtpL16depay. This gives a working pipeline, but I cannot dynamically create a new (rtpL16depay, audioconvert, audioresample) branch for each new RTP stream.

Option 2: I create udpsrc, gstrtpbin, liveadder, and alsasink in the main program; I link udpsrc to gstrtpbin and liveadder to alsasink; then I connect a signal to gstrtpbin with liveadder as user data. Then I set the pipeline to playing. In the "pad-added" handler I create the rtpL16depay, audioconvert, and audioresample elements and link them together. Then I link audioresample to liveadder (passed via user data) and link the new gstrtpbin pad to the static sink pad of the new rtpL16depay element. I get no errors but also no sound in this option. I've tried setting the pipeline to PLAYING again in the "pad-added" handler, but that didn't help.

Do you see my problem, or do you have an example program for me?

Thanks,
Ernst-Jan
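[Editor's note: a frequent cause of Option 2's symptom (no errors, but no sound) is that elements created after the pipeline is already PLAYING start in the NULL state, and simply re-setting the pipeline to PLAYING does not raise them. The usual fix is to call gst_element_sync_state_with_parent() on each newly added element. The sketch below is untested and uses hypothetical variable names matching Option 2's description; `pipeline` is assumed to be accessible from the handler.]

```c
#include <gst/gst.h>

/* Assumed to be set up in main(), as in Option 2 above. */
extern GstElement *pipeline;

/* Untested sketch of the Option 2 "pad-added" handler; liveadder is
 * passed in as user data, as described above. */
static void
on_pad_added (GstElement *rtpbin, GstPad *new_pad, gpointer user_data)
{
  GstElement *liveadder = GST_ELEMENT (user_data);
  GstElement *depay, *conv, *resample;
  GstPad *sinkpad, *srcpad, *mixpad;

  depay    = gst_element_factory_make ("rtpL16depay", NULL);
  conv     = gst_element_factory_make ("audioconvert", NULL);
  resample = gst_element_factory_make ("audioresample", NULL);
  gst_bin_add_many (GST_BIN (pipeline), depay, conv, resample, NULL);
  gst_element_link_many (depay, conv, resample, NULL);

  /* Request a new mixer pad and hook the branch into liveadder. */
  mixpad = gst_element_get_request_pad (liveadder, "sink%d");
  srcpad = gst_element_get_static_pad (resample, "src");
  gst_pad_link (srcpad, mixpad);
  gst_object_unref (srcpad);

  sinkpad = gst_element_get_static_pad (depay, "sink");
  gst_pad_link (new_pad, sinkpad);
  gst_object_unref (sinkpad);

  /* Without these calls the new branch stays in NULL even though the
   * pipeline is PLAYING: no data flows, and no error is reported. */
  gst_element_sync_state_with_parent (depay);
  gst_element_sync_state_with_parent (conv);
  gst_element_sync_state_with_parent (resample);
}
```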
Hi.
I'm not saying I have the answer to your problem, but have a look at the RTP example code in the tests/examples/rtp folder of your gst-plugins-good checkout. You need to have an RTP server for each one of your sources (assuming they are on different machines).

Good luck,
Biloute