Good day guys,
So the current system is set up to push a stream from an ARM board to the webserver. On the board it is a simple rtspsrc from the camera into a udpsink. Then on the server a udpsrc receives the stream and does the work. This works a hundred percent with video-only streams. Now I am trying to add audio to this system.

On the camera you are able to send audio (from a mic) and video in the same RTSP stream. I know this works, as I am able to record an audio and video file locally on the board. Now, when sending this over UDP to the server, I am running into an issue. When setting up a remote recording and creating the following pipe:

                 | queue | rtph264depay | h264parse | queue \
    udpsrc | tee <                                            | mp4mux | filesink
                 | queue | rtpmpadepay | mpegaudioparse | queue /

I am getting a not-negotiated error when going from PAUSED to PLAYING:

    Debugging information: gstbasesrc.c(2950): gst_base_src_loop ():
    /GstPipeline:main-pipeline/GstUDPSrc:source:
    streaming stopped, reason not-negotiated (-4)

When removing just the audio or the video part of that pipeline I do not receive that error and it seems to be working fine. I did some checking and think it might be a capability problem, as the pads can't be negotiated between the elements. Is this true? And if it is, what could be a fix? Currently I am setting up my udpsrc as follows:

    g_object_set (gstSource,
        "port", moProfileInstance->getDestPort (),
        "close-socket", FALSE,
        "caps", gst_caps_new_simple ("application/x-rtp", NULL),
        NULL);

If the caps are not the problem, what can it be then? As I said, when not sending the stream over UDP and doing the recording on the board, that pipe works a hundred percent fine. For reference, here is the pipeline on the board:

    rtspsrc | queue | rtph264depay | h264parse | queue \
                                                        | mp4mux | filesink
    rtspsrc | queue | rtpmpadepay | mpegaudioparse | queue /

Any help or insight will be much appreciated.

Regards
DB
On Sun, 2016-11-13 at 22:26 -0800, debruyn wrote:
> When setting up a remote recording and creating the following pipe:
>
>                  | queue | rtph264depay | h264parse | queue \
>     udpsrc | tee <                                            | mp4mux | filesink
>                  | queue | rtpmpadepay | mpegaudioparse | queue /
>
> I am getting a not-negotiated error when going from PAUSED to PLAYING

You need some kind of "demuxer" element instead of a tee here, so that only audio goes to the audio branch, and only video goes to the video branch. Tee will send everything to both branches. Something like rtpptdemux or rtpssrcdemux might work for you here.

Sending both streams over different UDP ports might be a better solution though. Also note that this is not RTSP in your pipeline, just plain RTP, and to do things properly it is also missing a rtpjitterbuffer in front of each depayloader.
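A receiver for the separate-ports variant could look roughly like this (an untested sketch; the ports, the file name and the exact caps are assumptions, and the caps have to match what rtspsrc actually reports for each stream):

    udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
        ! rtpjitterbuffer ! rtph264depay ! h264parse ! queue ! mux.
    udpsrc port=5002 caps="application/x-rtp,media=audio,clock-rate=90000,encoding-name=MPA,payload=14" \
        ! rtpjitterbuffer ! rtpmpadepay ! mpegaudioparse ! queue ! mux.
    mp4mux name=mux ! filesink location=recording.mp4

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com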
Good day Sebastian,
It is sent together by selecting the 'audio and video' stream in the camera config of the IP camera, in this case a HIK camera. It seems to be encapsulated in the stream already when I receive it on the board. I will check your method, as I can see where I could have forgotten to 'depack' the RTP. I will let you know my findings.

Thanks for the reply
regards DB
In reply to this post by Sebastian Dröge-3
If you feel a demuxer is necessary, why does the following setup work:
    rtspsrc | queue | rtph264depay | h264parse | queue \
                                                        | mp4mux | filesink
    rtspsrc | queue | rtpmpadepay | mpegaudioparse | queue /

And this not?

                 | queue | rtph264depay | h264parse | queue \
    udpsrc | tee <                                            | mp4mux | filesink
                 | queue | rtpmpadepay | mpegaudioparse | queue /

Note I am also able to stream and record (video only) at the same time, and it works fine:

                 | queue | rtph264depay | h264parse | queue | mp4mux | filesink
    udpsrc | tee <
                 | queue | rtph264depay | avdec_h264 | theoraenc | oggmux | shout2send

As far as the rtpptdemux element goes, I get the same negotiation error, but now it happens where I attach the demuxer. The pipeline looks like this:

                 | queue | rtpptdemux | rtph264depay | h264parse | queue \
    udpsrc | tee <                                                        | mp4mux | filesink
                 | queue | rtpptdemux | rtpmpadepay | mpegaudioparse | queue /

Please, any advice would be useful.
On Mon, 2016-11-14 at 03:41 -0800, debruyn wrote:
> If you feel a demuxer is necessary, why does the following setup work:
>
>     rtspsrc | queue | rtph264depay | h264parse | queue \
>                                                         | mp4mux | filesink
>     rtspsrc | queue | rtpmpadepay | mpegaudioparse | queue /

rtspsrc does the "demuxing" internally already, or in the case of UDP transport it even goes via two different udpsrcs.

> Note I am also able to stream and record (video only) at the same time, and
> it works fine:
>
>                  | queue | rtph264depay | h264parse | queue | mp4mux | filesink
>     udpsrc | tee <
>                  | queue | rtph264depay | avdec_h264 | theoraenc | oggmux | shout2send

That one works because both branches can negotiate the same caps: both receive the same H264 RTP packets. The other one does not work because the tee would have to give two different caps, a different one to each branch.

> As far as the rtpptdemux element goes, I get the same negotiation error,
> but now it happens where I attach the demuxer.

Same thing here. Also you need to understand how the stream is muxed, to know whether you can demux by pt or by ssrc.

You also need to handle the signals on the element to provide the correct caps for each of the branches. You can't use these elements from gst-launch but need to write actual code.
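For rtpptdemux that code would be along these lines (a minimal sketch: the payload numbers, the video_queue/audio_queue branch elements and the error handling are illustrative assumptions):

    /* Provide the caps for each payload type the demuxer encounters */
    static GstCaps *
    on_request_pt_map (GstElement *demux, guint pt, gpointer user_data)
    {
      if (pt == 96)                 /* assumed video pt */
        return gst_caps_new_simple ("application/x-rtp",
            "media", G_TYPE_STRING, "video",
            "clock-rate", G_TYPE_INT, 90000,
            "encoding-name", G_TYPE_STRING, "H264", NULL);
      if (pt == 14)                 /* assumed audio pt */
        return gst_caps_new_simple ("application/x-rtp",
            "media", G_TYPE_STRING, "audio",
            "clock-rate", G_TYPE_INT, 90000,
            "encoding-name", G_TYPE_STRING, "MPA", NULL);
      return NULL;                  /* unknown pt */
    }

    /* Link each newly exposed pad to the matching branch */
    static void
    on_new_payload_type (GstElement *demux, guint pt, GstPad *pad,
        gpointer user_data)
    {
      GstElement *branch = (pt == 96) ? video_queue : audio_queue;
      GstPad *sinkpad = gst_element_get_static_pad (branch, "sink");

      if (gst_pad_link (pad, sinkpad) != GST_PAD_LINK_OK)
        g_warning ("could not link rtpptdemux pad for pt %u", pt);
      gst_object_unref (sinkpad);
    }

    /* ... after creating the elements ... */
    g_signal_connect (ptdemux, "request-pt-map",
        G_CALLBACK (on_request_pt_map), NULL);
    g_signal_connect (ptdemux, "new-payload-type",
        G_CALLBACK (on_new_payload_type), NULL);

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com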
Is there a way to determine if a stream is muxed via payload type or ssrc? Also, I am doing this in code; I just specified it that way to make it easier to read. And what element are you referring to, the tee? Is it possible to set the caps of individual src pads of the tee element?
On Mon, 2016-11-14 at 04:34 -0800, debruyn wrote:
> Is there a way to determine if a stream is muxed via payload type or ssrc?
> Also, I am doing this in code; I just specified it that way to make it
> easier to read.

The camera's documentation should ideally tell you that. Take a look at what you get sent in Wireshark for starters.

> And what element are you referring to, the tee? Is it possible to set the
> caps of individual src pads of the tee element?

No it isn't; tee is not a solution for what you want here. I meant rtpptdemux and rtpssrcdemux.
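For example (the interface and port here are placeholders for your setup), something like this prints the payload type and ssrc of every incoming packet:

    tshark -i eth0 -f "udp port 5000" -d udp.port==5000,rtp \
        -T fields -e rtp.p_type -e rtp.ssrc

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com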
So I have now replaced the tee element with rtpssrcdemux (as I believe the camera muxes it together via ssrc), but I seem to have problems connecting to the queue elements after it:
                          | queue | rtph264depay | h264parse | queue \
    udpsrc | rtpssrcdemux <                                           | mp4mux | filesink
                          | queue | rtpmpadepay | mpegaudioparse | queue /

I am trying to link it manually through the src_%u pad (of the demuxer) and the sink pad (of the queue) as follows:

    /* Link the elements created prior */
    GstPadTemplate *templateVTeePad;
    templateVTeePad = gst_element_class_get_pad_template (
        GST_ELEMENT_GET_CLASS (mgstTee), "src_%u");
    mpadVRecTee = gst_element_request_pad (mgstTee, templateVTeePad, NULL, NULL);
    debug ("Obtained request pad %s for %s video record tee branch",
        gst_pad_get_name (mpadVRecTee), moProfileInstance->getCamLabel ());
    mpadVRecPreQueue = gst_element_get_static_pad (mgstRecVideoPreQueue, "sink");
    if (gst_pad_link (mpadVRecTee, mpadVRecPreQueue) != GST_PAD_LINK_OK) {
      debug ("tee could not be linked at %s video queue",
          moProfileInstance->getCamLabel ());
      return FALSE;
    }

But I get the following errors:

    (vsr:7691): GStreamer-CRITICAL **: gst_element_request_pad: assertion 'templ->presence == GST_PAD_REQUEST' failed
    (vsr:7691): GStreamer-CRITICAL **: gst_object_get_name: assertion 'GST_IS_OBJECT (object)' failed
    1479129666 DEBUG CamPipeline : Obtained request pad (null) for TestCam video record tee branch
    (vsr:7691): GStreamer-CRITICAL **: gst_pad_link_full: assertion 'GST_IS_PAD (srcpad)' failed

Thanks for the help so far Sebastian, I seem to be understanding this a bit more.

regards DB
The name is still tee, but I did replace it with the correct element in the factory make; I just wasn't in the mood to replace it everywhere:
    mgstTee = gst_element_factory_make ("rtpssrcdemux", "linker");
On Mon, 2016-11-14 at 05:12 -0800, debruyn wrote:
> The name is still tee, but I did replace it with the correct element in the
> factory make; I just wasn't in the mood to replace it everywhere:
>
>     mgstTee = gst_element_factory_make ("rtpssrcdemux", "linker");

rtpssrcdemux does not have request pads but adds a new pad whenever a new ssrc is found. Also you still need a way to specify caps for each stream afterwards, which is why I would've expected the muxing to be done based on the pt... rtpptdemux has API for providing the caps for each stream. And different streams must have a different pt anyway; the ssrc is only there to distinguish different senders/sources.
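That is, instead of requesting a pad you wait for the demuxer to add one and link it from a callback. A minimal sketch (reusing your mgstTee variable; how you pick the target branch is up to you):

    /* Called whenever rtpssrcdemux exposes a pad for a newly seen ssrc */
    static void
    on_pad_added (GstElement *demux, GstPad *pad, gpointer user_data)
    {
      gchar *name = gst_pad_get_name (pad);

      /* Decide from the pad name or its caps which branch (video or
       * audio queue) this pad should be linked to, then gst_pad_link() it. */
      g_print ("demuxer added pad %s\n", name);
      g_free (name);
    }

    g_signal_connect (mgstTee, "pad-added", G_CALLBACK (on_pad_added), NULL);

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com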
So I am using the rtpptdemux now, but I am getting another negotiation error with the udpsrc. Currently it is created as follows:
    g_object_set (gstSource,
        "port", moProfileInstance->getDestPort (),
        "close-socket", FALSE,
        "caps", gst_caps_new_simple ("application/x-rtp", NULL),
        NULL);

Should I specify something else when linking it to rtpptdemux? The error I catch is:

    Debugging information: gstbasesrc.c(2950): gst_base_src_loop ():
    /GstPipeline:main-pipeline/GstUDPSrc:source:
    streaming stopped, reason not-negotiated (-4)

regards DB
On Tue, 2016-11-15 at 01:20 -0800, debruyn wrote:
> So I am using the rtpptdemux now, but I am getting another negotiation
> error with the udpsrc. Currently it is created as follows:
>
>     g_object_set (gstSource,
>         "port", moProfileInstance->getDestPort (),
>         "close-socket", FALSE,
>         "caps", gst_caps_new_simple ("application/x-rtp", NULL),
>         NULL);
>
> Should I specify something else when linking it to rtpptdemux?

Did you connect to the signal on rtpptdemux to provide the correct caps for each pt? If not, that's your problem. Also make sure to connect the pad for the right pt with the correct following elements.

Otherwise check the debug logs to see where the not-negotiated exactly comes from.
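For example (the binary name is taken from your log output; the category levels are just a starting point), raising the caps negotiation debug level will usually show which link rejects the caps:

    GST_DEBUG=3,GST_CAPS:6,basesrc:6 ./vsr 2> gst-debug.log

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com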
So after some probing I found the following regarding the capabilities that are being set.
In the following pipeline on the board:

    rtspsrc | udpsink

I connect the rtspsrc to the udpsink by listening for the pad-added signal. When I receive that signal I print out the current caps on the rtspsrc src pad in question. I got number 1 and then number 2:

1) Caps set to : application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264, profile-level-id=(string)420029, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z00AH5plAoAt/4C1AQEBQAAA+gAAF1w6GAG3gAG3eu8uNDADbwADbvXeXCg\=\,aO48gA\=\=", a-recvonly=(string)"", x-dimensions=(string)"1280\,720", ssrc=(uint)1342655012, clock-base=(uint)1601300418, seqnum-base=(uint)50561, npt-start=(guint64)0, play-speed=(double)1, play-scale=(double)1

2) Caps set to : application/x-rtp, media=(string)audio, payload=(int)14, clock-rate=(int)90000, encoding-name=(string)MPA, a-recvonly=(string)"", a-Media_header=(string)"MEDIAINFO\=494D4B48010100000400010000200110803E000000FA000000000000000000000000000000000000\;", a-appversion=(string)1.0, ssrc=(uint)1146891223, clock-base=(uint)1601303400, seqnum-base=(uint)55485, npt-start=(guint64)0, play-speed=(double)1, play-scale=(double)1

This told me that it is indeed two different sources that the camera sets up, because I got the pad-added signal twice.

In the following pipeline on the server:

    udpsrc | tee | queue | rtph264depay | avdec_h264 | theoraenc | oggmux | shout2send

I monitored the current caps on the udpsrc as the state changed to playing. The only caps I got from the src pad were:

    Caps set to : application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264

This led me to think that I never even pushed the audio to the server. So I then changed the pipe on the board to this:

    rtspsrc | queue | udpsink

and monitored the src pad of the queue, and the only caps I got were the video ones. I did not receive any audio caps.

Thus I want to ask if the following conclusion is sound: that I will need to parse the video and audio in two different pipelines and send them in two different streams to the server? Or is there a way to combine those two ssrcs on the board into one and send them together?
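For reference, the probe I used is essentially this (simplified; the handler name is illustrative):

    /* Print the caps of every pad rtspsrc adds */
    static void
    on_rtspsrc_pad_added (GstElement *src, GstPad *pad, gpointer user_data)
    {
      GstCaps *caps = gst_pad_get_current_caps (pad);

      if (caps != NULL) {
        gchar *str = gst_caps_to_string (caps);
        g_print ("Caps set to : %s\n", str);
        g_free (str);
        gst_caps_unref (caps);
      }
    }

Thanks for all the help thus far Sebastian, I really do appreciate it and you have been an immense help.

Regards DB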
On Wed, 2016-11-16 at 21:51 -0800, debruyn wrote:
> Thus I want to ask if the following conclusion is sound: that I will need
> to parse the video and audio in two different pipelines and send them in
> two different streams to the server? Or is there a way to combine those two
> ssrcs on the board into one and send them together?

You can combine them into the same stream and use rtpptdemux as explained before. You can also depayload and mux them into some container (MPEG-TS) and then pass that over RTP. Many possibilities. All this can happen in the same pipeline, even if you decide to send them out as separate streams.
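The container variant could look roughly like this (an untested sketch; the camera URL, server address, ports and file name are placeholders). On the board:

    rtspsrc location=rtsp://CAMERA/stream name=src \
        src. ! rtph264depay ! h264parse ! mux. \
        src. ! rtpmpadepay ! mpegaudioparse ! mux. \
        mpegtsmux name=mux ! rtpmp2tpay ! udpsink host=SERVER port=5000

And on the server:

    udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=MP2T" \
        ! rtpjitterbuffer ! rtpmp2tdepay ! tsdemux name=demux \
        demux. ! queue ! h264parse ! mux. \
        demux. ! queue ! mpegaudioparse ! mux. \
        mp4mux name=mux ! filesink location=recording.mp4

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com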