Audiomixer dropping input when mixing live and non-live sources

6 messages
Audiomixer dropping input when mixing live and non-live sources

Sean DuBois
Hey list!

I am attempting to combine a mixture of live and non-live sources, but I am having trouble
with the audiomixer dropping audio. The following is my example pipeline; the audio from my rtmpsrc is lost.
The rtmpsrc is 'live': it is an H.264/AAC FLV produced on the fly by a remote camera.

```
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  auto *loop = g_main_loop_new(NULL, FALSE);
  auto *pipeline = gst_parse_launch(
      "videotestsrc is-live=true ! compositor name=c ! video/x-raw,width=1280,height=720 ! queue ! autovideosink "
      "audiotestsrc volume=0.0 is-live=true ! audiomixer name=a ! queue ! autoaudiosink "
      "rtmpsrc location=\"rtmp://localhost/serve/live\" ! decodebin name=d ! videoconvert name=vconv ! queue ! c. d. ! audioconvert name=aconv ! queue ! a.",
      NULL);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(loop);

  return 0;
}
```

If I remove `is-live=true` from the videotestsrc and audiotestsrc, the audio works.
If I add latency=2000000000 to the compositor/audiomixer, the audio works.

However, I can't add the latency property, because other sources on the audiomixer/compositor (RTP) then break things very quickly.

One thing I do find peculiar is that the compositor always works; it is just empty. There is some difference in logic/state
between the audiomixer and the compositor (where the compositor is the well-behaved one).

I can also add a GST_PAD_PROBE_TYPE_BUFFER probe and add ~2 seconds to the PTS of the raw audio buffers on the audioconvert sink pad, and that fixes it as well.
However, I don't understand where those 2 seconds of delay come from. I would like to measure/understand before I resort to a hack
like that.

So if anyone has any ideas/can point out what I am doing wrong I would love to hear!

thanks
_______________________________________________
gstreamer-devel mailing list
[hidden email]
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel

Re: Audiomixer dropping input when mixing live and non-live sources

Nicolas Dufresne


On 12 March 2017 at 4:35 AM, "Sean DuBois" <[hidden email]> wrote:
> If I remove `is-live=true` from the videotestsrc and audiotestsrc, the audio works.
> If I add latency=2000000000 to the compositor/audiomixer, the audio works.
>
> However, I can't add the latency property, because other sources on the audiomixer/compositor (RTP) then break things very quickly.
Can you clarify how it fails for you? You need some latency for this to work, but 2 s might be too much. You would need enough latency on the RTP jitter buffer too.

> One thing I do find peculiar is that the compositor always works; it is just empty. There is some difference in logic/state
> between the audiomixer and the compositor (where the compositor is the well-behaved one).

Video is simpler to deal with, since frames can be repeated without anyone noticing.


> I can also add a GST_PAD_PROBE_TYPE_BUFFER probe and add ~2 seconds to the PTS of the raw audio buffers on the audioconvert sink pad, and that fixes it as well.
> However, I don't understand where those 2 seconds of delay come from. I would like to measure/understand before I resort to a hack
> like that.


Re: Audiomixer dropping input when mixing live and non-live sources

Sean DuBois
On Sun, Mar 12, 2017 at 10:08:16AM -0400, Nicolas Dufresne wrote:

Hi Nicolas, thanks for the quick response

> Can you clarify how it fails for you? You need some latency for this to
> work, but 2 s might be too much. You would need enough latency on the RTP
> jitter buffer too.

The audiomixer drops all incoming buffers from my rtmpsrc; the output is completely
silent (but if I add an audiotestsrc it works, proving that the audiomixer itself is fine).

How would I know the magnitude of latency I should be adding? It seems
variable; sometimes 2 seconds works and sometimes it doesn't. Is there a signal
from the audiomixer pad that I could watch, so I can raise the latency until it
works?

The other issue is that when I do add latency and RTP sources, the
compositor starts to behave very poorly. This makes sense to me: I don't want any
latency on my RTP input; if it doesn't arrive in time I am OK with
discarding it. However, for my RTMP input I know the video/audio is fine;
it just seems to have 'fallen behind', because a pad probe that adds 2
seconds to it fixes everything. I just don't know what I should be
measuring to figure out where those 2 seconds come from.


Here is my RTP input. However, even if I do get latency+RTP working, it
still doesn't change the issue that I don't know how much latency to
add.
```
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp" ! rtph264depay ! decodebin ! videoconvert ! compositor latency=5000000000 sink_1::xpos=500 name=c ! autovideosink videotestsrc is-live=true ! c.

gst-launch-1.0 videotestsrc ! x264enc speed-preset=veryfast ! rtph264pay ! udpsink port=5000 host="127.0.0.1"
```

Re: Audiomixer dropping input when mixing live and non-live sources

Sean DuBois
On Sun, Mar 12, 2017 at 01:41:31PM -0500, Sean DuBois wrote:


Also, here is what I mean by a pad probe that modifies timestamps. This
pipeline works, but I am not sure *why*. The ±3 seconds on audio/video
came from me tweaking things; I would love to know why those magnitudes
work (and then use the smallest number possible, or derive it from
the pipeline clock).

```
#include <gst/gst.h>

GstPadProbeReturn video_pad_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
  auto *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
  static gboolean first = true;
  if (first) {
    first = false;
  } else {
    GST_BUFFER_PTS(buffer) -= 3000000000;
  }

  return GST_PAD_PROBE_OK;
}

GstPadProbeReturn audio_pad_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
  auto *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
  static gboolean first = true;
  if (first) {
    first = false;
  } else {
    GST_BUFFER_PTS(buffer) -= 3000000000;
  }

  return GST_PAD_PROBE_OK;
}

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  auto *loop = g_main_loop_new(NULL, FALSE);
  auto *pipeline = gst_parse_launch(
      "flvmux name=mux ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! rtmpsink location=\"rtmp://localhost/inbound/test\" "
      "videotestsrc is-live=true ! compositor name=c ! video/x-raw,width=1280,height=720 ! queue ! x264enc speed-preset=veryfast tune=zerolatency ! queue ! mux. "
      "audiotestsrc is-live=true volume=0.0 ! audiomixer name=a ! queue ! voaacenc ! queue ! mux. "
      "rtmpsrc location=\"rtmp://localhost/serve/live\" ! decodebin name=d ! videoconvert name=vconv ! queue ! c. d. ! audioconvert name=aconv ! queue ! a.",
      NULL);

  gst_pad_add_probe(gst_element_get_static_pad(gst_bin_get_by_name(GST_BIN(pipeline), "aconv"), "sink"), GST_PAD_PROBE_TYPE_BUFFER, audio_pad_probe, nullptr, nullptr);
  gst_pad_add_probe(gst_element_get_static_pad(gst_bin_get_by_name(GST_BIN(pipeline), "vconv"), "sink"), GST_PAD_PROBE_TYPE_BUFFER, video_pad_probe, nullptr, nullptr);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(loop);

  return 0;
}
```

Re: Audiomixer dropping input when mixing live and non-live sources

Nicolas Dufresne
In reply to this post by Sean DuBois
On Sunday, 12 March 2017 at 13:41 -0500, Sean DuBois wrote:

> Hi Nicolas, thanks for the quick response
>
> > Can you clarify how it fails for you? You need some latency for this to
> > work, but 2 s might be too much. You would need enough latency on the RTP
> > jitter buffer too.
>
> > The audiomixer drops all incoming buffers from my rtmpsrc; the output is
> > completely silent (but if I add an audiotestsrc it works, proving that
> > the audiomixer itself is fine).
Interesting, that looks like everything is being dropped because it's late.
I'm wondering if rtmpsrc really implements a live source. A live
source would set timestamps based on arrival time, while this problem
seems to indicate that it sets timestamps from zero or something,
regardless of how much time it took to start. What you could try,
though, is to pause your pipeline first, wait a bit, and then start it.

>
> > How would I know the magnitude of latency I should be adding? It seems
> > variable; sometimes 2 seconds works and sometimes it doesn't. Is there a
> > signal from the audiomixer pad that I could watch, so I can raise the
> > latency until it works?

RTMP is TCP, so yes, it's variable and uncontrolled. Ideally, the
rtmpsrc element would compute the initial latency and advertise it
over the pipeline so it could all work. Then just a small amount of
latency in the mixer, to absorb jitter, would be sufficient.

>
> > The other issue is that when I do add latency and RTP sources, the
> > compositor starts to behave very poorly. This makes sense to me: I don't
> > want any latency on my RTP input; if it doesn't arrive in time I am OK
> > with discarding it. However, for my RTMP input I know the video/audio is
> > fine; it just seems to have 'fallen behind', because a pad probe that
> > adds 2 seconds to it fixes everything. I just don't know what I should
> > be measuring to figure out where those 2 seconds come from.
The mixing latency is there to compensate for small delays between inputs; 0 is a
little racy.

>
>
> > Here is my RTP input. However, even if I do get latency+RTP working, it
> > still doesn't change the issue that I don't know how much latency to add.
> > ```
> > gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp" ! rtph264depay ! decodebin ! videoconvert ! compositor latency=5000000000 sink_1::xpos=500 name=c ! autovideosink videotestsrc is-live=true ! c.
You are missing an rtpjitterbuffer. That means your packets could end up
in the wrong order, and the time information will be jittery too. This is far
from ideal when doing RTP. It will also make the compositor start
really early, which then makes the other streams appear late.
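For reference, a receive pipeline with a jitter buffer might look like this (a sketch: the latency values and the fuller RTP caps are assumptions to be tuned, not tested settings; rtpjitterbuffer's latency is in milliseconds, compositor's in nanoseconds):

```shell
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264" ! \
  rtpjitterbuffer latency=200 ! rtph264depay ! decodebin ! videoconvert ! \
  compositor latency=300000000 sink_1::xpos=500 name=c ! autovideosink \
  videotestsrc is-live=true ! c.
```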

>
> > gst-launch-1.0 videotestsrc ! x264enc speed-preset=veryfast ! rtph264pay ! udpsink port=5000 host="127.0.0.1"
> > ```

Re: Audiomixer dropping input when mixing live and non-live sources

Sean DuBois
On Sun, Mar 12, 2017 at 06:18:15PM -0400, Nicolas Dufresne wrote:

So this is probably an awful hack, but here is how I solved it (in case
anyone ends up here from a search engine).

Note that you MUST clean this up if you don't want it to leak
(gst_bin_get_by_name return values and small stuff like that). I use unique_ptr locally
but ripped it out for this example.

The only remaining issue is that audiorate/audioresample no longer work;
they complain about a discontinuity. I might end up starting another thread
about that.


```
#include <gst/gst.h>

#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

GstFlowReturn NewSample(GstElement *object, gpointer user_data) {
  GstBuffer *buff;
  GstSample *sample;

  if ((sample = gst_app_sink_pull_sample((GstAppSink *)object)) == NULL) {
    return GST_FLOW_EOS;
  }
  if ((buff = gst_sample_get_buffer(sample)) == NULL) {
    gst_sample_unref(sample);
    return GST_FLOW_EOS;
  }

  /* Keep the buffer alive before dropping the sample that owns it. */
  gst_buffer_ref(buff);
  gst_sample_unref(sample);

  /* Clear the FLV timestamps so the appsrc (do-timestamp=TRUE) restamps
   * the buffer with its arrival time. user_data is the matching appsrc. */
  GST_BUFFER_DURATION(buff) = GST_BUFFER_PTS(buff) = GST_BUFFER_DTS(buff) = GST_CLOCK_TIME_NONE;
  gst_app_src_push_buffer(GST_APP_SRC(user_data), buff);

  return GST_FLOW_OK;
}

void PadAdded(GstElement *object, GstPad *arg0, gpointer user_data) {
  auto *pipeline = (GstBin *)user_data;
  auto *caps = gst_pad_get_current_caps(arg0);
  g_autofree gchar *caps_string = gst_caps_to_string(caps);

  if (g_strrstr(caps_string, "video") != NULL) {
    g_object_set(gst_bin_get_by_name(pipeline, "h264_appsrc"), "caps", caps, nullptr);
    if (GST_PAD_LINK_FAILED(gst_pad_link(arg0, gst_element_get_static_pad(gst_bin_get_by_name(pipeline, "h264_queue"), "sink")))) {
      g_print("Video link failed \n");
    }
  } else {
    g_object_set(gst_bin_get_by_name(pipeline, "aac_appsrc"), "caps", caps, nullptr);
    if (GST_PAD_LINK_FAILED(gst_pad_link(arg0, gst_element_get_static_pad(gst_bin_get_by_name(pipeline, "aac_queue"), "sink")))) {
      g_print("Audio link failed \n");
    }
  }
}

void add_rtmp(GstBin *pipeline) {
  auto *rtmpsrc          = gst_element_factory_make("rtmpsrc", nullptr),
       *flvdemux         = gst_element_factory_make("flvdemux", nullptr),
       *h264_queue       = gst_element_factory_make("queue", "h264_queue"),
       *h264_appsink     = gst_element_factory_make("appsink", nullptr),
       *h264_appsrc      = gst_element_factory_make("appsrc", "h264_appsrc"),
       *avdec_h264       = gst_element_factory_make("avdec_h264", nullptr),
       *videoconvert     = gst_element_factory_make("videoconvert", nullptr),
       *video_queue      = gst_element_factory_make("queue", nullptr),
       *aac_queue        = gst_element_factory_make("queue", "aac_queue"),
       *aac_appsink      = gst_element_factory_make("appsink", nullptr),
       *aac_appsrc       = gst_element_factory_make("appsrc", "aac_appsrc"),
       *avdec_aac        = gst_element_factory_make("avdec_aac", nullptr),
       *audioconvert     = gst_element_factory_make("audioconvert", nullptr),
       *audio_queue      = gst_element_factory_make("queue", nullptr),
       *h264parse        = gst_element_factory_make("h264parse", nullptr);

  gst_bin_add_many(pipeline, rtmpsrc, flvdemux, h264_queue, h264_appsink, h264_appsrc, avdec_h264, videoconvert, aac_queue, aac_appsink, aac_appsrc, avdec_aac, audioconvert, audio_queue, video_queue, h264parse, nullptr);

  gst_element_link(rtmpsrc, flvdemux);
  gst_element_link(h264_queue, h264_appsink);
  gst_element_link(aac_queue, aac_appsink);

  gst_element_link_many(h264_appsrc, h264parse, avdec_h264, videoconvert, video_queue, gst_bin_get_by_name(pipeline, "c"), nullptr);
  gst_element_link_many(aac_appsrc, avdec_aac, audioconvert, audio_queue, gst_bin_get_by_name(pipeline, "a"), nullptr);

  g_object_set(rtmpsrc, "location", YOUR_RTMP_URL, nullptr);

  g_object_set(h264_appsink, "emit-signals", TRUE, nullptr);
  g_object_set(aac_appsink, "emit-signals", TRUE, nullptr);

  g_object_set(h264_appsrc, "is-live", TRUE, "do-timestamp", TRUE, nullptr);
  gst_util_set_object_arg(G_OBJECT(h264_appsrc), "format", "time");

  g_object_set(aac_appsrc, "is-live", TRUE, "do-timestamp", TRUE, nullptr);
  gst_util_set_object_arg(G_OBJECT(aac_appsrc), "format", "time");

  g_signal_connect(flvdemux, "pad-added", G_CALLBACK(PadAdded), pipeline);

  g_signal_connect(h264_appsink, "new-sample", G_CALLBACK(NewSample), h264_appsrc);
  g_signal_connect(aac_appsink, "new-sample", G_CALLBACK(NewSample), aac_appsrc);

}


int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  auto *loop = g_main_loop_new(NULL, FALSE);
  auto *pipeline = gst_parse_launch(
      "flvmux name=mux ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! rtmpsink location=\"rtmp://localhost/inbound/test\" "
      "videotestsrc is-live=true ! compositor name=c ! video/x-raw,width=1280,height=720 ! queue ! x264enc speed-preset=veryfast tune=zerolatency ! queue ! mux. "
      "audiotestsrc is-live=true volume=0.0 ! audiomixer name=a ! queue ! voaacenc ! queue ! mux. ", NULL);

  add_rtmp(GST_BIN(pipeline));
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(loop);

  return 0;
}
```