I develop in C an application relying on the audiomixer plugin. My aim is to
trigger sounds on events. As reactivity is critical, I wish to keep the
GStreamer pipeline alive and dynamically plug new sounds into the audiomixer
element (after applying an offset on the new sink pad of the mixer).

It works pretty well, except that each time I plug a new sound in, the first
200 ms (more or less) of the sound gets truncated. I've been looking into the
code and everything looks fine as far as I can see (no mistake in the sound
plugging code, nor in the running time / offset applied).

So I made a try with a simple gst-launch pipeline, with an offset applied on
the mixer sink pad:

gst-launch-1.0 filesrc location=/etc/pa/doublclick_aigu_grave.wav ! wavparse ! audiomixer sink_0::offset=1000000000 ! alsasink

It turned out that this pipeline truncates the beginning of the sound as
well. When I activated the logs with --gst-debug=alsa:5, I got a bunch of
logs before the sound started:

alsa gstalsasink.c:1054:gst_alsasink_write:<alsasink0> written 441 frames out of 441

What is going on here? Why is the sound truncated?
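For illustration, here is a simplified sketch of the kind of plugging code
involved (not the actual application code; "pipeline" and "mixer" are
placeholders and are assumed to already exist and be in PLAYING):

#include <gst/gst.h>

/* Simplified sketch: dynamically plug a new wav file into a running
 * audiomixer, offsetting the new sink pad by the current running time so
 * the sound starts "now" on the mixer's output timeline. */
static void
plug_sound (GstElement * pipeline, GstElement * mixer, const gchar * path)
{
  GstElement *src = gst_element_factory_make ("filesrc", NULL);
  GstElement *parse = gst_element_factory_make ("wavparse", NULL);
  GstPad *srcpad, *sinkpad;
  GstClock *clock;
  GstClockTime running_time;

  g_object_set (src, "location", path, NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, parse, NULL);
  gst_element_link (src, parse);

  /* request a new sink pad on the mixer and link the new branch to it */
  sinkpad = gst_element_get_request_pad (mixer, "sink_%u");
  srcpad = gst_element_get_static_pad (parse, "src");
  gst_pad_link (srcpad, sinkpad);

  /* compute the current running time and use it as the pad offset */
  clock = gst_element_get_clock (pipeline);
  running_time = gst_clock_get_time (clock) -
      gst_element_get_base_time (pipeline);
  gst_pad_set_offset (sinkpad, running_time);
  gst_object_unref (clock);

  /* bring the new elements up to the pipeline state */
  gst_element_sync_state_with_parent (src);
  gst_element_sync_state_with_parent (parse);

  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);
}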
I have run into similar issues. It seems like the clocks provided by sound
sinks are broken in some situations. Try adding provide-clock=false to your
alsasink. The pipeline will then use the system clock, which behaves better
in my case.
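For your test pipeline, that would be for instance:

gst-launch-1.0 filesrc location=/etc/pa/doublclick_aigu_grave.wav ! wavparse ! audiomixer sink_0::offset=1000000000 ! alsasink provide-clock=false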
Thank you for your help.

I made a try with provide-clock=false, but it has no effect on the sample
pipeline run with gst-launch. I also tried the change in my app; in that
case it truncates every sound, the first one and the following ones (before,
it used to play the first wav file correctly and then truncate the following
ones).

Regarding issue 788362, it concerns a broken clock on pause/resume. In my
case the pipeline remains in PLAYING, so I don't know whether the two issues
are related.
On Monday, December 4, 2017 at 01:53 -0700, toub wrote:
> My aim is to trigger sounds on events. As reactivity is critical, I wish
> to keep the GStreamer pipeline alive and dynamically plug new sounds into
> the audiomixer element (after applying an offset on the new sink pad of
> the mixer).
> [...]
> So I made a try with a simple gst-launch pipeline, with an offset applied
> on the mixer sink pad:

I do the same in one of my applications. You need to subtract (using pad
offsets) some of the latency, otherwise the data will be late and hence
dropped. It won't be a surprise if I tell you that alsasink's default
configured latency (see the buffer-time property) is 200 ms.
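In code that could look roughly like this (a sketch only: "alsasink",
"mixer_sinkpad" and "running_time" stand for the sink element, the newly
requested mixer pad and the offset your application already computes, and
the exact amount/direction of the compensation may need tuning for your
pipeline):

#include <gst/gst.h>

/* Sketch: read alsasink's configured buffer-time (200 ms by default) and
 * take it into account in the offset applied on the new mixer sink pad,
 * so the first buffers of the new sound are not late and dropped. */
static void
apply_offset_with_sink_latency (GstElement * alsasink, GstPad * mixer_sinkpad,
    GstClockTime running_time)
{
  gint64 buffer_time_us = 0;
  GstClockTime sink_latency;

  /* the buffer-time property is expressed in microseconds */
  g_object_get (alsasink, "buffer-time", &buffer_time_us, NULL);
  sink_latency = buffer_time_us * GST_USECOND;

  /* schedule the new sound slightly later than "now" so that the sink
   * latency is absorbed instead of clipping the start of the sound
   * (adjust the sign/amount to what works in your pipeline) */
  gst_pad_set_offset (mixer_sinkpad, running_time + sink_latency);
}

Nicolas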
I already apply an offset on the mixer sink pad. When a new playback is
required, I compute the running time and apply it as the offset on the new
mixer sink pad.

I'll give that a try, but I'm not sure it will be sufficient, as sometimes
the sound is truncated by much more than 200 ms (sometimes more than 1 s).

By the way, is there any means to modify the pipeline latency computation?
On Tuesday, December 5, 2017 at 08:49 -0700, toub wrote:
> By the way, is there any means to modify the pipeline latency computation?

Yes. For the global latency, see gst_pipeline_set_latency(); for per-sink
latency, you have to override the "do-latency" signal. This is typically
done by copying over the GstBin implementation and modifying it for your
needs. You can then have per-sink latency.

Per-sink latency is needed if you want to do something like this:

Host 1: src -> tee -> speaker
                   -> network
Host 2: netsrc -> speaker

and you want both the local speaker and the remote speaker to play back at
the same time (all assuming you have a shared clock). By overriding the
do-latency signal, you can set extra latency on the Host 1 speaker sink,
latency that corresponds to the expected time for the data to be streamed
from the network sink to the Host 2 speaker.
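For the global case it is basically a one-liner, e.g. (where "pipeline" is
your GstPipeline and 200 ms is only an example value):

  /* configure a fixed 200 ms latency on the pipeline */
  gst_pipeline_set_latency (GST_PIPELINE (pipeline), 200 * GST_MSECOND);

Nicolas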
I made a try with gst_pipeline_set_latency(). The first sound played is
truncated; the next one (started while the first is still playing) is
delayed a little but not truncated. However, as the sound is sometimes
truncated by much more than 200 ms, I don't think setting the latency this
way is a solution.

In any case, why do I have to set a latency? As far as I know, buffers
produced by the mixer at a time T are not delayed before arriving at
alsasink.

Also, I could not find how to use the do-latency signal to modify the alsa
sink. Could you give me a sample?

Thanks in any case,

Étienne
On Thursday, December 7, 2017 at 02:45 -0700, toub wrote:
> In any case, why do I have to set a latency? As far as I know, buffers
> produced by the mixer at a time T are not delayed before arriving at
> alsasink.

As you have live sources, you need latency: by the time we have captured the
audio, the data is already late. The latency is the amount of time you give
to your pipeline to transport data from source to sink and render it, plus
the extra time needed to synchronize with the sink that renders last.

> Also, I could not find how to use the do-latency signal to modify the alsa
> sink. Could you give me a sample?

There is no example that I know of; I usually start from the default
implementation:

https://cgit.freedesktop.org/gstreamer/gstreamer/tree/gst/gstpipeline.c#n619
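Adapted to the per-sink case, a handler could look roughly like this (an
untested sketch: the extra 500 ms and the way the sink is passed in are
placeholders you would replace with your own values):

#include <gst/gst.h>

/* Sketch of a custom "do-latency" handler: query the latency as the default
 * implementation does, distribute it to the whole bin, then push extra
 * latency to one specific sink. Per the GstBin documentation, only one
 * handler is invoked for this signal, so connecting one replaces the
 * default behaviour. */
static gboolean
on_do_latency (GstBin * bin, gpointer user_data)
{
  GstElement *local_sink = GST_ELEMENT (user_data);
  GstQuery *query = gst_query_new_latency ();
  GstClockTime min_latency = 0;
  gboolean res;

  res = gst_element_query (GST_ELEMENT (bin), query);
  if (res) {
    gboolean live;
    GstClockTime max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
  }
  gst_query_unref (query);

  if (!res)
    return FALSE;

  /* configure the computed latency on every sink in the bin ... */
  gst_element_send_event (GST_ELEMENT (bin),
      gst_event_new_latency (min_latency));

  /* ... then give one particular sink extra latency (placeholder value) so
   * it waits for the slower/remote path before rendering */
  gst_element_send_event (local_sink,
      gst_event_new_latency (min_latency + 500 * GST_MSECOND));

  return TRUE;
}

/* connected on the pipeline with something like:
 *   g_signal_connect (pipeline, "do-latency",
 *       G_CALLBACK (on_do_latency), local_sink);
 */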
Nicolas Dufresne wrote:
> > In any case, why do I have to set a latency? As far as I know, buffers
> > produced by the mixer at a time T are not delayed before arriving at
> > alsasink.
>
> As you have live sources, you need latency: by the time we have captured
> the audio, the data is already late. [...]

But in my case there are no live sources, only filesrc elements which are
linked into the pipeline at random times. Should I consider these sources as
live sources?

> > Also, I could not find how to use the do-latency signal to modify the
> > alsa sink. Could you give me a sample?
>
> There is no example that I know of; I usually start from the default
> implementation:
>
> https://cgit.freedesktop.org/gstreamer/gstreamer/tree/gst/gstpipeline.c#n619

OK, I'll try to adapt the default implementation next week. What latency
should I apply? It turns out that the longer it takes before a new sound is
triggered, the more truncated the sound is. So I expect that the latency
should be adapted dynamically, but I cannot see how I could compute the
latency to apply if it is not constant.