Below is the 'theoretical' pipeline that should cancel a particular user's audio contribution in an audio conference mixer. The theory goes like this: we invert the user's audio samples and add the inverted copy to the mixer output, so the user's own contribution should cancel out. However, I can't figure out why it doesn't work in the pipeline below. The idea of the mixer is that it sums all the users' audio contributions, and when streaming back to an individual user, their own contribution is cancelled with an 'invert' + 'adder' pair.

I suspect clocking. Or is it because these pipelines are separate, i.e. not in a single pipeline?

Readable representation of the pipeline:

gst-launch
  audiotestsrc name="sinewave" wave=sine ! tee name="audio_in_user1"
  audio_in_user1. ! queue ! audioconvert ! amixer.sink0
  audiotestsrc wave=ticks ! queue ! audioconvert ! amixer.sink2
  adder name="amixer" ! tee name="mixerout"
  mixerout. ! queue ! audio_out_user1.sink1
  audio_in_user1. ! queue ! audioinvert degree=1 ! audioconvert ! audio_out_user1.sink1
  adder name="audio_out_user1" ! alsasink

Copy-paste executable representation:

gst-launch audiotestsrc name="sinewave" wave=sine ! tee name="audio_in_user1" audio_in_user1. ! queue ! audioconvert ! amixer.sink0 audiotestsrc wave=ticks ! queue ! audioconvert ! amixer.sink2 adder name="amixer" ! tee name="mixerout" mixerout. ! queue ! audio_out_user1.sink1 audio_in_user1. ! queue ! audioinvert degree=1 ! audioconvert ! audio_out_user1.sink1 adder name="audio_out_user1" ! alsasink

A sample pipeline that works according to the above theory; it has only one audio source, and that source is cancelled in the adder.

With audioinvert degree=1 (full inversion, complete cancellation):

gst-launch audiotestsrc name="sinewave" wave=sine ! tee name="audiosource" audiosource. ! queue ! audioconvert ! adder.sink0 audiosource. ! queue ! audioinvert degree=1 ! audioconvert ! adder.sink1 adder name="adder" ! alsasink

With audioinvert degree=0.55 (partial inversion):

gst-launch audiotestsrc name="sinewave" wave=sine ! tee name="audiosource" audiosource. ! queue ! audioconvert ! adder.sink0 audiosource. ! queue ! audioinvert degree=0.55 ! audioconvert ! adder.sink1 adder name="adder" ! alsasink

_______________________________________________
gstreamer-devel mailing list
[hidden email]
http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
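A quick way to see why clocking matters here: phase-inversion cancellation only works if the mixed signal and the inverted copy are sample-exact aligned. The sketch below (plain Python with a hypothetical 1 kHz tone at 8 kHz, not GStreamer code) shows that inversion cancels perfectly when aligned, but a single sample of skew, which two separate, unsynchronized pipelines can easily introduce, leaves a large residual.

```python
import math

# Hypothetical 1 kHz sine at an 8 kHz sample rate, 64 samples.
RATE = 8000
FREQ = 1000.0
N = 64
signal = [math.sin(2 * math.pi * FREQ * i / RATE) for i in range(N)]

# Perfectly aligned: each sample plus its inverted copy cancels exactly.
inverted = [-s for s in signal]
aligned_residual = max(abs(a + b) for a, b in zip(signal, inverted))

# Misaligned by just one sample, as separate pipelines might deliver it:
# cancellation fails badly.
delayed_inverted = [0.0] + inverted[:-1]
misaligned_residual = max(abs(a + b) for a, b in zip(signal, delayed_inverted))

print(aligned_residual)     # 0.0 -- perfect cancellation
print(misaligned_residual)  # about 0.707 -- a large fraction of full scale
```

So even if both branches carry mathematically correct data, any drift between the two pipelines destroys the cancellation; this supports the clocking suspicion.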
This is not an easy problem to solve. Your solution assumes that there isn't any delay, so you can invert the sample fast enough. But if you can identify the participant's input, why not simply mute that input?

On Sun, May 5, 2013 at 3:26 AM, Althaf K Backer <[hidden email]> wrote:
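To illustrate the mute suggestion: dropping user k's input before mixing yields exactly the sum of everyone else's contributions, with no alignment requirement at all. A minimal sketch with made-up integer samples (the user names and values are hypothetical, not from the thread):

```python
# "Mute instead of cancel": for user k, build the mix with user k's
# input simply left out. Hypothetical sample data.
users = {
    "user1": [10, -3, 7, 0],
    "user2": [1, 2, 3, 4],
    "user3": [-5, 5, -5, 5],
}

def mix_without(muted, users):
    """Sum all contributions except the muted user's."""
    length = len(next(iter(users.values())))
    return [sum(s[i] for name, s in users.items() if name != muted)
            for i in range(length)]

print(mix_without("user1", users))  # [-4, 7, -2, 9]
```

The trade-off raised later in the thread is that this needs one such mix per user, which is what the master-adder proposal tries to avoid.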
The architecture is such that we mix all the participants' audio and stream the mix back to each of them, negating their own contribution. The issue is not about muting the participant; yes, it can be done that way. However, for such an implementation we need n adders for n users, and for the n-th user's adder we simply do not insert that user's own contribution. This method is inefficient for a large group of clients, say 1000+, and not to mention that Linux has a limit on the number of threads a process can have.

[user-1]-[audio]-[tee0]
[user-2]-[audio]-[tee1]
[user-3]-[audio]-[tee2]
-------------------------------
[tee2(src0)] -> [user-1-adder] -> [tee3(src0)] ->
-------------------------------
[tee1(src0)] -> [user-2-adder] -> [tee3(src1)] ->
-------------------------------
[tee1(src1)] -> [user-3-adder] -> [tee2(src1)] ->
-------------------------------

^^^ This method is inefficient, and gets worse as the number of users grows. The method I propose has a master adder, which sums up all the audio contributions, and each user then has an adder + invert, which as I understand it is much less resource-consuming than the former. Yes, I do see your point about the timing. I'm still rethinking a new design for this.

On Tue, May 21, 2013 at 11:44 PM, Chuck Crisler <[hidden email]> wrote:
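Arithmetically, the proposed master-adder design is a per-user subtraction: each user's return mix is the master sum minus their own contribution, so it needs one full N-way adder plus one invert+adder per user, instead of one (N-1)-way adder per user. A hedged sketch with hypothetical sample values, valid only if every branch stays sample-aligned (which is exactly the timing problem discussed above):

```python
# Hypothetical per-user sample data.
users = {
    "user1": [10, -3, 7, 0],
    "user2": [1, 2, 3, 4],
    "user3": [-5, 5, -5, 5],
}
length = 4

# Master adder: the sum over all users, computed once.
total = [sum(s[i] for s in users.values()) for i in range(length)]

# Per-user output: add the inverted own contribution to the master mix,
# i.e. total - own. One shared sum plus N cheap subtractions.
def mix_for(name):
    return [t - s for t, s in zip(total, users[name])]

# The equivalent result from the "n adders" scheme, for comparison:
def mix_without(muted):
    return [sum(s[i] for n, s in users.items() if n != muted)
            for i in range(length)]

for name in users:
    assert mix_for(name) == mix_without(name)
print(mix_for("user1"))  # [-4, 7, -2, 9]
```

With ideal, aligned samples the two schemes produce identical mixes; the practical difficulty is keeping the inverted branch and the master mix aligned in real time.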
Hello Althaf,
I am looking at a similar use case. Can you please share whether you have a stable design for this, and any example of one? Hopefully you have built one by now, since it has been so long.

Thanks
Hey there,
Just wanted to bump this thread in case you had found a solution, because I am also building a similar application and haven't yet found a viable answer to audio mixing in a conference with GStreamer and WebRTC.

Thanks