Hi all,
I am experimenting with a pipeline on a Texas Instruments DM365, along the lines of: alsasrc -> audio encoder -> appsink, where appsink grabs the encoded data and sends it out over RTP. The audio encoder is mulawenc, but modified to use the Ittiam G.711 codec.

Now, with the "top" command, I can see that CPU utilization is low (0.2%) for a period of time, then high (around 90%) for a while, and this high-low CPU utilization cycle repeats.

As I understand it, the "chain" function in mulawenc accepts data pushed by alsasrc at a frequency of about 400 times per second, for 64 kbps G.711 and a 160-byte frame size. I think this is a killer for performance.

I am quite new to GStreamer, so I hope someone can help me with this problem.

Regards,

Rafael
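For reference, a minimal sketch of what the appsink side of such a pipeline typically looks like (GStreamer 0.10 API; send_over_rtp is only a placeholder for the application's own RTP code, not a real function from this project):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Placeholder for the application's own RTP sender (not GStreamer API). */
static void
send_over_rtp (const guint8 *data, guint size)
{
  (void) data;
  (void) size;
}

/* Pull encoded G.711 frames from appsink and hand them to the RTP sender.
 * With 160-byte frames this loop wakes up once per frame, so any per-buffer
 * overhead upstream shows up as the kind of CPU pattern described above. */
static void
pull_loop (GstAppSink *sink)
{
  GstBuffer *buf;

  while ((buf = gst_app_sink_pull_buffer (sink)) != NULL) {   /* blocks */
    send_over_rtp (GST_BUFFER_DATA (buf), GST_BUFFER_SIZE (buf));
    gst_buffer_unref (buf);
  }
  /* NULL means EOS or the pipeline was shut down. */
}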
On 01.02.2011 02:41, mingqian Han wrote:
> Hi all,
> I am experimenting with a pipeline on a Texas Instruments DM365, along the
> lines of: alsasrc -> audio encoder -> appsink, where appsink grabs the
> encoded data and sends it out over RTP.
> The audio encoder is mulawenc, but modified to use the Ittiam G.711 codec.

G.711 is a trivial codec. Please also compare the performance against the unmodified G.711 encoder that comes with GStreamer. Next, compare the results of using fakesink instead of appsink. Having a high CPU load while actually doing something is not bad in itself.

If you want to make the encoding more bursty, you could use bigger buffers on the audio source side and output several frames in one go from the audio encoder (using a buffer list).

Stefan

> Now, with the "top" command, I can see that CPU utilization is low (0.2%)
> for a period of time, then high (around 90%) for a while, and this
> high-low CPU utilization cycle repeats.
> As I understand it, the "chain" function in mulawenc accepts data pushed
> by alsasrc at a frequency of about 400 times per second, for 64 kbps G.711
> and a 160-byte frame size. I think this is a killer for performance.
> I am quite new to GStreamer, so I hope someone can help me with this
> problem.
>
> Regards,
>
> Rafael
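For illustration, a rough sketch of both suggestions using the GStreamer 0.10 API; the element pointer, pad name and frame count are placeholders rather than mulawenc's actual internals:

#include <gst/gst.h>

/* Larger ALSA buffers, e.g. 40 ms periods instead of the 10 ms default
 * (both properties are in microseconds; the values are only examples). */
static void
use_bigger_buffers (GstElement *alsasrc)
{
  g_object_set (alsasrc,
                "latency-time", (gint64) 40000,
                "buffer-time", (gint64) 800000,
                NULL);
}

/* Push several already-encoded frames downstream in one go using a
 * GstBufferList instead of one gst_pad_push() per 160-byte frame. */
static GstFlowReturn
push_frames (GstPad *srcpad, GstBuffer **frames, guint n_frames)
{
  GstBufferList *list = gst_buffer_list_new ();
  GstBufferListIterator *it = gst_buffer_list_iterate (list);
  guint i;

  gst_buffer_list_iterator_add_group (it);
  for (i = 0; i < n_frames; i++)
    gst_buffer_list_iterator_add (it, frames[i]);   /* list takes ownership */
  gst_buffer_list_iterator_free (it);

  return gst_pad_push_list (srcpad, list);          /* one push downstream */
}

As far as I know, if a downstream element has no buffer-list support the core simply falls back to pushing the buffers one by one, so the gain is mainly fewer pushes and wakeups on the source/encoder side.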
I am encountering a very similar problem. When using the command "gst-launch -v alsasrc ! mulawenc ! rtppcmupay ! fakesink silent=true", the CPU usage rises to 97% after a while (a few seconds) and later drops back to 0 (e.g. after 60-100 seconds).
I guess the problem is somewhere in alsasrc, because different latency-time and buffer-time values have a linear influence on the CPU usage. For example, with "gst-launch -v alsasrc latency-time=20000 buffer-time=400000 ! mulawenc ! rtppcmupay ! fakesink silent=true" CPU usage rises to 49% after a while, and with "gst-launch -v alsasrc latency-time=40000 buffer-time=800000 ! mulawenc ! rtppcmupay ! fakesink silent=true" it rises to 25%.

Does someone have a clue how to identify and solve the problem?

I am using a DM6446 with a TI AIC33 audio codec, running a vendor-specific 2.6.27 kernel (based on http://gitorious.org/linux-davinci/linux-davinci/trees/v2.6.27-davinci1 with the ALSA driver patched up to cb6e2063697e91ca6983f9fe6958d20469b43641 from the alsa-kernel tree [2008-11-18]).

Advanced Linux Sound Architecture Driver Version 1.0.17.
ASoC version 0.13.2
AIC3X Audio Codec 0.2
asoc: aic3x <-> davinci-i2s mapping ok
ALSA device list:
  #0: DaVinci EVM (aic3x)

GStreamer version: 0.10.25
Hi,
I have two audio pipelines (actually two playbin2 instances, and I'd like to keep them); I want to send one to the left audio channel and one to the right. Does anyone know a way to accomplish this?

FYI: I'm fiddling with the audio-sink property of playbin2 and linking to an {audiopanorama + autoaudiosink} bin, with no success so far.

Thanks for any suggestion

Federico
In reply to this post by abue.gst.dev@googlemail.com
Is there PulseAudio on your system?

If so, try device=hw:0,0 on alsasrc. When PulseAudio is in the path, it usually does format conversion and resampling, which consumes CPU.
2012/4/10 [hidden email] <[hidden email]>:
> I am encountering a very similar problem. When using the command "gst-launch ...
There is no PulseAudio running, but plughw is used as the default device.
Unfortunately I can't use "hw:0" as the device, because the ALSA driver only offers 2-channel access while my GStreamer pipeline requires mono (1 channel). Therefore I already tried to create a virtual ALSA device to avoid the stereo-to-mono conversion done by plughw:

$ nano /etc/asound.conf

# create a virtual one-channel device out of the real two-channel device
# to check whether this has lower CPU usage than using plughw
pcm.split_2_to_1 {
    type route;
    slave.pcm "hw:0,0";
    slave.channels 2;
    ttable.0.0 1;
    ttable.0.1 1;
}

I then used this virtual device with the command "gst-launch -v alsasrc device=split_2_to_1 ! mulawenc ! rtppcmupay ! fakesink silent=true". Unfortunately this did not help either.

> Is there PulseAudio on your system?
>
> If so, try device=hw:0,0 on alsasrc. When PulseAudio is in the path, it
> usually does format conversion and resampling, which consumes CPU.
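Another untested variant that might be worth a try is to open the raw hw device and let audioconvert do the stereo-to-mono downmix inside the pipeline instead:

gst-launch -v alsasrc device=hw:0 ! audioconvert ! audio/x-raw-int,channels=1 ! mulawenc ! rtppcmupay ! fakesink silent=true

audioconvert only covers the channel downmix here; if the hardware cannot capture at 8 kHz natively, an audioresample element would still be needed, with the corresponding CPU cost.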
In reply to this post by Aihua Zhao
To check if the alsasrc element of GStreamer is the problem, I used
"arecord -r 8000 -c 1 -f S16_LE --period-time=10000 --buffer-time=200000 > /dev/null" for comparison, and I also could see a similar problem: after some (more) seconds the CPU usage raises to >90% and drops to 0% afterwards again (but it stays at >90% for only ca. 33s vs ca. 78s when using gstreamer pipe ). I just monitored the problem with top and vmstat - vmstat log for arecord example can be seen at http://pastebin.com/H3LRtyPg When using direct hw access by starting the command "arecord -D hw:0 -r 8000 -c 2 -f S16_LE --period-time=10000 --buffer-time=200000 > /dev/null" I also can reproduce the problem, but it seems to be not that hard: http://pastebin.com/a5xaP049 The fact that the problem also occurs when using arecord tells me, that there might be also a problem in the ALSA driver or in the ALSA library. What I can't really understand is, why it has a more critical effect when using gstreamer pipe, especially why a pipe with udpsink keeps CPU usage at >90% whereas a gstreamer pipe with fakesink lets CPU usage drop down again after a while? _______________________________________________ gstreamer-devel mailing list [hidden email] http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel |
In reply to this post by Federico Zamperini
On 04/10/2012 05:50 PM, Federico Zamperini wrote:
> Hi,
> I have two audio pipelines (actually two playbin2 instances, and I'd
> like to keep them); I want to send one to the left audio channel and
> one to the right.
> Does anyone know a way to accomplish this?
>
> FYI: I'm fiddling with the audio-sink property of playbin2 and linking to
> an {audiopanorama + autoaudiosink} bin, with no success so far.

What is not working? gst-launch playbin2 uri=... audio-sink="audiopanorama panorama=1.0 ! autoaudiosink" plays the audio fully right for me (and -1.0 fully left).

Stefan

> Thanks for any suggestion
>
> Federico
In reply to this post by Aihua Zhao
I just did some tests with other frameworks. When using streamutil from
pjproject-1.10 (pjsip.org), the problem is not reproducible. So it seems PJMedia accesses the ALSA driver in a different way than GStreamer does. I'll have to check this in detail.
In reply to this post by Stefan Sauer
On 15/04/2012 21:31, Stefan Sauer wrote:
> On 04/10/2012 05:50 PM, Federico Zamperini wrote:
>> Hi,
>> I have two audio pipelines (actually two playbin2 instances, and I'd
>> like to keep them); I want to send one to the left audio channel and
>> one to the right.
>> Does anyone know a way to accomplish this?
>>
>> FYI: I'm fiddling with the audio-sink property of playbin2 and linking to
>> an {audiopanorama + autoaudiosink} bin, with no success so far.
>
> What is not working? gst-launch playbin2 uri=...
> audio-sink="audiopanorama panorama=1.0 ! autoaudiosink" plays the audio
> fully right for me (and -1.0 fully left).
>
> Stefan

Yes, the gst-launch pipeline was working, but the pipeline in my code wasn't. That's because I forgot to ghost the audiopanorama sink pad; now it works in my code as well. For posterity's sake, here's the code (error checking removed):

/* build a custom audio sink bin: audiopanorama ! autoaudiosink */
GstElement *autoaudiosink = gst_element_factory_make("autoaudiosink", "autoaudiosink");
GstElement *audiopanorama = gst_element_factory_make("audiopanorama", "audiopanorama");
GstElement *myaudiosink = gst_bin_new("myaudiosink");
gst_bin_add_many(GST_BIN(myaudiosink), audiopanorama, autoaudiosink, NULL);
gst_element_link(audiopanorama, autoaudiosink);

/* expose the audiopanorama sink pad as the bin's sink pad via a ghost pad */
GstPad *pad = gst_element_get_static_pad(audiopanorama, "sink");
gst_element_add_pad(myaudiosink, gst_ghost_pad_new("sink", pad));
gst_object_unref(GST_OBJECT(pad));

/* hand the bin to playbin2 as its audio sink */
GstElement *pipeline = gst_element_factory_make("playbin2", player->name);
g_object_set(G_OBJECT(pipeline), "audio-sink", myaudiosink, NULL);

Thank you for your reply.

Cheers

Federico
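One detail the snippet above leaves out, for anyone finding this later: the actual left/right split is done by setting the panorama property on each bin's audiopanorama (illustrative, reusing the variable name from the code above):

/* hard left for the first playbin2's bin; use 1.0 in the second bin for hard right */
g_object_set(G_OBJECT(audiopanorama), "panorama", -1.0, NULL);

"panorama" is a float property in the range -1.0 to 1.0, so the value has to be given as a floating-point literal.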