Trouble using x264enc with a tee


Trouble using x264enc with a tee

JonathanHenson
I am writing a multiplexed video/audio streaming thread for a video server I am working on. I was having problems, so I am testing that the pipeline works by writing the data to a file. However, the end goal will be to use a multifdsink to send the stream to a socket, so you will notice that the multifdsink element is actually a filesink for the time being. This pipeline works with other encoders, but when I use x264enc for the video encoder, the pipeline freezes and no data is written to the file. Also, there is a tee in both the audio and video portions of the pipeline so that other threads can grab the raw buffers if they need to access the data. That way only one thread is ever accessing the camera. If I remove the tee in the video pipeline, the pipeline works. I have also tested that if I put an xvimagesink on both branches of the tee, both windows get the stream, so I am pretty sure that the tee is not the problem. Thanks, here is the class implementation.

/*
 * H264Stream.cpp
 *
 *  Created on: Nov 12, 2010
 *      Author: jonathan
 */

#include "H264Stream.h"

H264Stream::H264Stream() : PThread (1000, NoAutoDeleteThread, HighestPriority, "H264Stream"),
    encoding(false)
{
    //temporary setting of variables
    width = 352;
    height = 288;
    fps = 25;

    audioChannels = 2;
    audioSampleRate = 8000;
    bitWidth = 16;

    //create pipeline
    h264Pipeline = gst_pipeline_new("h264Pipeline");

    //--------------------------------create videoPipe Elements--------------------------------

    //raw camera source
    v4l2Src = gst_element_factory_make("v4l2src", "v4l2Src");

    //Text Filters
    chanNameFilter = gst_element_factory_make("textoverlay", "chanNameOverlay");
    osdMessageFilter = gst_element_factory_make("textoverlay", "osdOverlay");
    sessionTimerFilter = gst_element_factory_make("textoverlay", "sessionTimerOverlay");

    //raw video caps
    GstCaps* rawVideoCaps = gst_caps_new_simple ("video/x-raw-yuv",
        "format", GST_TYPE_FOURCC, 0x30323449,
        "width", G_TYPE_INT, width,
        "height", G_TYPE_INT, height,
        "framerate", GST_TYPE_FRACTION, fps, 1, NULL);

    GstCaps* h264VideoCaps = gst_caps_new_simple ("video/x-h264",
        "framerate", GST_TYPE_FRACTION, fps, 1,
        "width", G_TYPE_INT, width,
        "height", G_TYPE_INT, height, NULL);

    //video tee
    videoTee = gst_element_factory_make("tee", "videoTee");

    //create tee src 1 receiver (videoSink)
    videoSink = gst_element_factory_make("appsink", "videoSink");

    //create tee src 2 receiver (videoQueue)
    videoQueue = gst_element_factory_make("queue", "videoQueue");
    videoAppSinkQueue = gst_element_factory_make("queue", "videoAppSinkQueue");

    //create h264 Encoder
    videoEncoder = gst_element_factory_make("x264enc", "h264Enc");

    //--------------------------------create audioPipe elements--------------------------------

    //create Alsa Source
    alsaSrc = gst_element_factory_make("alsasrc", "alsaSrc");

    //create raw Audio Caps
    GstCaps* rawAudioCaps = gst_caps_new_simple("audio/x-raw-int",
        "channels", G_TYPE_INT, audioChannels,
        "rate", G_TYPE_INT, audioSampleRate,
        "width", G_TYPE_INT, bitWidth,
        "depth", G_TYPE_INT, bitWidth,
        "endianness", G_TYPE_INT, 1234, NULL);

    volume = gst_element_factory_make("volume", "volume");

    //create audio tee
    soundTee = gst_element_factory_make("tee", "audioTee");

    //create element to receive tee source #1 (audioSink)
    soundSink = gst_element_factory_make("appsink", "audioSink");

    //create element to receive tee source #2 (audioQueue)
    soundQueue = gst_element_factory_make("queue", "audioQueue");
    soundAppSinkQueue = gst_element_factory_make("queue", "soundAppSinkQueue");

    //create an audio encoder to use when ready.
    soundEncoder = gst_element_factory_make("ffenc_mp2", "audioEncoder");

    //--------------------------------Create Multiplexing Elements--------------------------------

    //create multiplexer (currently avi)
    multiplexer = gst_element_factory_make("avimux", "multiplexer");

    //create multifdsink
    multifdSink = gst_element_factory_make("filesink", "multiFDSink");
    g_object_set (G_OBJECT (multifdSink), "location", "/home/jonathan/test.avi", NULL);

    //--------------------------------LINKERUP!--------------------------------

    //add all elements (except for the audio encoder, as it isn't used yet) to the pipeline
    gst_bin_add_many (GST_BIN (h264Pipeline), v4l2Src, chanNameFilter, osdMessageFilter,
        sessionTimerFilter, videoQueue, videoAppSinkQueue, videoTee, videoSink, videoEncoder,
        alsaSrc, volume, soundTee, soundSink, soundQueue, soundAppSinkQueue,
        multiplexer, multifdSink, NULL);

    //link video source with text overlay surfaces
    bool link = gst_element_link_filtered(v4l2Src, chanNameFilter, rawVideoCaps);
    link = gst_element_link_filtered(chanNameFilter, osdMessageFilter, rawVideoCaps);
    link = gst_element_link_filtered(osdMessageFilter, sessionTimerFilter, rawVideoCaps);

    //link raw video with text to tee
    link = gst_element_link_filtered(sessionTimerFilter, videoTee, rawVideoCaps);

    //link video Tee to both videoSink and videoEncoder. To do this, we must request pads.
    //this pad is for the tee -> videoSink connection
    GstPad* videoSrcAppSinkPad = gst_element_get_request_pad(videoTee, "src%d");
    //this pad is for the tee -> queue connection
    GstPad* videoSrcH264Pad = gst_element_get_request_pad(videoTee, "src%d");

    //get static pads for the sinks receiving the tee
    GstPad* videoSinkAppSinkPad = gst_element_get_static_pad(videoAppSinkQueue, "sink");
    GstPad* videoSinkH264Pad = gst_element_get_static_pad(videoQueue, "sink");

    //link the pads
    GstPadLinkReturn padLink;
    padLink = gst_pad_link(videoSrcAppSinkPad, videoSinkAppSinkPad);
    padLink = gst_pad_link(videoSrcH264Pad, videoSinkH264Pad);

    gst_object_unref (GST_OBJECT (videoSrcAppSinkPad));
    gst_object_unref (GST_OBJECT (videoSrcH264Pad));
    gst_object_unref (GST_OBJECT (videoSinkAppSinkPad));
    gst_object_unref (GST_OBJECT (videoSinkH264Pad));

    link = gst_element_link_filtered(videoAppSinkQueue, videoSink, rawVideoCaps);
    link = gst_element_link_filtered(videoQueue, videoEncoder, rawVideoCaps);

    //We are done with the video part of the pipe for now. Now we link the sound elements together.
    //link the alsa source to the volume element
    link = gst_element_link_filtered(alsaSrc, volume, rawAudioCaps);

    //link output from volume to soundTee
    link = gst_element_link_filtered(volume, soundTee, rawAudioCaps);

    //link audio Tee to both audioSink and multiplexer (when we do audio encoding we can do this
    //with audioEncoder instead). To do this, we must request pads.
    //this pad is for the tee -> audioSink connection
    GstPad* audioSrcAppSinkPad = gst_element_get_request_pad(soundTee, "src%d");
    //this pad is for the tee -> queue connection
    GstPad* audioSrcQueuePad = gst_element_get_request_pad(soundTee, "src%d");

    //get pads for the sinks receiving the tee
    GstPad* audioSinkAppSinkPad = gst_element_get_static_pad(soundAppSinkQueue, "sink");
    GstPad* audioSinkQueuePad = gst_element_get_static_pad(soundQueue, "sink");

    //link the pads
    padLink = gst_pad_link(audioSrcAppSinkPad, audioSinkAppSinkPad);
    padLink = gst_pad_link(audioSrcQueuePad, audioSinkQueuePad);

    gst_object_unref (GST_OBJECT (audioSrcAppSinkPad));
    gst_object_unref (GST_OBJECT (audioSrcQueuePad));
    gst_object_unref (GST_OBJECT (audioSinkAppSinkPad));
    gst_object_unref (GST_OBJECT (audioSinkQueuePad));

    link = gst_element_link_filtered(soundAppSinkQueue, soundSink, rawAudioCaps);

    //Now we multiplex the two parallel streams. To do this, we must request pads from the multiplexer.
    //this pad is for the audioQueue -> multiplexer connection
    GstPad* audioSinkPad = gst_element_get_request_pad(multiplexer, "audio_%d");
    //this pad is for the videoEncoder -> multiplexer connection
    GstPad* videoSinkPad = gst_element_get_request_pad(multiplexer, "video_%d");

    //get pads for the sources sending to the multiplexer
    GstPad* audioSrcPad = gst_element_get_static_pad(soundQueue, "src");
    GstPad* videoSrcPad = gst_element_get_static_pad(videoEncoder, "src");

    //do h264 caps negotiation
    //gst_pad_set_caps(videoSrcPad, h264VideoCaps);
    //gst_pad_set_caps(videoSinkPad, h264VideoCaps);

    //link the pads
    padLink = gst_pad_link(audioSrcPad, audioSinkPad);
    padLink = gst_pad_link(videoSrcPad, videoSinkPad);

    gst_object_unref (GST_OBJECT (audioSrcPad));
    gst_object_unref (GST_OBJECT (audioSinkPad));
    gst_object_unref (GST_OBJECT (videoSrcPad));
    gst_object_unref (GST_OBJECT (videoSinkPad));

    //finally we link the multiplexed stream to the multifdsink
    link = gst_element_link(multiplexer, multifdSink);

    gst_caps_unref(rawVideoCaps);
    gst_caps_unref(rawAudioCaps);
    gst_caps_unref(h264VideoCaps);
}

H264Stream::~H264Stream()
{
    for(std::map<int, ClientSocket*>::iterator pair = streamHandles.begin(); pair != streamHandles.end(); pair++)
    {
        g_signal_emit_by_name(multifdSink, "remove", pair->first, NULL);
        delete pair->second;
    }

    streamHandles.clear();

    gst_element_set_state (h264Pipeline, GST_STATE_NULL);
    gst_object_unref (GST_OBJECT (h264Pipeline));
}

void H264Stream::Main()
{
    while(true)
    {
        PWaitAndSignal m(mutex);
        if(encoding)
        {
            OSDSettings osd;

            if(osd.getShowChanName())
            {
                g_object_set (G_OBJECT (chanNameFilter), "silent", false , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "text", osd.getChanName().c_str() , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "halignment", osd.getChanNameHAlign() , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "valignment", osd.getChanNameVAlign() , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "wrap-mode", osd.getChanNameWordWrapMode() , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "font-desc", osd.getChanNameFont().c_str() , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "shaded-background", osd.getChanNameShadow() , NULL);
            }
            else
            {
                g_object_set (G_OBJECT (chanNameFilter), "text", "" , NULL);
                g_object_set (G_OBJECT (chanNameFilter), "silent", true , NULL);
            }

            if(osd.getShowOSDMessage())
            {
                g_object_set (G_OBJECT (osdMessageFilter), "silent", false , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "text", osd.getOSDMessage().c_str() , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "halignment", osd.getOSDMessageHAlign() , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "valignment", osd.getOSDMessageVAlign() , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "wrap-mode", osd.getOSDMessageWordWrapMode() , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "font-desc", osd.getOSDMessageFont().c_str() , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "shaded-background", osd.getOSDMessageShadow() , NULL);
            }
            else
            {
                g_object_set (G_OBJECT (osdMessageFilter), "text", "" , NULL);
                g_object_set (G_OBJECT (osdMessageFilter), "silent", true , NULL);
            }

            if(osd.getShowSessionTimer())
            {
                g_object_set (G_OBJECT (sessionTimerFilter), "silent", false , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "text", osd.getSessionTimer().c_str() , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "halignment", osd.getSessionTimerHAlign() , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "valignment", osd.getSessionTimerVAlign() , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "wrap-mode", osd.getSessionTimerWordWrapMode() , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "font-desc", osd.getSessionTimerFont().c_str() , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "shaded-background", osd.getSessionTimerShadow() , NULL);
            }
            else
            {
                g_object_set (G_OBJECT (sessionTimerFilter), "text", "" , NULL);
                g_object_set (G_OBJECT (sessionTimerFilter), "silent", true , NULL);
            }

            this->Sleep(1000);
        }
    }
}

void H264Stream::RemoveStream(int handle)
{
    if(handle != -1)
    {
        g_signal_emit_by_name(multifdSink, "remove", handle, G_TYPE_NONE);
        delete streamHandles[handle];
        streamHandles.erase(handle);
    }

    if(!streamHandles.size())
        StopEncoding();
}

bool H264Stream::CheckAndBeginEncoding()
{
    if(!encoding)
    {
        GstStateChangeReturn stateRet;
        stateRet = gst_element_set_state (h264Pipeline, GST_STATE_PLAYING);

        GstState state;
        stateRet = gst_element_get_state(h264Pipeline, &state, NULL, GST_SECOND);

        encoding = true;
        this->Restart();
        return true;
    }
    else
        return true;
}

bool H264Stream::StopEncoding()
{
    gst_element_set_state (h264Pipeline, GST_STATE_READY);

    encoding = false;
    return true;
}

int H264Stream::AddStreamOutput(string ip, string port)
{
    PWaitAndSignal m(mutex);
    if(CheckAndBeginEncoding())
    {
        ClientSocket* socket = new ClientSocket(ip, atoi(port.c_str()));

        int fd = socket->getDescriptor();

        if(fd != -1)
        {
            //g_signal_emit_by_name(gst_app.multiFDSink, "add", fd, G_TYPE_NONE);
            streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
            return fd;
        }
    }
    return -1;
}

GstBuffer* H264Stream::GetAudioBuffer()
{
    PWaitAndSignal m(mutex);

    if (soundSink != NULL) {
        return gst_app_sink_pull_buffer (GST_APP_SINK (soundSink));
    }
    return NULL;
}

GstBuffer* H264Stream::GetVideoBuffer()
{
    PWaitAndSignal m(mutex);

    if (videoSink != NULL) {
        return gst_app_sink_pull_buffer (GST_APP_SINK (videoSink));
    }
    return NULL;
}

GstCaps* H264Stream::GetCurrentAudioCaps()
{
    PWaitAndSignal m(mutex);

    if (soundSink != NULL) {
        return gst_app_sink_get_caps (GST_APP_SINK (soundSink));
    }
    return NULL;
}

GstCaps* H264Stream::GetCurrentVideoCaps()
{
    PWaitAndSignal m(mutex);

    if (videoSink != NULL) {
        return gst_app_sink_get_caps (GST_APP_SINK (videoSink));
    }
    return NULL;
}

bool H264Stream::SetSessionAudioCaps(GstCaps* caps)
{
    PWaitAndSignal m(mutex);

    if (soundSink != NULL) {
        gst_app_sink_set_caps (GST_APP_SINK (soundSink), caps);
        gst_caps_unref(caps);
        return true;
    }
    return false;
}

bool H264Stream::SetSessionVideoCaps(GstCaps* caps)
{
    PWaitAndSignal m(mutex);

    if (videoSink != NULL) {
        gst_app_sink_set_caps (GST_APP_SINK (videoSink), caps);
        gst_caps_unref(caps);
        return true;
    }
    return false;
}
void H264Stream::SetVolume(gfloat value)
{
    g_object_set(G_OBJECT (volume), "volume", value, NULL);
}

Here is the class definition:

#ifndef H264STREAM_H_
#define H264STREAM_H_

#include
#include
#include
#include
#include
#include
#include
#include <gst/gst.h>
#include <glib/gi18n.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappbuffer.h>

#include "OSDSettings.h"
#include "AudioSettings.h"
#include "Communications.h"
#include "common.h"
#include "services.h"

class H264Stream : public PThread
{
public:
    H264Stream();
    virtual ~H264Stream();

    /*
     * The user is responsible for renegotiating caps if they are different from the configuration
     * file, i.e. after receiving H323 caps. The user is also responsible for unrefing this buffer.
     */
    GstBuffer* GetAudioBuffer();

    /*
     * Current caps in case renegotiation is necessary (for H323 and SIP caps negotiations).
     */
    GstCaps* GetCurrentAudioCaps();

    /*
     * Sets the caps for the audio buffer (for use by the H323 and SIP server).
     */
    bool SetSessionAudioCaps(GstCaps* caps);

    /*
     * The user is responsible for renegotiating caps if they are different from the configuration
     * file, i.e. after receiving H323 caps. The user is also responsible for unrefing this buffer.
     */
    GstBuffer* GetVideoBuffer();

    /*
     * Current caps in case renegotiation is necessary (for H323 and SIP caps negotiations).
     */
    GstCaps* GetCurrentVideoCaps();

    /*
     * Sets the caps for the video buffer (for use by the H323 and SIP server).
     */
    bool SetSessionVideoCaps(GstCaps* caps);

    /*
     * Sends output stream to host at port.
     */
    int AddStreamOutput(string host, string port);

    /*
     * Remove file descriptor from output stream.
     */
    void RemoveStream(int fd);

    void SetVolume(gfloat volume);

    bool CheckAndBeginEncoding();

protected:
    virtual void Main();

private:
    Ekiga::ServiceCore core;

    bool StopEncoding();

    std::map<int, ClientSocket*> streamHandles;

    unsigned size;
    unsigned height;
    unsigned width;
    unsigned fps;

    unsigned audioChannels;
    unsigned audioSampleRate;
    unsigned bitWidth;

    bool encoding;

    PMutex mutex;

    //pipeline
    GstElement *h264Pipeline;

    //Sound elements
    GstElement *alsaSrc, *volume, *soundTee, *soundSink, *soundAppSinkQueue, *soundQueue, *soundEncoder;

    //video elements
    GstElement *v4l2Src, *chanNameFilter, *osdMessageFilter, *sessionTimerFilter, *videoTee, *videoSink,
        *videoAppSinkQueue, *videoQueue, *videoEncoder;

    //multiplexed elements
    GstElement *multiplexer, *multifdSink;
};

#endif /* H264STREAM_H_ */
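
For readability, this is roughly the graph the constructor above builds, written as a
gst-launch-0.10 style description (just a sketch of the same pipeline, untested, with
the filesink standing in for the eventual multifdsink):

  v4l2src ! video/x-raw-yuv,format=(fourcc)I420,width=352,height=288,framerate=25/1 \
    ! textoverlay ! textoverlay ! textoverlay \
    ! tee name=vt \
      vt. ! queue ! appsink name=videoSink \
      vt. ! queue ! x264enc ! mux. \
    alsasrc ! audio/x-raw-int,channels=2,rate=8000,width=16,depth=16,endianness=1234 \
    ! volume ! tee name=at \
      at. ! queue ! appsink name=audioSink \
      at. ! queue ! mux. \
    avimux name=mux ! filesink location=/home/jonathan/test.avi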

Re: Trouble using x264enc with a tee

Marco Ballesio
Hi,

it looks like your email client ate up all of the newlines.. needless to
say, the result is pretty hard to read :)

btw, some hints below..

On Wed, Dec 1, 2010 at 6:10 PM, JonathanHenson
<[hidden email]> wrote:
> I am writing a multiplexed video/audio streaming thread for a video server I
> am working on. I was having problems, so I am testing that the pipeline
> works by writing the data to a file. However, the end goal will be to use a
> multifdsink to send the stream to a socket, so you will notice
> that the multifdsink element is actually a filesink for the time being.

ok, so it looks like you're currently using a filesink, right?

> This
> pipeline works with other encoders, but when I use x264enc for the video
> encoder, the pipeline freezes and no data is written to the file. Also, there
> is a tee in both the audio and video portions of the pipeline so that other
> threads can grab the raw buffers if they need to access the data.

the default question under this condition is.. are you putting a queue
element after each of the two source pads of the tee? The rationale behind
this is that you need the queues to split off separate threads for the two
downstream branches.
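
As an illustration only (a minimal sketch, not your exact pipeline; the element
names and caps here are just examples), the tee-plus-queue arrangement I mean
looks like this:

  gst-launch-0.10 v4l2src \
      ! video/x-raw-yuv,width=352,height=288,framerate=25/1 \
      ! tee name=t \
        t. ! queue ! xvimagesink \
        t. ! queue ! x264enc ! avimux ! filesink location=/tmp/test.avi

Each queue decouples its downstream branch into its own streaming thread, so
one branch cannot starve or block the other at the tee.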

> That way
> only one thread is ever accessing the camera. If I remove the tee in the
> video pipeline, the pipeline works.

+1 for my comment above

> I have also tested that if I put an
> xvimagesink on both branches of the tee, both windows get the stream,
> so I am pretty sure that the tee is not the problem.

actually, it may be if not properly used :)

sorry for not commenting on the remaining part, but I've just come back
from a party for the Finnish Independence Day and it's too hard to read
considering the % of alcohol in my blood. Hopefully my comments above
will give you a hint. If you need further help, please post your sources
and output again, either as separate attachments or on pastebin.

Regards


Re: Trouble using x264enc with a tee

JonathanHenson
damn it, this damn thing keeps screwing up, let me try again.

Here is the code. I have actually started using ffmux_asf until I can get x264enc to work. This is the exact same pipeline, except that I am using the ASF encoders and muxer instead of x264 and AVI. However, I now have a new problem: when I use a filesink with a location, the file plays back perfectly (though with no seeking -- I don't know what that is about), but when I use this code, the client on the other end of the socket can't play the stream. I added a regular file to the multifdsink as a test, and it isn't receiving any output either. Thanks for your reply, I hope your head gets better.

/*
 * H264Stream.cpp
 *
 *  Created on: Nov 12, 2010
 *      Author: jonathan
 */

#include "H264Stream.h"
#include "VideoInput.h"

int fileD;

H264Stream::H264Stream() : PThread (1000, NoAutoDeleteThread, HighestPriority, "H264Stream"),
        encoding(false)
{
        //temporary setting of variables
        width = 352;
        height = 288;
        fps = 25;

        audioChannels = 2;
        audioSampleRate = 8000;
        bitWidth = 16;


        GError* error = NULL;
        gchar* command = NULL;

        command = g_strdup_printf ("v4l2src ! video/x-raw-yuv, format=(fourcc)I420, width=%d, height=%d, framerate=(fraction)%d/1 !"
                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter ! textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
                        "tee name=t ! queue ! appsink name=videoSink t. ! queue ! ffenc_wmv2 name=videoEncoder me-method=5 ! amux.  alsasrc ! "
                        "audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234, rate=%d, signed=true ! volume name=volumeFilter ! "
                        "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue ! ffenc_wmav2 ! amux. ffmux_asf name=amux ! multifdsink name=multifdsink",
                         width, height, fps, bitWidth, bitWidth, audioSampleRate);

   g_print ("Pipeline: %s\n", command);
        h264Pipeline = gst_parse_launch (command, &error);

        if(error != NULL)
        std::cout << error->message << "\n";

        chanNameFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline), "chanNameFilter");
        osdMessageFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline), "osdMessageFilter");
        sessionTimerFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline), "sessionTimerOverlay");
        videoBalance = gst_bin_get_by_name (GST_BIN (h264Pipeline), "VideoBalance");
        videoEncoder = gst_bin_get_by_name (GST_BIN (h264Pipeline), "videoEncoder");
        volume = gst_bin_get_by_name (GST_BIN (h264Pipeline), "volumeFilter");
        multifdSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "multifdsink");
        soundSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "soundSink");
}

H264Stream::~H264Stream()
{
        for(std::map<int, ClientSocket*>::iterator pair = streamHandles.begin(); pair != streamHandles.end(); pair++)
        {
                g_signal_emit_by_name(multifdSink, "remove", pair->first, NULL);
                delete pair->second;
        }

        streamHandles.clear();

        gst_element_set_state (h264Pipeline, GST_STATE_NULL);
        gst_object_unref (GST_OBJECT (h264Pipeline));
}

void H264Stream::Main()
{
        while(true)
        {
                PWaitAndSignal m(mutex);
                if(encoding)
                {
                  OSDSettings osd;

                  if(osd.getShowChanName())
                  {
                          g_object_set (G_OBJECT (chanNameFilter), "silent", false , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "text", osd.getChanName().c_str() , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "halignment", osd.getChanNameHAlign() , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "valignment", osd.getChanNameVAlign() , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "wrap-mode", osd.getChanNameWordWrapMode() , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "font-desc", osd.getChanNameFont().c_str() , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "shaded-background", osd.getChanNameShadow() , NULL);
                  }
                  else
                  {
                          g_object_set (G_OBJECT (chanNameFilter), "text", "" , NULL);
                          g_object_set (G_OBJECT (chanNameFilter), "silent", true , NULL);
                  }

                  if(osd.getShowOSDMessage())
                  {
                          g_object_set (G_OBJECT (osdMessageFilter), "silent", false , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "text", osd.getOSDMessage().c_str() , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "halignment", osd.getOSDMessageHAlign() , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "valignment", osd.getOSDMessageVAlign() , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "wrap-mode", osd.getOSDMessageWordWrapMode() , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "font-desc", osd.getOSDMessageFont().c_str() , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "shaded-background", osd.getOSDMessageShadow() , NULL);
                  }
                  else
                  {
                          g_object_set (G_OBJECT (osdMessageFilter), "text", "" , NULL);
                          g_object_set (G_OBJECT (osdMessageFilter), "silent", true , NULL);
                  }

                  if(osd.getShowSessionTimer())
                  {
                          g_object_set (G_OBJECT (sessionTimerFilter), "silent", false , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "text", osd.getSessionTimer().c_str() , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "halignment", osd.getSessionTimerHAlign() , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "valignment", osd.getSessionTimerVAlign() , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "wrap-mode", osd.getSessionTimerWordWrapMode() , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "font-desc", osd.getSessionTimerFont().c_str() , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "shaded-background", osd.getSessionTimerShadow() , NULL);

                  }
                  else
                  {
                          g_object_set (G_OBJECT (sessionTimerFilter), "text", "" , NULL);
                          g_object_set (G_OBJECT (sessionTimerFilter), "silent", true , NULL);
                  }

                        this->Sleep(1000);
                }
        }
}

void H264Stream::RemoveStream(int handle)
{
        if(handle != -1)
        {
                g_signal_emit_by_name(multifdSink, "remove", handle, G_TYPE_NONE);
                delete streamHandles[handle];
                streamHandles.erase(handle);

                g_signal_emit_by_name(multifdSink, "remove", fileD, G_TYPE_NONE);
                close(fileD);
        }

        if(!streamHandles.size())
                StopEncoding();
}

bool H264Stream::CheckAndBeginEncoding()
{
        if(!encoding)
        {
                GstStateChangeReturn stateRet;
                stateRet = gst_element_set_state (h264Pipeline, GST_STATE_PLAYING);

                GstState state;

                stateRet = gst_element_get_state(h264Pipeline, &state, NULL, GST_SECOND);
                encoding = true;
                this->Restart();
                return true;
        }
        else
                return true;
}

bool H264Stream::StopEncoding()
{
        gst_element_set_state (h264Pipeline, GST_STATE_READY);

        encoding = false;
        return true;
}

int H264Stream::AddStreamOutput(string ip, string port)
{
        PWaitAndSignal m(mutex);
        if(CheckAndBeginEncoding())
        {
                fileD = open("/home/jonathan/anotherTest.wmv", O_RDWR | O_APPEND | O_CREAT, 0666);

                if(fileD != -1)
                {
                        g_signal_emit_by_name(multifdSink, "add", fileD, G_TYPE_NONE);
                        //streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
                }

                ClientSocket* socket = new ClientSocket(ip, atoi(port.c_str()));

                int fd = socket->getDescriptor();

                if(fd != -1)
                {
                        g_signal_emit_by_name(multifdSink, "add", fd, G_TYPE_NONE);
                        streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
                        return fd;
                }


        }
        return -1;
}

GstBuffer* H264Stream::GetAudioBuffer()
{
        PWaitAndSignal m(mutex);

         if (soundSink != NULL) {
                 return gst_app_sink_pull_buffer (GST_APP_SINK (soundSink));
         }
         return NULL;
}

GstBuffer* H264Stream::GetVideoBuffer()
{
        PWaitAndSignal m(mutex);

         if (videoSink != NULL) {
                 return gst_app_sink_pull_buffer (GST_APP_SINK (videoSink));
         }
         return NULL;
}

GstCaps* H264Stream::GetCurrentAudioCaps()
{
        PWaitAndSignal m(mutex);

         if (soundSink != NULL) {
                 return gst_app_sink_get_caps (GST_APP_SINK (soundSink));
         }
         return NULL;
}

GstCaps* H264Stream::GetCurrentVideoCaps()
{
        PWaitAndSignal m(mutex);

         if (videoSink != NULL) {
                 return gst_app_sink_get_caps (GST_APP_SINK (videoSink));
         }
         return NULL;
}

bool H264Stream::SetSessionAudioCaps(GstCaps* caps)
{
         PWaitAndSignal m(mutex);

         if (soundSink != NULL) {
                 gst_app_sink_set_caps (GST_APP_SINK (soundSink), caps);
                 gst_caps_unref(caps);
                 return true;
         }
         return false;
}

bool H264Stream::SetSessionVideoCaps(GstCaps* caps)
{
         PWaitAndSignal m(mutex);

         if (videoSink != NULL) {
                 gst_app_sink_set_caps (GST_APP_SINK (videoSink), caps);
                 gst_caps_unref(caps);
                 return true;
         }
         return false;
}

void H264Stream::SetVolume(gfloat value)
{
        g_object_set(G_OBJECT (volume), "volume", value, NULL);
}

bool H264Stream::SetSaturation(double color)
{
        g_object_set(G_OBJECT (videoBalance), "saturation", color, NULL);

        return true;
}

bool H264Stream::SetBrightness(double brightness)
{
        g_object_set(G_OBJECT (videoBalance), "brightness", brightness, NULL);

        return true;
}

bool H264Stream::SetHue(double hue)
{
        g_object_set(G_OBJECT (videoBalance), "hue", hue, NULL);

        return true;
}

bool H264Stream::SetContrast(double contrast)
{
        g_object_set(G_OBJECT (videoBalance), "contrast", contrast, NULL);

        return true;
}





Re: Trouble using x264enc with a tee

Marco Ballesio
Hi,

On Mon, Dec 6, 2010 at 10:41 PM, JonathanHenson
<[hidden email]> wrote:
>
> damn it, this damn thing keeps screwing up, let me try again.
>

this is better..

> Here is the code. I have actually started using ffmux_asf until I can get
> x264enc to work. This is the exact same pipeline, except that I am using the
> ASF encoders and muxer instead of x264 and AVI. However, I now have a new
> problem: when I use a filesink with a location, the file plays back perfectly
> (though with no seeking -- I don't know what that is about), but when I use
> this code, the client on the other end of the socket can't play the stream.

What if you store the file in one pass (not using a socket) and only AFTER
that try to play it? I know it's not exactly what your requirements call
for, but it may help in understanding the following..

Are there any reasons why you cannot use a streaming protocol for this?

Usually a muxer cannot write a complete set of information into the file
until it gets an EOS (that is, unless the player applies some heuristics,
it's rarely possible to properly play a file that hasn't been completely
stored). Because of this, streaming protocols (e.g. RTP) are imo more
suitable, even for inter-process streaming on the same machine.

If that's an option, I would suggest not involving network-oriented
components such as a jitter buffer: very basic udp/(de)payloader pipelines
over the loopback interface can do the job.
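
Just to give an idea (a rough, untested sketch; the port, payload type and the
H.264 payloader are arbitrary examples, and audio would simply be a second RTP
stream on another port):

  # producer process
  gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=352,height=288,framerate=25/1 \
      ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000

  # consumer process
  gst-launch-0.10 udpsrc port=5000 \
      caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
      ! rtph264depay ! ffdec_h264 ! xvimagesink

Over loopback you can get away without gstrtpbin or a jitter buffer; the
payloader/depayloader pair is enough to frame the data.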

> I added a regular file to the multifdsink as a test, and it isn't receiving
> any output either. Thanks for your reply, I hope your head gets better.
>

yep, definitely better now ;).

Btw, I think a named pipe with a simple filesink would behave much the same..

> /*
>  * H264Stream.cpp
>  *
>  *  Created on: Nov 12, 2010
>  *      Author: jonathan
>  */
>
> #include "H264Stream.h"
> #include "VideoInput.h"
>
> int fileD;
>
> H264Stream::H264Stream() : PThread (1000, NoAutoDeleteThread,
> HighestPriority, "H264Stream"),
>        encoding(false)
> {
>        //temporary setting of variables
>        width = 352;
>        height = 288;
>        fps = 25;
>
>        audioChannels = 2;
>        audioSampleRate = 8000;
>        bitWidth = 16;
>
>
>        GError* error = NULL;
>        gchar* command = NULL;
>
>        command = g_strdup_printf ("v4l2src ! video/x-raw-yuv, format=(fourcc)I420,
> width=%d, height=%d, framerate=(fraction)%d/1 !"
>                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter !
> textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
>                        "tee name=t ! queue ! appsink name=videoSink t. ! queue ! ffenc_wmv2

I see you're using appsink. Usually, the first good
question in such a case is: "do I really need such elements in my
pipeline?". The answer depends on your use case and requirements :).

Note that if your app is not producing and consuming buffers in the
proper way you may run into trouble. The behaviour will differ depending
e.g. on whether you're in pull or push mode. See the elements'
documentation for more details.
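
For instance, if nothing ever pulls from an appsink, its internal queue just
keeps growing (or, if you set max-buffers, the branch eventually blocks back at
the tee). A minimal consumer loop in the 0.10 API, matching the pull calls you
already have in H264Stream, would look roughly like this (hypothetical helper,
just to show the idea):

  /* drain one appsink branch; gst_app_sink_pull_buffer() blocks until a
   * buffer is available or the pipeline stops */
  static void drain_appsink (GstAppSink *sink)
  {
      while (!gst_app_sink_is_eos (sink)) {
          GstBuffer *buf = gst_app_sink_pull_buffer (sink);
          if (buf == NULL)
              break;                /* pipeline went to EOS/NULL */
          /* ... hand the raw frame/samples to the consumer here ... */
          gst_buffer_unref (buf);   /* we own this reference */
      }
  }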

> name=videoEncoder me-method=5 ! amux.  alsasrc ! "
>                        "audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234,
> rate=%d, signed=true ! volume name=volumeFilter ! "
>                        "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue !

maybe you don't need this second queue (not sure).

> ffenc_wmav2 ! amux. ffmux_asf name=amux ! multifdsink name=multifdsink",
>                         width, height, fps, bitWidth, bitWidth, audioSampleRate);

here, instead of a mux and multifdsink, you should really try with
payloaders and udpsinks (if you want to set up a media producer/consumer
process pair).

Regards


Re: Trouble using x264enc with a tee

JonathanHenson
"What if you store the file in one pass (not using a socket) and AFTER
it you try to play? I know it's not exactly as in your requirements,
but it may help understanding the following.. "

If I use filesink instead of multifdsink, I can play the file back fine. I am using multifdsink so that I can stream to an indefinite number of clients over a TCP/IP socket. However, I just realized that to receive the stream on the C#/.NET side I will need to use RTP (which I know very little about -- I did read the spec, though).

"Are there any reasons why you cannot use a streaming protocol for this? "

Yes, I don't know shit about RTP. I need some help on the GStreamer side. It seems that I am not supposed to multiplex the stream before sending it over RTP, but rather send two separate streams. I currently have:

command = g_strdup_printf ("v4l2src ! video/x-raw-yuv, format=(fourcc)I420, width=%d, height=%d, framerate=(fraction)%d/1 !"
                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter ! textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
                        "tee name=t ! queue ! appsink name=videoSink t. ! queue ! ffenc_wmv1 name=videoEncoder me-method=5 ! amux.  alsasrc ! "
                        "audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234, rate=%d, signed=true ! volume name=volumeFilter ! "
                        "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue ! ffenc_wmav1 ! amux. asfmux name=amux ! rtpasfpay ! multifdsink name=multifdsink",
                         width, height, fps, bitWidth, bitWidth, audioSampleRate);

I think I would need to use gstrtpbin instead of the multifdsink and get rid of the muxing?

"see you're using appsrc and appsink. Usually, the first good
question in such a case is: "do I really need such elements in my
pipeline?". The answer depends on your use case and requirements :).

Note that if your app is not producing and consuming buffers in the
proper way you may run into troubles. The behaviour will e.g. differ
depending on whether you're in pull or push mode. See the elements'
documentation for more details. "

I am using the appsinks because this app uses OPAL in other threads to answer H323 and SIP requests, and those threads need the raw data buffers. This thread is used for a client control computer and SDK which will monitor sessions and make recordings of them (i.e. a Windows Server 2008 web server using ASP.NET/C#.NET, with an SDK I have written to talk to this device). Do you have a better approach in mind?

The OPAL thread grabs this buffer when it needs it.

Thank you once again,

Jonathan


Re: Trouble using x264enc with a tee

Tim-Philipp Müller
In reply to this post by JonathanHenson
On Mon, 2010-12-06 at 12:41 -0800, JonathanHenson wrote:

Hi,

> Here is the code. I have actually started using ffmux_asf until I can get
> x264enc to work. This is the exact same pipeline, except that I am using the
> ASF encoders and muxer instead of x264 and AVI. However, I now have a new
> problem: when I use a filesink with a location, the file plays back perfectly
> (though with no seeking -- I don't know what that is about), but when I use
> this code, the client on the other end of the socket can't play the stream. I
> added a regular file to the multifdsink as a test, and it isn't
> receiving any output either.

Didn't look into it in too much detail, but most likely the problem is
that the other queues, in the non-x264enc branches, are too small. x264enc
has a very high latency by default (60-80 frames IIRC), so it will need a
lot of input buffers before it outputs its first buffer (which is what
any muxer would wait for).

Either increase or unset the queue limits in the other branches (e.g. queue
max-size-bytes=0 max-size-time=0 max-size-buffers=0), use multiqueue,
or use x264enc tune=zerolatency (this leads to much lower quality output
though). The live source in your pipeline will limit dataflow upstream
of the queues, so if you remove all size limits you shouldn't have a
problem with queues growing endlessly until you run out of memory.
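
For example, applied to your ASF pipeline the video branch would become something like this (just a sketch, with x264enc back in place of ffenc_wmv2; only the queue properties matter):

  tee name=t \
    ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! appsink name=videoSink \
    t. ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! x264enc name=videoEncoder ! amux.

(do the same for the queues in the audio branches, or use x264enc tune=zerolatency and keep the default queue limits).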

Cheers
 -Tim




> /*
>  * H264Stream.cpp
>  *
>  *  Created on: Nov 12, 2010
>  *      Author: jonathan
>  */
>
> #include "H264Stream.h"
> #include "VideoInput.h"
>
> int fileD;
>
> H264Stream::H264Stream() : PThread (1000, NoAutoDeleteThread,
> HighestPriority, "H264Stream"),
> encoding(false)
> {
> //temporary setting of variables
> width = 352;
> height = 288;
> fps = 25;
>
> audioChannels = 2;
> audioSampleRate = 8000;
> bitWidth = 16;
>
>
> GError* error = NULL;
> gchar* command = NULL;
>
> command = g_strdup_printf ("v4l2src ! video/x-raw-yuv, format=(fourcc)I420,
> width=%d, height=%d, framerate=(fraction)%d/1 !"
> " videobalance name=VideoBalance ! textoverlay name=chanNameFilter !
> textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
> "tee name=t ! queue ! appsink name=videoSink t. ! queue ! ffenc_wmv2
> name=videoEncoder me-method=5 ! amux.  alsasrc ! "
> "audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234,
> rate=%d, signed=true ! volume name=volumeFilter ! "
> "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue !
> ffenc_wmav2 ! amux. ffmux_asf name=amux ! multifdsink name=multifdsink",
> width, height, fps, bitWidth, bitWidth, audioSampleRate);
>
>    g_print ("Pipeline: %s\n", command);
> h264Pipeline = gst_parse_launch (command, &error);
>
> if(error != NULL)
> std::cout << error->message << "\n";
>
> chanNameFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline),
> "chanNameFilter");
> osdMessageFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline),
> "osdMessageFilter");
> sessionTimerFilter = gst_bin_get_by_name (GST_BIN (h264Pipeline),
> "sessionTimerOverlay");
> videoBalance = gst_bin_get_by_name (GST_BIN (h264Pipeline),
> "VideoBalance");
> videoEncoder = gst_bin_get_by_name (GST_BIN (h264Pipeline),
> "videoEncoder");
> volume = gst_bin_get_by_name (GST_BIN (h264Pipeline), "volumeFilter");
> multifdSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "multifdsink");
> soundSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "soundSink");
> }
>
> H264Stream::~H264Stream()
> {
> for(std::map<int, ClientSocket*>::iterator pair = streamHandles.begin();
> pair != streamHandles.end(); pair++)
> {
> g_signal_emit_by_name(multifdSink, "remove", pair->first, NULL);
> delete pair->second;
> }
>
> streamHandles.clear();
>
> gst_element_set_state (h264Pipeline, GST_STATE_NULL);
> gst_object_unref (GST_OBJECT (h264Pipeline));
> }
>
> void H264Stream::Main()
> {
> while(true)
> {
> PWaitAndSignal m(mutex);
> if(encoding)
> {
>  OSDSettings osd;
>
>  if(osd.getShowChanName())
>  {
>  g_object_set (G_OBJECT (chanNameFilter), "silent", false , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "text",
> osd.getChanName().c_str() , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "halignment",
> osd.getChanNameHAlign() , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "valignment",
> osd.getChanNameVAlign() , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "wrap-mode",
> osd.getChanNameWordWrapMode() , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "font-desc",
> osd.getChanNameFont().c_str() , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "shaded-background",
> osd.getChanNameShadow() , NULL);
>  }
>  else
>  {
>  g_object_set (G_OBJECT (chanNameFilter), "text", "" , NULL);
>  g_object_set (G_OBJECT (chanNameFilter), "silent", true , NULL);
>  }
>
>  if(osd.getShowOSDMessage())
>  {
>  g_object_set (G_OBJECT (osdMessageFilter), "silent", false , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "text",
> osd.getOSDMessage().c_str() , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "halignment",
> osd.getOSDMessageHAlign() , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "valignment",
> osd.getOSDMessageVAlign() , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "wrap-mode",
> osd.getOSDMessageWordWrapMode() , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "font-desc",
> osd.getOSDMessageFont().c_str() , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "shaded-background",
> osd.getOSDMessageShadow() , NULL);
>  }
>  else
>  {
>  g_object_set (G_OBJECT (osdMessageFilter), "text", "" , NULL);
>  g_object_set (G_OBJECT (osdMessageFilter), "silent", true , NULL);
>  }
>
>  if(osd.getShowSessionTimer())
>  {
>  g_object_set (G_OBJECT (sessionTimerFilter), "silent", false , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "text",
> osd.getSessionTimer().c_str() , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "halignment",
> osd.getSessionTimerHAlign() , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "valignment",
> osd.getSessionTimerVAlign() , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "wrap-mode",
> osd.getSessionTimerWordWrapMode() , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "font-desc",
> osd.getSessionTimerFont().c_str() , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "shaded-background",
> osd.getSessionTimerShadow() , NULL);
>
>  }
>  else
>  {
>  g_object_set (G_OBJECT (sessionTimerFilter), "text", "" , NULL);
>  g_object_set (G_OBJECT (sessionTimerFilter), "silent", true , NULL);
>  }
>
> this->Sleep(1000);
> }
> }
> }
>
> void H264Stream::RemoveStream(int handle)
> {
> if(handle != -1)
> {
> g_signal_emit_by_name(multifdSink, "remove", handle, G_TYPE_NONE);
> delete streamHandles[handle];
> streamHandles.erase(handle);
>
> g_signal_emit_by_name(multifdSink, "remove", fileD, G_TYPE_NONE);
> close(fileD);
> }
>
> if(!streamHandles.size())
> StopEncoding();
> }
>
> bool H264Stream::CheckAndBeginEncoding()
> {
> if(!encoding)
> {
> GstStateChangeReturn stateRet;
> stateRet = gst_element_set_state (h264Pipeline, GST_STATE_PLAYING);
>
> GstState state;
>
> stateRet = gst_element_get_state(h264Pipeline, &state, NULL, GST_SECOND);
> encoding = true;
> this->Restart();
> return true;
> }
> else
> return true;
> }
>
> bool H264Stream::StopEncoding()
> {
> gst_element_set_state (h264Pipeline, GST_STATE_READY);
>
> encoding = false;
> return true;
> }
>
> int H264Stream::AddStreamOutput(string ip, string port)
> {
> PWaitAndSignal m(mutex);
> if(CheckAndBeginEncoding())
> {
> fileD = open("/home/jonathan/anotherTest.wmv", O_RDWR | O_APPEND |
> O_CREAT, 0666);
>
> if(fileD != -1)
> {
> g_signal_emit_by_name(multifdSink, "add", fileD, G_TYPE_NONE);
> //streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
> }
>
> ClientSocket* socket = new ClientSocket(ip, atoi(port.c_str()));
>
> int fd = socket->getDescriptor();
>
> if(fd != -1)
> {
> g_signal_emit_by_name(multifdSink, "add", fd, G_TYPE_NONE);
> streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
> return fd;
> }
>
>
> }
> return -1;
> }
>
> GstBuffer* H264Stream::GetAudioBuffer()
> {
> PWaitAndSignal m(mutex);
>
> if (soundSink != NULL) {
> return gst_app_sink_pull_buffer (GST_APP_SINK (soundSink));
> }
> return NULL;
> }
>
> GstBuffer* H264Stream::GetVideoBuffer()
> {
> PWaitAndSignal m(mutex);
>
> if (videoSink != NULL) {
> return gst_app_sink_pull_buffer (GST_APP_SINK (videoSink));
> }
> return NULL;
> }
>
> GstCaps* H264Stream::GetCurrentAudioCaps()
> {
> PWaitAndSignal m(mutex);
>
> if (soundSink != NULL) {
> return gst_app_sink_get_caps (GST_APP_SINK (soundSink));
> }
> return NULL;
> }
>
> GstCaps* H264Stream::GetCurrentVideoCaps()
> {
> PWaitAndSignal m(mutex);
>
> if (videoSink != NULL) {
> return gst_app_sink_get_caps (GST_APP_SINK (videoSink));
> }
> return NULL;
> }
>
> bool H264Stream::SetSessionAudioCaps(GstCaps* caps)
> {
> PWaitAndSignal m(mutex);
>
> if (soundSink != NULL) {
> gst_app_sink_set_caps (GST_APP_SINK (soundSink), caps);
> gst_caps_unref(caps);
> return true;
> }
> return false;
> }
>
> bool H264Stream::SetSessionVideoCaps(GstCaps* caps)
> {
> PWaitAndSignal m(mutex);
>
> if (videoSink != NULL) {
> gst_app_sink_set_caps (GST_APP_SINK (videoSink), caps);
> gst_caps_unref(caps);
> return true;
> }
> return false;
> }
>
> void H264Stream::SetVolume(gfloat value)
> {
> g_object_set(G_OBJECT (volume), "volume", value, NULL);
> }
>
> bool H264Stream::SetSaturation(double color)
> {
> g_object_set(G_OBJECT (videoBalance), "saturation", color, NULL);
>
> return true;
> }
>
> bool H264Stream::SetBrightness(double brightness)
> {
> g_object_set(G_OBJECT (videoBalance), "brightness", brightness, NULL);
>
> return true;
> }
>
> bool H264Stream::SetHue(double hue)
> {
> g_object_set(G_OBJECT (videoBalance), "hue", hue, NULL);
>
> return true;
> }
>
> bool H264Stream::SetContrast(double contrast)
> {
> g_object_set(G_OBJECT (videoBalance), "contrast", contrast, NULL);
>
> return true;
> }
>
>
>
>
>




Re: Trouble using x264enc with a tee

Marco Ballesio
In reply to this post by JonathanHenson
Hi,

On Tue, Dec 7, 2010 at 9:12 PM, JonathanHenson
<[hidden email]> wrote:

>
> "What if you store the file in one pass (not using a socket) and AFTER
> it you try to play? I know it's not exactly as in your requirements,
> but it may help understanding the following.. "
>
> If i use filesink instead of multifdsink, I can play the file back fine. I
> am using the multifdsink so that I can stream to an indefinite number of
> clients over a TCP/IP socket. However, I just realized, that to receive the
> stream on the C#.NET side, I will need to use RTP (of which I know very
> little about--I did read the spec though).

OK, so this confirms my suspicions (and you understood the issue
properly, as I read below ;) ). Please note that GStreamer DOES
support multicast streams.

>
> "Are there any reasons why you cannot use a streaming protocol for this? "
>
> Yes, I don't know shit about RTP. I need some help on the gstreamer side. It
> seems that I am not supposed to multiplex the stream before sending it in
> RTP but send to separate streams.

Yep, you don't need to mux anything when dealing with multimedia
streaming. Here are some pretty good examples that maybe you've
already seen:

http://library.gnome.org/devel/gst-plugins-libs/unstable/gst-plugins-good-plugins-gstrtpbin.html

See the "example pipelines" section.

> I currently have:
>
> command = g_strdup_printf ("v4l2src ! video/x-raw-yuv, format=(fourcc)I420,
> width=%d, height=%d, framerate=(fraction)%d/1 !"
>                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter !
> textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
>                        "tee name=t ! queue ! appsink name=videoSink t. ! queue ! ffenc_wmv1
> name=videoEncoder me-method=5 ! amux.  alsasrc ! "
>                        "audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234,
> rate=%d, signed=true ! volume name=volumeFilter ! "
>                        "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue !
> ffenc_wmav1 ! amux. asfmux name=amux ! rtpasfpay ! multifdsink
> name=multifdsink",
>                         width, height, fps, bitWidth, bitWidth, audioSampleRate);
>
> I think I would need so use gstrtpbin instead of the multifdsink and get rid
> of the muxing?

Yes, you should plug in a gstrtpbin (if you're going to stream over a
network), as described in the "Encode and payload H263 video captured
from a v4l2src. Encode and payload AMR audio generated from
audiotestsrc" example in the gstrtpbin docs.

>
> "see you're using appsrc and appsink. Usually, the first good
> question in such a case is: "do I really need such elements in my
> pipeline?". The answer depends on your use case and requirements :).
>
> Note that if your app is not producing and consuming buffers in the
> proper way you may run into troubles. The behaviour will e.g. differ
> depending on whether you're in pull or push mode. See the elements'
> documentation for more details. "
>
> I am using the appsinks because this app is using OPAL in other threads to
> answer H323 and SIP requests and it needs the raw data buffer. This thread
> is used for a client control computer and SDK which will monitor sessions,
> and make recordings of the sessions (i.e. a Windows Server 2008 Web Server
> using ASP.NET/ C#.net with an SDK I have written to talk to this device.) Do
> you have a better approach in mind?

well, you may want to use telepathy and farsight/stream engine for this.

the "framework": http://telepathy.freedesktop.org/wiki/
the "SIP backend" (well, you also need libsofiasip):
http://git.collabora.co.uk/?p=telepathy-sofiasip.git
the "streaming backend" (using GStreamer) http://farsight.freedesktop.org/wiki/

After installing the proper plugins, SIP support is great; I can vouch
for that. I've read here and there about H.323 support, but I've never
tested it personally and don't know where the sources can be found
(I've heard about a mod_opal but never seen it).

The only drawback may be the portability of the stack across
platforms (as it appears you're using MS stuff), so I suggest you
check on the #farsight channel on freenode about this issue. If the
effort is too great, maybe your solution will be easier to deploy.

Btw, I've no religious arguments against appsink/appsrc, and it
appears your case may well justify their usage.

Regards

>
> The OPAL thread grabs this buffer when it needs it.
>
> Thank you once again,
>
> Jonathan
>
>


Re: Trouble using x264enc with a tee

JonathanHenson
"Ok so this confirms my suspects -and you understood the issue
properly, as I read below ;) -. Please note that GStreamer DOES
support multicast streams.

>
> "Are there any reasons why you cannot use a streaming protocol for this? "
>
> Yes, I don't know shit about RTP. I need some help on the gstreamer side. It
> seems that I am not supposed to multiplex the stream before sending it in
> RTP but send to separate streams.

Yep, you don't need to mux anything when dealing with multimedia
streaming. Here are some pretty good examples that maybe you've
already seen:

http://library.gnome.org/devel/gst-plugins-libs/unstable/gst-plugins-good-plugins-gstrtpbin.html

See the "example pipelines" section."

OK, so here is my pipeline. Once again, x264enc isn't working right, so I am testing with jpegenc until I can get it working.

command = g_strdup_printf ("gstrtpbin name=rtpbin v4l2src ! video/x-raw-yuv, format=(fourcc)I420, width=%d, height=%d, framerate=(fraction)%d/1 !"
                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter ! textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
                        "tee name=t ! queue ! appsink name=videoSink t. ! queue ! jpegenc name=videoEncoder ! rtpjpegpay ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 !"
                        " multiudpsink name=videoRtpSink rtpbin.send_rtcp_src_0 ! multiudpsink name=videoRtcpSink sync=false async=false udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 "
                        "alsasrc ! audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234, rate=%d, signed=true ! volume name=volumeFilter ! "
                        "tee name=souTee ! queue ! appsink name=soundSink souTee. ! queue ! lame ! rtpmpapay  ! rtpbin.send_rtp_sink_1 rtpbin.send_rtp_src_1 ! multiudpsink name=audioRtpSink "
            "rtpbin.send_rtcp_src_1 ! multiudpsink name=audioRtcpSink sync=false async=false udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1",
                         width, height, fps, bitWidth, bitWidth, audioSampleRate);

I am using multiudpsink because that is the only way I know of to send the output to multiple clients. I do that here:

int H264Stream::AddStreamOutput(string ip, string port)
{
        PWaitAndSignal m(mutex);
        if(CheckAndBeginEncoding())
        {
                //ClientSocket* socket = new ClientSocket(ip, atoi(port.c_str()));

                //int fd = socket->getDescriptor();

                //if(fd != -1)
                //{
                // g_signal_emit_by_name(multifdSink, "add", fd, G_TYPE_NONE);
                // streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));

                // return fd;
                //}
                int basePort = atoi(port.c_str());
                int port_ = basePort;  // port_ is incremented below; keep the base so RemoveStream can rebuild the same port sequence

                g_signal_emit_by_name(videoRtpSink, "add", ip.c_str(), port_, G_TYPE_NONE);
                g_signal_emit_by_name(videoRtcpSink, "add", ip.c_str(), ++port_, G_TYPE_NONE);
                g_signal_emit_by_name(audioRtpSink, "add", ip.c_str(), ++port_, G_TYPE_NONE);
                g_signal_emit_by_name(audioRtcpSink, "add", ip.c_str(), ++port_, G_TYPE_NONE);

                int newHandle = streamHandles.size();
                streamHandles.insert(std::pair<int, string>(newHandle, ip));
                ipsAndPorts.insert(std::pair<int, int>(newHandle, basePort));
                return newHandle;
        }
        return -1;
}

and I remove it here:

void H264Stream::RemoveStream(int handle)
{
        if(handle != -1)
        {
                string ip = streamHandles[handle];
                int port_ = ipsAndPorts[handle];

                g_signal_emit_by_name(videoRtpSink, "remove", ip.c_str(), port_, G_TYPE_NONE);
                g_signal_emit_by_name(videoRtcpSink, "remove", ip.c_str(), ++port_, G_TYPE_NONE);
                g_signal_emit_by_name(audioRtpSink, "remove", ip.c_str(), ++port_, G_TYPE_NONE);
                g_signal_emit_by_name(audioRtcpSink, "remove", ip.c_str(), ++port_, G_TYPE_NONE);
                streamHandles.erase(handle);
                ipsAndPorts.erase(handle);
        }

        if(!streamHandles.size())
                StopEncoding();
}
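
(For completeness: videoRtpSink, videoRtcpSink, audioRtpSink and audioRtcpSink are fetched from the parsed pipeline with gst_bin_get_by_name, the same way as the other named elements; a minimal sketch, assuming the names from the pipeline string above:)

videoRtpSink  = gst_bin_get_by_name (GST_BIN (h264Pipeline), "videoRtpSink");
videoRtcpSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "videoRtcpSink");
audioRtpSink  = gst_bin_get_by_name (GST_BIN (h264Pipeline), "audioRtpSink");
audioRtcpSink = gst_bin_get_by_name (GST_BIN (h264Pipeline), "audioRtcpSink");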

So far, the client I am writing in C#.NET receives both the RTP packets and the RTCP packets fine.

Is there a better way to do this?


"well, you may want to use telepathy and farsight/stream engine for this.

the "framework": http://telepathy.freedesktop.org/wiki/
the "SIP backend" (well, you also need libsofiasip):
http://git.collabora.co.uk/?p=telepathy-sofiasip.git
the "streaming backend" (using GStreamer) http://farsight.freedesktop.org/wiki/

After installing the proper plugins, SIP support is great, I can grant
it. I've read here and there about H323 support, but I've never tested
it personally and don't know where sources can be found (I've heard
about a mod_opal but never seen it).

The only drawback may be about the portability of the stack across
platforms (as it appears you're using MS stuff), so I suggest you to
check on the #farsight channel on freenode abot this issue. If the
effort is too hard, maybe you solution will be easier to deploy.
"

I don't know Telepathy, but could you explain why it is better than OPAL? I am already using PTLib, so OPAL seems like a good choice.

Also, this piece of code will run on a Mini-ITX board with an Atom processor running SLAX. I am developing on Ubuntu. The Windows side is a managed .NET API for communicating with and managing this video server.

Thanks again,

Jonathan

P.S. Tim-Philipp Müller-2: I am about to try your suggestion, but I am a little foggy on how I should redo the queues. Could you write a sample pipeline, or change the queues on mine, so that I can see what you mean?

Re: Trouble using x264enc with a tee

JonathanHenson
OK, so I have the above example working with the following pipeline:

command = g_strdup_printf ("gstrtpbin name=rtpbin v4l2src ! video/x-raw-yuv, format=(fourcc)I420, width=%d, height=%d, framerate=(fraction)%d/1 !"
                        " videobalance name=VideoBalance ! textoverlay name=chanNameFilter ! textoverlay name=osdMessageFilter ! textoverlay name=sessionTimerOverlay ! "
                        "tee name=t ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! appsink name=videoSink t. ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! "
                        "x264enc name=videoEncoder ! rtph264pay ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 !"
                        " multiudpsink name=videoRtpSink rtpbin.send_rtcp_src_0 ! multiudpsink name=videoRtcpSink sync=false async=false udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 "
                        "alsasrc ! audio/x-raw-int, depth=%d, width=%d, channels=2, endianness=1234, rate=%d, signed=true ! volume name=volumeFilter ! "
                        "tee name=souTee ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! appsink name=soundSink souTee. ! queue max-size-bytes=0 max-size-time=0 max-size-buffers=0 ! lame ! rtpmpapay  "
                        "! rtpbin.send_rtp_sink_1 rtpbin.send_rtp_src_1 ! multiudpsink name=audioRtpSink "
            "rtpbin.send_rtcp_src_1 ! multiudpsink name=audioRtcpSink sync=false async=false udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1",
                         width, height, fps, bitWidth, bitWidth, audioSampleRate);

I receive the RTP and RTCP packets on the client side. I load them into a jitter buffer and then decode them. Herein lies the problem: I cannot decode the packets. At first I thought it was because JPEG wasn't supported, so I finally got x264enc to work on the server side, and then I tried ffenc_h263p; still the same. Also, VLC can't do anything with the stream. So I tried the exact example at http://library.gnome.org/devel/gst-plugins-libs/unstable/gst-plugins-good-plugins-gstrtpbin.html with the server in one terminal and the client in the other. Nothing. All I get is:

jonathan@Linus:~$ gst-launch -v gstrtpbin name=rtpbin                                              udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998"             port=5000 ! rtpbin.recv_rtp_sink_0                                        rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink                         udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0                                    rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false            udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)AMR,encoding-params=(string)1,octet-align=(string)1"             port=5002 ! rtpbin.recv_rtp_sink_1                                        rtpbin. ! rtpamrdepay ! amrnbdec ! alsasink                                udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1                                    rtpbin.send_rtcp_src_1 ! udpsink port=5007 sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_0: caps = application/x-rtcp
/GstPipeline:pipeline0/GstRtpBin:rtpbin/GstRtpSession:rtpsession0.GstPad:send_rtcp_src: caps = application/x-rtcp
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtcp
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_0.GstProxyPad:proxypad2: caps = application/x-rtcp
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_1: caps = application/x-rtcp
/GstPipeline:pipeline0/GstRtpBin:rtpbin/GstRtpSession:rtpsession1.GstPad:send_rtcp_src: caps = application/x-rtcp
/GstPipeline:pipeline0/GstUDPSink:udpsink1.GstPad:sink: caps = application/x-rtcp
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_1.GstProxyPad:proxypad5: caps = application/x-rtcp
^CCaught interrupt -- handling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 4990457302 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
/GstPipeline:pipeline0/GstUDPSink:udpsink1.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_1: caps = NULL
/GstPipeline:pipeline0/GstRtpBin:rtpbin.GstGhostPad:send_rtcp_src_0: caps = NULL
/GstPipeline:pipeline0/GstRtpBin:rtpbin/GstRtpSession:rtpsession1.GstPad:send_rtcp_src: caps = NULL
/GstPipeline:pipeline0/GstRtpBin:rtpbin/GstRtpSession:rtpsession0.GstPad:send_rtcp_src: caps = NULL
/GstPipeline:pipeline0/GstUDPSrc:udpsrc2.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstUDPSrc:udpsrc0.GstPad:src: caps = NULL
Setting pipeline to NULL ...
Freeing pipeline ...

No video, no audio. Is there something wrong with my GStreamer setup? I suspect this is why I can't decode the video on the client: I get the data, I just can't decode it.