Hi,

You can add buffer probes on pads:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstPad.html#gst-pad-add-buffer-probe

The probe will be called each time a buffer goes through the pad, and it chooses whether to let the buffer through (return TRUE) or not (return FALSE). I do not know whether the sink will handle gaps in the buffer stream properly, though.

What about using a videorate element to get fewer frames from the source, and so make sure the Nokia's processor can handle the stream?

Aurelien

----- Original message -----
From: Bruno <[hidden email]>
To: [hidden email]
Sent: Monday, 18 August 2008, 17:22:09
Subject: [gst-devel] Video processing

Hello everyone!

I'm trying to develop an image-processing application for the Nokia N810 using GStreamer. I built the structure of this application from an example that displays the camera image on the screen. I tweaked the code a bit for my needs and it works: I can display the picture from the camera, start/stop the media pipeline, and so on. But I can't find where to put the image-processing code (which works on its own on the Nokia).

What I want is this: when the program receives a frame from the camera, it should do some calculations on the current frame buffer, and during those calculations it should ignore the other frames coming from the camera (I don't think the processor will be able to handle real-time processing). When the calculation is done, it should start again with the next incoming frame. I managed to do it, but I had to press a button each time I wanted to process a frame; I'd like the program to do it continuously.

Here is my code:

    #define VIDEO_SRC  "v4l2src"
    #define VIDEO_SINK "xvimagesink"

    typedef struct {
        HildonProgram *program;
        HildonWindow *window;
        GstElement *pipeline;
        GtkWidget *screen;
        guint buffer_cb_id;
    } AppData;

    /* Callback that gets called when the user clicks the "START/STOP" button */
    static void button1_pressed(GtkWidget *widget, AppData *appdata)
    {
        if (GTK_TOGGLE_BUTTON(widget)->active) {
            /* Display a note to the user */
            hildon_banner_show_information(GTK_WIDGET(appdata->window),
                    NULL, "Running ...");
            gst_element_set_state(appdata->pipeline, GST_STATE_PLAYING);
        } else {
            /* Display a note to the user */
            hildon_banner_show_information(GTK_WIDGET(appdata->window),
                    NULL, "Stopped ...");
            gst_element_set_state(appdata->pipeline, GST_STATE_PAUSED);
        }
    }

    /* Callback that gets called when the user clicks the
     * "Expression ON/OFF" button */
    static void button2_pressed(GtkWidget *widget, AppData *appdata)
    {
        if (GTK_TOGGLE_BUTTON(widget)->active) {
            hildon_banner_show_information(GTK_WIDGET(appdata->window),
                    NULL, "Expressions ON");
        } else {
            hildon_banner_show_information(GTK_WIDGET(appdata->window),
                    NULL, "Expressions OFF");
        }
    }

    /* Callback that gets called whenever the pipeline's message bus has
     * a message */
    static void bus_callback(GstBus *bus, GstMessage *message, AppData *appdata)
    {
        gchar *message_str;
        const gchar *message_name;
        GError *error;

        /* Report errors to the console */
        if (GST_MESSAGE_TYPE(message) == GST_MESSAGE_ERROR) {
            gst_message_parse_error(message, &error, &message_str);
            g_error("GST error: %s\n", message_str);
            g_free(error);
            g_free(message_str);
        }

        /* Report warnings to the console */
        if (GST_MESSAGE_TYPE(message) == GST_MESSAGE_WARNING) {
            gst_message_parse_warning(message, &error, &message_str);
            g_warning("GST warning: %s\n", message_str);
            g_free(error);
            g_free(message_str);
        }

        /* See if the message type is GST_MESSAGE_APPLICATION, which means
         * that the message was sent by the client code (this program) and
         * not by gstreamer. */
        if (GST_MESSAGE_TYPE(message) == GST_MESSAGE_APPLICATION) {
            /* Get the name of the message's structure */
            message_name = gst_structure_get_name(gst_message_get_structure(message));

            /* The hildon banner must be shown here, because the bus callback
             * is called in the main thread, and calling GUI functions in
             * gstreamer threads usually leads to problems with the X server */
            if (!strcmp(message_name, "anger"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Anger");
            if (!strcmp(message_name, "disgust"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Disgust");
            if (!strcmp(message_name, "fear"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Fear");
            if (!strcmp(message_name, "happy"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Happy");
            if (!strcmp(message_name, "neutral"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Neutral");
            if (!strcmp(message_name, "sad"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Sad");
            if (!strcmp(message_name, "surprise"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Surprise");
            if (!strcmp(message_name, "unknown"))
                hildon_banner_show_information(
                        GTK_WIDGET(appdata->window), NULL, "Unknown !");
        }
    }

    /* Callback to be called when the screen widget is exposed */
    static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event,
            gpointer data)
    {
        /* Tell the xvimagesink/ximagesink the X window ID of the screen
         * widget in which the video is shown. After this the video
         * is shown in the correct widget */
        gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data),
                GDK_WINDOW_XWINDOW(widget->window));
        return FALSE;
    }

    /* Initialize the GStreamer pipeline. Below is a diagram
     * of the pipeline that will be created:
     *
     * |Camera|  |CSP   |  |Screen|  |Screen|  |Image     |
     * |src   |->|Filter|->|queue |->|sink  |->|processing|-> Display
     */
    static gboolean initialize_pipeline(AppData *appdata,
            int *argc, char ***argv)
    {
        GstElement *pipeline, *camera_src, *screen_sink;
        GstElement *screen_queue;
        GstElement *csp_filter;
        GstCaps *caps;
        GstBus *bus;

        /* Initialize GStreamer */
        gst_init(argc, argv);

        /* Create the pipeline and attach a callback to its message bus */
        pipeline = gst_pipeline_new("test-camera");

        bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
        gst_bus_add_watch(bus, (GstBusFunc) bus_callback, appdata);
        gst_object_unref(GST_OBJECT(bus));

        /* Save the pipeline to the AppData structure */
        appdata->pipeline = pipeline;

        /* Create elements */
        /* The camera video stream comes from a Video4Linux driver */
        camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
        /* A colorspace filter is needed to make sure that sinks understand
         * the stream coming from the camera */
        csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
        /* The queue creates a new thread for the stream */
        screen_queue = gst_element_factory_make("queue", "screen_queue");
        /* Sink that shows the image on screen. Xephyr doesn't support the
         * XVideo extension, so it needs ximagesink, but the device uses
         * xvimagesink */
        screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");

        /* Check that the elements were correctly initialized */
        if (!(pipeline && camera_src && screen_sink && csp_filter && screen_queue)) {
            g_critical("Couldn't create pipeline elements");
            return FALSE;
        }

        /* Add the elements to the pipeline. This has to be done prior to
         * linking them */
        gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
                screen_queue, screen_sink, NULL);

        /* Specify what kind of video is wanted from the camera */
        caps = gst_caps_new_simple("video/x-raw-rgb",
                "width", G_TYPE_INT, 640,
                "height", G_TYPE_INT, 480,
                "framerate", GST_TYPE_FRACTION, 25, 1,
                NULL);

        /* Link the camera source and the colorspace filter using the
         * capabilities specified */
        if (!gst_element_link_filtered(camera_src, csp_filter, caps))
            return FALSE;
        gst_caps_unref(caps);

        /* Connect Colorspace Filter -> Screen Queue -> Screen Sink.
         * This finalizes the initialization of the screen part of the
         * pipeline */
        if (!gst_element_link_many(csp_filter, screen_queue, screen_sink, NULL))
            return FALSE;

        /* As soon as the screen is exposed, the window ID will be advised
         * to the sink */
        g_signal_connect(appdata->screen, "expose-event",
                G_CALLBACK(expose_cb), screen_sink);

        gst_element_set_state(pipeline, GST_STATE_PAUSED);

        return TRUE;
    }

    /* Destroy the pipeline on exit */
    static void destroy_pipeline(GtkWidget *widget, AppData *appdata)
    {
        /* Free the pipeline. This automatically also unrefs all elements
         * added to the pipeline */
        gst_element_set_state(appdata->pipeline, GST_STATE_NULL);
        gst_object_unref(GST_OBJECT(appdata->pipeline));
    }

    int main(int argc, char **argv)
    {
        /* Variables for face detection: main structure for vjdetect */
        pdata = (mainstruct *) calloc(1, sizeof(mainstruct));
        /* Allocate memory for the array of face detections returned by the
         * face detector (VjDetect). */
        pdata->pFaceDetections = (FLY_Rect *)
                calloc(MAX_NUMBER_OF_FACE_DETECTIONS, sizeof(FLY_Rect));
        init(pdata);

        AppData appdata;
        GtkWidget *hbox, *vbox_button, *vbox, *button1, *button2;

        /* Initialize and create the GUI */
        example_gui_initialize(&appdata.program, &appdata.window,
                &argc, &argv, "Expression Detector");

        vbox = gtk_vbox_new(FALSE, 0);
        hbox = gtk_hbox_new(FALSE, 0);
        vbox_button = gtk_vbox_new(FALSE, 0);

        gtk_box_pack_start(GTK_BOX(hbox), vbox, FALSE, FALSE, 0);
        gtk_box_pack_start(GTK_BOX(hbox), vbox_button, FALSE, FALSE, 0);

        appdata.screen = gtk_drawing_area_new();
        gtk_widget_set_size_request(appdata.screen, 500, 380);
        gtk_box_pack_start(GTK_BOX(vbox), appdata.screen, FALSE, FALSE, 0);

        button1 = gtk_toggle_button_new_with_label("Run/Stop");
        gtk_widget_set_size_request(button1, 170, 75);
        gtk_box_pack_start(GTK_BOX(vbox_button), button1, FALSE, FALSE, 0);

        button2 = gtk_toggle_button_new_with_label("Expressions ON/OFF");
        gtk_widget_set_size_request(button2, 170, 75);
        gtk_box_pack_start(GTK_BOX(vbox_button), button2, FALSE, FALSE, 0);

        g_signal_connect(G_OBJECT(button1), "clicked",
                G_CALLBACK(button1_pressed), &appdata);
        g_signal_connect(G_OBJECT(button2), "clicked",
                G_CALLBACK(button2_pressed), &appdata);

        gtk_container_add(GTK_CONTAINER(appdata.window), hbox);

        /* Initialize the GStreamer pipeline */
        if (!initialize_pipeline(&appdata, &argc, &argv)) {
            hildon_banner_show_information(GTK_WIDGET(appdata.window),
                    "gtk-dialog-error", "Failed to initialize pipeline");
        }

        g_signal_connect(G_OBJECT(appdata.window), "destroy",
                G_CALLBACK(destroy_pipeline), &appdata);

        /* Begin the main application */
        example_gui_run(appdata.program, appdata.window);

        /* Free the gstreamer resources. Elements added
         * to the pipeline will be freed automatically */
        return 0;
    }

I removed the image-processing functions for clarity. I tried to put a printf in the expose_cb function, in main, and in the pipeline setup, hoping it would print each time a new frame is displayed on the screen. That didn't work; it appears only once. I have read tutorials about GStreamer but couldn't find how to do something continuously. Maybe I should write an image_processing callback and put a g_signal_connect call in my main that starts this callback each time a new frame is displayed? But what would be the correct signal to use?

Any idea welcome. Thanks in advance!

Bruno
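[For reference, a minimal sketch of Aurelien's videorate suggestion, spliced into Bruno's initialize_pipeline() between the colorspace filter and the queue. The 5/1 frame rate is an arbitrary example, and the snippet assumes the surrounding variables from the code above; untested:]

    GstElement *rate;
    GstCaps *rate_caps;

    /* videorate drops or duplicates frames to match the caps downstream */
    rate = gst_element_factory_make("videorate", "rate");
    rate_caps = gst_caps_new_simple("video/x-raw-rgb",
            "framerate", GST_TYPE_FRACTION, 5, 1,   /* e.g. 5 fps */
            NULL);

    gst_bin_add(GST_BIN(pipeline), rate);
    /* replaces the direct csp_filter -> screen_queue link */
    if (!gst_element_link(csp_filter, rate) ||
        !gst_element_link_filtered(rate, screen_queue, rate_caps))
        return FALSE;
    gst_caps_unref(rate_caps);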
Hi Aurélien,
Thank you for your answer. I got it to work, so in case some people were wondering about the same problem:

    /* Initialize the GStreamer pipeline. Below is a diagram
     * of the pipeline that will be created:
     *
     * |Camera|  |CSP   |  |Screen|  |Screen|  |Image     |
     * |src   |->|Filter|->|queue |->|sink  |->|processing|-> Display
     */
    static gboolean initialize_pipeline(AppData *appdata,
            int *argc, char ***argv)
    {
        GstElement *pipeline, *camera_src, *screen_sink;
        GstElement *screen_queue;
        GstElement *csp_filter;
        GstCaps *caps;
        GstBus *bus;
        GstPad *sinkpad;

        /* Initialize GStreamer */
        gst_init(argc, argv);

        /* Create the pipeline and attach a callback to its message bus */
        pipeline = gst_pipeline_new("test-camera");

        bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
        gst_bus_add_watch(bus, (GstBusFunc) bus_callback, appdata);
        gst_object_unref(GST_OBJECT(bus));

        /* Save the pipeline to the AppData structure */
        appdata->pipeline = pipeline;

        /* Create elements */
        /* The camera video stream comes from a Video4Linux driver */
        camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
        /* A colorspace filter is needed to make sure that sinks understand
         * the stream coming from the camera */
        csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
        /* The queue creates a new thread for the stream */
        screen_queue = gst_element_factory_make("queue", "screen_queue");
        /* Sink that shows the image on screen. Xephyr doesn't support the
         * XVideo extension, so it needs ximagesink, but the device uses
         * xvimagesink */
        screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");

        sinkpad = gst_element_get_static_pad(screen_sink, "sink");
        gst_pad_add_buffer_probe(sinkpad, G_CALLBACK(process_frame), appdata);

        /* Check that the elements were correctly initialized */
        if (!(pipeline && camera_src && screen_sink && csp_filter && screen_queue)) {
            g_critical("Couldn't create pipeline elements");
            return FALSE;
        }

        /* Add the elements to the pipeline. This has to be done prior to
         * linking them */
        gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
                screen_queue, screen_sink, NULL);

        /* Specify what kind of video is wanted from the camera */
        caps = gst_caps_new_simple("video/x-raw-yuv",
                "width", G_TYPE_INT, IMAGE_WIDTH,
                "height", G_TYPE_INT, IMAGE_HEIGHT,
                "framerate", GST_TYPE_FRACTION, FRAMERATE, 1,
                NULL);

        /* Link the camera source and the colorspace filter using the
         * capabilities specified */
        if (!gst_element_link_filtered(camera_src, csp_filter, caps))
            return FALSE;
        gst_caps_unref(caps);

        /* Connect Colorspace Filter -> Screen Queue -> Screen Sink.
         * This finalizes the initialization of the screen part of the
         * pipeline */
        if (!gst_element_link_many(csp_filter, screen_queue, screen_sink, NULL))
            return FALSE;

        /* As soon as the screen is exposed, the window ID will be advised
         * to the sink */
        g_signal_connect(appdata->screen, "expose-event",
                G_CALLBACK(expose_cb), screen_sink);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        return TRUE;
    }

So now the process_frame function is called each time data flows through the sink pad of screen_sink.

Thanks again!

Bruno
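[Combining this probe with Aurelien's earlier point that returning FALSE from a buffer probe drops the buffer, a sketch of a probe callback that analyses only every Nth frame might look like this. analyse_frame and the 1-in-5 ratio are hypothetical; the two-argument-plus-user-data signature follows the GStreamer 0.10 buffer-probe documentation:]

    static gboolean probe_cb(GstPad *pad, GstBuffer *buffer,
            gpointer user_data)
    {
        AppData *appdata = (AppData *) user_data;
        static guint count = 0;

        /* Analyse one frame out of every five; returning FALSE drops the
         * buffer, so the sink never sees the skipped frames. */
        if (count++ % 5 != 0)
            return FALSE;

        analyse_frame(GST_BUFFER_DATA(buffer), appdata);  /* hypothetical */
        return TRUE;    /* let the analysed frame reach the sink */
    }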
Hello all,
I still have some questions about gstreamer.

Actually I'd like to change the way my program works, so that it displays a frame from the camera only once the image processing has been done, with a rectangle over the face of the person.

So I changed my pipeline (removed the screen_sink element), and I'd like to send a buffer from my image-processing function to the GTK drawing area where the camera image was displayed before. I tried to do it with a GTK drawing area (and with a GTK image too, with no success), but I can't find a way to change the image contained in the drawing area.

Here is my code:

///// IMAGE PROCESSING CALLBACK

    /* Callback to be called when data goes through the pad */
    static gboolean process_frame(GstElement *video_sink,
            GstBuffer *buffer, GstPad *pad, AppData *appdata)
    {
        int x, y;
        /* get the pointer to the camera buffer */
        unsigned char *data_photo = (unsigned char *) GST_BUFFER_DATA(buffer);

        /* REMOVED: PART WHERE THE COORDINATES OF THE POSITION OF THE FACE
         * ARE CALCULATED */

        /* THIS PART IS WHAT I TRIED, BUT I HAVE A SEGMENTATION FAULT WHEN
         * CREATING THE PIXBUF */
        GdkPixbuf *newscreen;
        //newscreen = gdk_pixbuf_new_from_data(data_photo,
        //        GDK_COLORSPACE_RGB,        /* RGB colorspace */
        //        FALSE,                     /* No alpha channel */
        //        8,                         /* Bits per RGB component */
        //        IMAGE_WIDTH, IMAGE_HEIGHT, /* Dimensions */
        //        3 * IMAGE_WIDTH,           /* Bytes between lines (ie stride) */
        //        NULL, NULL);               /* Callbacks */

        gdk_draw_pixmap(GDK_DRAWABLE(appdata->screen),
                appdata->screen->style->black_gc, GDK_DRAWABLE(newscreen),
                0, 0, 0, 0, -1, -1);

        return TRUE;
    }

/////// PIPELINE

    /* Initialize the GStreamer pipeline. Below is a diagram
     * of the pipeline that will be created:
     *
     * |Camera|  |CSP   |  |Screen|  |Screen|  |Image     |
     * |src   |->|Filter|->|queue |->|sink  |->|processing|-> Display
     */
    static gboolean initialize_pipeline(AppData *appdata,
            int *argc, char ***argv)
    {
        GstElement *pipeline, *camera_src, *screen_sink;
        GstElement *screen_queue;
        GstElement *csp_filter;
        GstCaps *caps;
        GstBus *bus;
        GstPad *sinkpad;

        /* Initialize GStreamer */
        gst_init(argc, argv);

        /* Create the pipeline and attach a callback to its message bus */
        pipeline = gst_pipeline_new("test-camera");

        bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
        gst_bus_add_watch(bus, (GstBusFunc) bus_callback, appdata);
        gst_object_unref(GST_OBJECT(bus));

        /* Save the pipeline to the AppData structure */
        appdata->pipeline = pipeline;

        /* Create elements */
        /* The camera video stream comes from a Video4Linux driver */
        camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
        /* A colorspace filter is needed to make sure that sinks understand
         * the stream coming from the camera */
        csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
        /* The queue creates a new thread for the stream */
        screen_queue = gst_element_factory_make("queue", "screen_queue");
        /* Sink that shows the image on screen. Xephyr doesn't support the
         * XVideo extension, so it needs ximagesink, but the device uses
         * xvimagesink */
        //screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");

        sinkpad = gst_element_get_static_pad(screen_queue, "sink");
        gst_pad_add_buffer_probe(sinkpad, G_CALLBACK(process_frame), appdata);

        /* Check that the elements were correctly initialized */
        if (!(pipeline && camera_src /*&& screen_sink*/ && csp_filter && screen_queue)) {
            g_critical("Couldn't create pipeline elements");
            return FALSE;
        }

        /* Add the elements to the pipeline. This has to be done prior to
         * linking them */
        gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
                screen_queue, /*screen_sink,*/ NULL);

        /* Specify what kind of video is wanted from the camera */
        caps = gst_caps_new_simple("video/x-raw-rgb",
                "width", G_TYPE_INT, IMAGE_WIDTH,
                "height", G_TYPE_INT, IMAGE_HEIGHT,
                "framerate", GST_TYPE_FRACTION, FRAMERATE, 1,
                NULL);

        /* Link the camera source and the colorspace filter using the
         * capabilities specified */
        if (!gst_element_link_filtered(camera_src, csp_filter, caps))
            return FALSE;
        gst_caps_unref(caps);

        /* Connect Colorspace Filter -> Screen Queue -> Screen Sink.
         * This finalizes the initialization of the screen part of the
         * pipeline */
        if (!gst_element_link_many(csp_filter, screen_queue, /*screen_sink,*/ NULL))
            return FALSE;

        gst_element_set_state(pipeline, GST_STATE_PAUSED);

        return TRUE;
    }

/////// MAIN FUNCTION

    int main(int argc, char **argv)
    {
        /* Variables for face detection: main structure for vjdetect */
        pdata = (mainstruct *) calloc(1, sizeof(mainstruct));
        /* Allocate memory for the array of face detections returned by the
         * face detector (VjDetect). */
        pdata->pFaceDetections = (FLY_Rect *)
                calloc(MAX_NUMBER_OF_FACE_DETECTIONS, sizeof(FLY_Rect));
        init(pdata);

        AppData appdata;
        appdata.expression = 0;
        GtkWidget *hbox, *vbox_button, *vbox, *button1, *button2;

        /* Initialize and create the GUI */
        example_gui_initialize(&appdata.program, &appdata.window,
                &argc, &argv, "Expression Detector");

        vbox = gtk_vbox_new(FALSE, 0);
        hbox = gtk_hbox_new(FALSE, 0);
        vbox_button = gtk_vbox_new(FALSE, 0);

        gtk_box_pack_start(GTK_BOX(hbox), vbox, FALSE, FALSE, 0);
        gtk_box_pack_start(GTK_BOX(hbox), vbox_button, FALSE, FALSE, 0);

        appdata.screen = gtk_drawing_area_new();
        gtk_widget_set_size_request(appdata.screen, 500, 380);
        gtk_box_pack_start(GTK_BOX(vbox), appdata.screen, FALSE, FALSE, 0);

        button1 = gtk_toggle_button_new_with_label("Run/Stop");
        gtk_widget_set_size_request(button1, 170, 75);
        gtk_box_pack_start(GTK_BOX(vbox_button), button1, FALSE, FALSE, 0);

        button2 = gtk_toggle_button_new_with_label("Expressions ON/OFF");
        gtk_widget_set_size_request(button2, 170, 75);
        gtk_box_pack_start(GTK_BOX(vbox_button), button2, FALSE, FALSE, 0);

        appdata.anger = gtk_image_new_from_file("./smileys/anger.jpg");
        gtk_widget_set_size_request(appdata.anger, 160, 180);
        appdata.disgust = gtk_image_new_from_file("./smileys/disgust.jpg");
        gtk_widget_set_size_request(appdata.disgust, 160, 180);
        appdata.fear = gtk_image_new_from_file("./smileys/fear.jpg");
        gtk_widget_set_size_request(appdata.fear, 160, 180);
        appdata.happy = gtk_image_new_from_file("./smileys/happy.jpg");
        gtk_widget_set_size_request(appdata.happy, 160, 180);
        appdata.neutral = gtk_image_new_from_file("./smileys/neutral.jpg");
        gtk_widget_set_size_request(appdata.neutral, 160, 180);
        appdata.sad = gtk_image_new_from_file("./smileys/sad.jpg");
        gtk_widget_set_size_request(appdata.sad, 160, 180);
        appdata.surprise = gtk_image_new_from_file("./smileys/surprise.jpg");
        gtk_widget_set_size_request(appdata.surprise, 160, 180);
        appdata.unknown = gtk_image_new_from_file("./smileys/unknown.jpg");
        gtk_widget_set_size_request(appdata.unknown, 160, 180);

        appdata.smiley = gtk_image_new_from_file("./smileys/unknown.jpg");
        gtk_widget_set_size_request(appdata.smiley, 160, 180);
        gtk_box_pack_start(GTK_BOX(vbox_button), appdata.smiley, FALSE, FALSE, 0);

        g_signal_connect(G_OBJECT(button1), "clicked",
                G_CALLBACK(button1_pressed), &appdata);
        g_signal_connect(G_OBJECT(button2), "clicked",
                G_CALLBACK(button2_pressed), &appdata);

        gtk_container_add(GTK_CONTAINER(appdata.window), hbox);

        /* Initialize the GStreamer pipeline */
        if (!initialize_pipeline(&appdata, &argc, &argv)) {
            hildon_banner_show_information(GTK_WIDGET(appdata.window),
                    "gtk-dialog-error", "Failed to initialize pipeline");
        }

        g_signal_connect(G_OBJECT(appdata.window), "destroy",
                G_CALLBACK(destroy_pipeline), &appdata);

        /* Begin the main application */
        example_gui_run(appdata.program, appdata.window);

        /* Free the gstreamer resources. Elements added
         * to the pipeline will be freed automatically */
        return 0;
    }

What I'd like to do is modify the data_photo buffer to draw a rectangle in it (in the process_frame function), and draw the content into the appdata.screen GtkWidget (by the way, screen is declared as a GtkWidget * in the appdata structure).

Thanks in advance for your help!

Bruno
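[One plausible fix for the segfault above: the pixbuf creation is commented out, so newscreen is used uninitialized, and a GdkPixbuf is not a GdkDrawable, so gdk_draw_pixmap() cannot draw it anyway. A sketch using gdk_draw_pixbuf() instead, assuming the probe really receives packed video/x-raw-rgb (24 bpp) data; note that touching GDK from a streaming thread is unsafe, so real code should dispatch this via g_idle_add() to the main loop:]

    GdkPixbuf *newscreen;

    newscreen = gdk_pixbuf_new_from_data(data_photo,
            GDK_COLORSPACE_RGB, FALSE, 8,
            IMAGE_WIDTH, IMAGE_HEIGHT,
            3 * IMAGE_WIDTH,            /* rowstride for packed RGB24 */
            NULL, NULL);                /* no destroy notify */

    gdk_draw_pixbuf(appdata->screen->window,  /* the widget's GdkWindow */
            appdata->screen->style->black_gc,
            newscreen,
            0, 0, 0, 0, -1, -1,               /* full pixbuf */
            GDK_RGB_DITHER_NONE, 0, 0);
    g_object_unref(newscreen);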
Hi, gstreamer-devel:
If you only want `ximagesink' or `xvimagesink' to draw the images in your GtkDrawingArea, there is a very simple way to achieve this: just connect to the `expose-event' signal of the GtkDrawingArea and pass the window ID to the sink element:

    /* Drawing on our drawing area */
    g_signal_connect(G_OBJECT(area), "expose-event",
            G_CALLBACK(expose_cb), NULL);

    /* Callback to be called when the drawing area is exposed */
    static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event,
            gpointer data)
    {
        /* `play->videosink' is your video sink element */
        gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(play->videosink),
                GDK_WINDOW_XWINDOW(widget->window));
        return FALSE;
    }

That's it. If you want to draw the image yourself while not using `xvimagesink' or `ximagesink', then I think this is a GTK+ problem, not a gstreamer issue.

Eric Zhang
Yep, that is what I was doing before. Do you think I can draw rectangles over the camera image when using xvimagesink?
Hi, gstreamer-devel:
Oh, I think you can use a buffer/data probe to achieve this. Modify the buffer, add a rectangle on every frame. Or, modify the xvimagesink. :)

Eric Zhang
Could you be a little more precise, please?
How do I modify the buffer? How do I draw it on the screen afterwards? Should I still declare my screen widget from the appdata struct as a drawing area?

Thanks for the help.
Hi, gstreamer-devel:
Hello, Bruno. Actually I don't have much experience on this topic; I was just going to give you some clues. OK, AFAIK, you can add a buffer probe before the xvimagesink, look at the GstBuffer that is about to flow into it, and add your rectangle to that GstBuffer; that's it. You don't need to draw the buffer yourself, because xvimagesink will do it for you (just give it your drawing area's window ID, which you have achieved before).

You can refer to Chapter 18 of the GStreamer Application Development Manual; there is an example which inverts the image by adding a buffer probe and modifying the buffer on the fly. Maybe this can help you a little.

Eric Zhang
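[A minimal sketch of what Eric describes, assuming packed RGB24 buffers and a hypothetical face rectangle (fx, fy, fw, fh) from the detector; bounds checking omitted:]

    /* Trace a red outline directly into the raw frame data. */
    static void draw_rect_rgb24(guint8 *data, gint stride,
            gint fx, gint fy, gint fw, gint fh)
    {
        gint x, y;

        for (x = fx; x <= fx + fw; x++) {   /* top and bottom edges */
            data[fy * stride + 3 * x] = 0xff;          /* red channel */
            data[(fy + fh) * stride + 3 * x] = 0xff;
        }
        for (y = fy; y <= fy + fh; y++) {   /* left and right edges */
            data[y * stride + 3 * fx] = 0xff;
            data[y * stride + 3 * (fx + fw)] = 0xff;
        }
    }

    /* inside the buffer probe, before returning TRUE: */
    draw_rect_rgb24(GST_BUFFER_DATA(buffer), 3 * IMAGE_WIDTH,
            face.x, face.y, face.width, face.height);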
Hi all
FYI, I am planning (I do not know exactly when yet) to develop an svgoverlay plugin, which would allow overlaying SVG content on a buffer, for the same kind of usage (drawing shapes over the video). I have already developed such a filter for VLC, and the structure of gstreamer should make it easy.

I have also tried another way (using videomixer and feeding it the output of gdkpixbufdec, which can decode SVG), but I have not managed (yet) to make it work. I have tried variations around the following pipeline:

    videotestsrc ! video/x-raw-yuv,width=320,height=240 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink
    filesrc location=/tmp/a.svg ! gdkpixbufdec ! videoscale ! video/x-raw-rgb,width=320,height=240 ! ffmpegcolorspace ! alpha alpha=0.5 ! mix.

Olivier
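[The same two-branch pipeline can be tried from C with gst_parse_launch(), which takes the textual description verbatim; an untested sketch:]

    GError *err = NULL;
    GstElement *pipe;

    pipe = gst_parse_launch(
            "videotestsrc ! video/x-raw-yuv,width=320,height=240 ! "
            "videomixer name=mix ! ffmpegcolorspace ! xvimagesink "
            "filesrc location=/tmp/a.svg ! gdkpixbufdec ! videoscale ! "
            "video/x-raw-rgb,width=320,height=240 ! ffmpegcolorspace ! "
            "alpha alpha=0.5 ! mix.",
            &err);
    if (pipe == NULL) {
        g_printerr("parse error: %s\n", err->message);
        g_error_free(err);
    }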
OK, thanks a lot Eric; I didn't know the buffer could be modified on the fly this way.
But what I don't really understand is how exactly the pipeline works. If I do it that way, will the pipeline take one frame from the camera, call the buffer-probe function, do the image-processing calculation, modify the buffer, and then send it to the screen? That processing takes a long time, and lots of frames arrive from the camera during the calculation. I'd like those to be dropped; then, when the calculation on the first frame is finished, the image-processing callback should take the next incoming frame (or the one just before it, but not the second one).

Actually it seems that all frames are shown, as if it does the calculation for the first, keeps the following ones in memory, and moves on to the second once the first is processed (or at least that's how it looks when I start the program). Maybe it's due to the "queue" element? (I took this from the maemo camera example, and I don't understand what the queue element is for.)

Thanks again for the explanations!

Kind regards,
Bruno
Hi,
You can draw directly on the XVideo drawable. Set autopaint-colorkey=FALSE on xvimagesink and read the colorkey back from that property. You then have to paint the drawable with the colorkey; the video will shine through in the areas painted with the key colour. If you paint a rectangle in a different colour, it will be drawn over the video.

This needs the latest gst-plugins-base (maybe even CVS). Check whether your xvimagesink has those properties.

Stefan
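[A sketch of Stefan's colour-key approach, using the property names he describes and assuming the colorkey property packs the colour as 0xRRGGBB:]

    gint colorkey = 0;
    GdkColor key;
    GdkGC *gc;

    g_object_set(screen_sink, "autopaint-colorkey", FALSE, NULL);
    g_object_get(screen_sink, "colorkey", &colorkey, NULL);

    /* Paint the whole widget with the key colour so the video shows
     * through; anything drawn in another colour stays above the video. */
    key.red   = ((colorkey >> 16) & 0xff) * 257;  /* scale 8 -> 16 bit */
    key.green = ((colorkey >> 8) & 0xff) * 257;
    key.blue  = (colorkey & 0xff) * 257;

    gc = gdk_gc_new(appdata->screen->window);
    gdk_gc_set_rgb_fg_color(gc, &key);
    gdk_draw_rectangle(appdata->screen->window, gc, TRUE, 0, 0,
            appdata->screen->allocation.width,
            appdata->screen->allocation.height);
    g_object_unref(gc);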
Hi, Bruno:
I think what you mention is not really a gstreamer-related topic. What you are worried about is the performance of the image processing, and there are many ways to improve that, such as reconsidering the image-processing algorithm, trying to parallelize the processing, and so on.

The `queue' element in gstreamer is just a container of buffers. It receives buffers and pushes them out until a limit is reached (max-bytes, max-time, max-buffers). We usually use this element to cache buffers and implement the `buffering' feature that is useful in a media player.

Eric Zhang
Okay, thanks a lot for this precious information.
With a pad probe I was able to draw on the xvimagesink buffer, so that part is okay now.

But I still can't find how to drop the frames sent by the camera during the image processing. I found that the queue element has a property called "leaky"; from the element's description:

    "Where the queue leaks, if at all. Default value: Not Leaky"

That should be what I'm looking for. If I change its value to 1, the queue element should drop incoming frames until the processing in the next element has finished, shouldn't it? I tried changing the value and didn't see any difference, so maybe I'm wrong.

Thanks again.

Bruno
Hi,
A queue only starts to leak when it is full. If you want one buffer at a time, make your queue one buffer long. What about this?

    gst-launch -v videotestsrc is-live=TRUE ! queue max-size-buffers=1 leaky=2 ! identity sleep-time=1000000 ! fakesink

The identity element sleeps 1 second per buffer, simulating 1 second of video processing. The queue leaks downstream (old buffers) and is one buffer long, so it keeps only the last one.

The problem might come from the wrong timestamps received by the image sink: there will be gaps, and I do not know how it will handle them.

Aurelien
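[In the application code the same configuration would look like this — a sketch; in the 0.10 queue element, leaky=2 means "leak downstream", i.e. drop the oldest buffers:]

    g_object_set(G_OBJECT(screen_queue),
            "max-size-buffers", 1,   /* keep only the newest frame */
            "leaky", 2,              /* leak downstream: drop old buffers */
            NULL);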
Bruno wrote:
> Okay, thanks a lot for all this precious information.
>
> With a pad probe I was able to draw on the xvimagesink buffer, so that
> part is okay now.
>
> But I still can't find how to drop the frames sent by the camera during
> the image processing. I found that the queue element has a property
> called "leaky", whose description reads:
>
> "Where the queue leaks, if at all.
> Default value: Not Leaky"
>
> That should be what I'm looking for. If I change its value to 1, the
> queue element should drop incoming frames until the next element has
> finished its processing, shouldn't it?
>
> I tried changing the value and didn't see any difference, so maybe I'm
> wrong.
>
> Thanks again.
>
> Bruno
>
> 2008/8/28, Eric Zhang <[hidden email]>:
> Hi, Bruno:
>
> I think what you mention is not really a GStreamer topic. What you are
> worried about is the performance of the image processing, and there are
> many ways to improve that, such as reconsidering the image-processing
> algorithm or parallelizing the processing.
>
> The `queue' element in GStreamer is just a container of buffers: it
> receives buffers and pushes them out until a limit is reached
> (max-bytes, max-time or max-buffers). We usually use this element to
> cache buffers and to implement `buffering' features, which is useful in
> a media player.
>
> Eric Zhang
>
> 2008/8/27 Bruno <[hidden email]>:
> Ok, thanks a lot Eric, I didn't know the buffer could be modified on
> the fly this way.
>
> But what I don't really understand is how exactly the pipeline works.
> If I do it that way, will the pipeline take one frame from the camera,
> call the buffer-probe function, do the image-processing calculation,
> modify the buffer, and then send it to the screen? That processing
> takes a long time, and lots of frames arrive from the cam during the
> calculation. I'd like them to be dropped; then, when the calculation on
> the first frame is finished, the image-processing callback should take
> the next incoming frame (or the one just before, but not the second
> one).
>
> Actually it seems that all frames are shown: it does the calculation
> for the first, keeps the next ones in memory, and when the first is
> processed, moves on to the second (or at least that is how it looks
> when I start the program). Maybe it's due to the "queue" element? (I
> took this from the maemo-camera example, and I don't get what the queue
> element is for.)
>
> Thanks again for the explanations!
>
> Kind regards,
> Bruno
>
> 2008/8/27, Eric Zhang <[hidden email]>:
> Hi, gstreamer-devel:
>
> Hello, Bruno. Actually I don't have much experience on this topic, and
> I was just going to give you some clues. AFAIK, you can add a buffer
> probe before the xvimagesink, check out the GstBuffer that is about to
> flow into it, and add your rectangle to that GstBuffer; that's it. You
> don't need to draw the buffer yourself, because xvimagesink will do it
> for you (just give it your drawing area's window id, which you have
> achieved before).
>
> You can refer to Chapter 18 of the GStreamer Application Development
> Manual; there is an example which inverts the image by adding a buffer
> probe and modifying the buffer on the fly. Maybe this can help you a
> little.
>
> Eric Zhang
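The manual's invert example boils down to roughly the following sketch (GStreamer 0.10). The 3-bytes-per-pixel video/x-raw-rgb layout and the WIDTH/HEIGHT constants are assumptions standing in for the negotiated format, not code from the thread:

/* Sketch: a buffer probe that touches the image in place before it
 * reaches the sink. Assumes 24-bit RGB frames of WIDTH x HEIGHT. */
static gboolean
cb_have_data(GstPad *pad, GstBuffer *buffer, gpointer u_data)
{
    gint i;
    guchar *data = GST_BUFFER_DATA(buffer);

    /* invert every byte, producing a negative image */
    for (i = 0; i < WIDTH * HEIGHT * 3; i++)
        data[i] = 255 - data[i];

    return TRUE;  /* TRUE lets the (modified) buffer continue downstream */
}

/* attach it, e.g. on the sink pad of the queue, as in the code quoted
 * later in this thread */
GstPad *pad = gst_element_get_static_pad(screen_queue, "sink");
gst_pad_add_buffer_probe(pad, G_CALLBACK(cb_have_data), NULL);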
> 2008/8/26 Bruno <[hidden email]>:
> Could you be a little more precise, please? How do I modify the buffer?
> How do I draw it on the screen afterwards? Should I still declare my
> screen widget from the appdata struct as a drawing area?
>
> Thanks for the help.
>
> 2008/8/26, Eric Zhang <[hidden email]>:
> Hi, gstreamer-devel:
>
> Oh, I think you can use a buffer/data probe to achieve this. Modify the
> buffer and add a rectangle on every frame. Or modify the xvimagesink. :)
>
> Eric Zhang
>
> 2008/8/26 Bruno <[hidden email]>:
> Yep, that is what I was doing before. Do you think I can draw
> rectangles over the cam image when using xvimagesink?
>
> 2008/8/26, Eric Zhang <[hidden email]>:
> Hi, gstreamer-devel:
>
> If you only want `ximagesink' or `xvimagesink' to draw images in your
> GtkDrawingArea, there is a very simple way to achieve this: just
> connect to the `expose-event' signal of the GtkDrawingArea and pass the
> window ID to the sink element:
>
> // Drawing on our drawing area
> g_signal_connect(G_OBJECT(area), "expose-event",
>         G_CALLBACK(expose_cb), NULL);
>
> /* Callback to be called when the drawing area is exposed */
> static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event,
>         gpointer data)
> {
>     // `play->videosink' is your video sink element
>     gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(play->videosink),
>             GDK_WINDOW_XWINDOW(widget->window));
>     return FALSE;
> }
>
> That's it. If you want to draw the image yourself while not using
> `xvimagesink' or `ximagesink', then I think this is a Gtk+ problem, not
> a GStreamer issue.
>
> Eric Zhang
>
> Bruno wrote:
>
> Hello all,
>
> I still have some questions about gstreamer.
>
> Actually I'd like to change the way my program works, in order to
> display a frame from the camera only once the image processing has been
> done, and with a rectangle over the face of the person.
>
> So I changed my pipeline (removed the screen_sink element), and I'd
> like to send a buffer from my image-processing function to the GTK
> drawing area where the camera image was displayed before. I tried to do
> it with a GTK drawing area (and with a GTK image too, with no success),
> but I can't find a way to change the image contained in the drawing
> area.
>
> Here is my code:
>
> ///// IMAGE PROCESSING CALLBACK
>
> /* Callback to be called when data goes through the pad */
> static gboolean process_frame(GstElement *video_sink,
>         GstBuffer *buffer, GstPad *pad, AppData *appdata)
> {
>     int x, y;
>     /* getting the pointer to the camera buffer */
>     unsigned char *data_photo = (unsigned char *) GST_BUFFER_DATA(buffer);
>
>     /* REMOVED PART WHERE THE COORDINATES OF THE POSITION OF THE FACE
>      * ARE CALCULATED */
>
>     /* THIS PART IS WHAT I TRIED, BUT I HAVE A SEGMENTATION FAULT WHEN
>      * CREATING THE PIXBUF */
>     GdkPixbuf *newscreen;
>     //newscreen = gdk_pixbuf_new_from_data(data_photo,
>     //        GDK_COLORSPACE_RGB,        /* RGB colorspace */
>     //        FALSE,                     /* No alpha channel */
>     //        8,                         /* Bits per RGB component */
>     //        IMAGE_WIDTH, IMAGE_HEIGHT, /* Dimensions */
>     //        3*IMAGE_WIDTH,             /* Bytes between lines (ie stride) */
>     //        NULL, NULL);               /* Callbacks */
>
>     gdk_draw_pixmap(GDK_DRAWABLE(appdata->screen),
>             appdata->screen->style->black_gc, GDK_DRAWABLE(newscreen),
>             0, 0, 0, 0, -1, -1);
>
>     return TRUE;
> }
>
> /////// PIPELINE
>
> /* Initialize the GStreamer pipeline. Below is a diagram
>  * of the pipeline that will be created:
>  *
>  * |Camera|  |CSP   |  |Screen|  |Screen|  |Image     |
>  * |src   |->|Filter|->|queue |->|sink  |->|processing|-> Display
>  */
> static gboolean initialize_pipeline(AppData *appdata,
>         int *argc, char ***argv)
> {
>     GstElement *pipeline, *camera_src, *screen_sink;
>     GstElement *screen_queue;
>     GstElement *csp_filter;
>     GstCaps *caps;
>     GstBus *bus;
>     GstPad *sinkpad;
>
>     /* Initialize GStreamer */
>     gst_init(argc, argv);
>
>     /* Create pipeline and attach a callback to its message bus */
>     pipeline = gst_pipeline_new("test-camera");
>
>     bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
>     gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);
>     gst_object_unref(GST_OBJECT(bus));
>
>     /* Save pipeline to the AppData structure */
>     appdata->pipeline = pipeline;
>
>     /* Create elements */
>     /* Camera video stream comes from a Video4Linux driver */
>     camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
>     /* Colorspace filter is needed to make sure that sinks understand
>      * the stream coming from the camera */
>     csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
>     /* Queue creates a new thread for the stream */
>     screen_queue = gst_element_factory_make("queue", "screen_queue");
>     /* Sink that shows the image on screen. Xephyr doesn't support the
>      * XVideo extension, so it needs to use ximagesink, but the device
>      * uses xvimagesink */
>     //screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");
>
>     sinkpad = gst_element_get_static_pad(screen_queue, "sink");
>     gst_pad_add_buffer_probe(sinkpad, G_CALLBACK(process_frame), appdata);
>
>     /* Check that elements are correctly initialized */
>     if(!(pipeline && camera_src /*&& screen_sink*/ && csp_filter &&
>             screen_queue))
>     {
>         g_critical("Couldn't create pipeline elements");
>         return FALSE;
>     }
>
>     /* Add elements to the pipeline. This has to be done prior to
>      * linking them */
>     gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
>             screen_queue, /*screen_sink,*/ NULL);
>
>     /* Specify what kind of video is wanted from the camera */
>     caps = gst_caps_new_simple("video/x-raw-rgb",
>             "width", G_TYPE_INT, IMAGE_WIDTH,
>             "height", G_TYPE_INT, IMAGE_HEIGHT,
>             "framerate", GST_TYPE_FRACTION, FRAMERATE, 1,
>             NULL);
>
>     /* Link the camera source and colorspace filter using the
>      * capabilities specified */
>     if(!gst_element_link_filtered(camera_src, csp_filter, caps))
>     {
>         return FALSE;
>     }
>     gst_caps_unref(caps);
>
>     /* Connect Colorspace Filter -> Screen Queue -> Screen Sink.
>      * This finalizes the initialization of the screen part of the
>      * pipeline */
>     if(!gst_element_link_many(csp_filter, screen_queue, /*screen_sink,*/
>             NULL))
>     {
>         return FALSE;
>     }
>
>     gst_element_set_state(pipeline, GST_STATE_PAUSED);
>
>     return TRUE;
> }
>
> /////// MAIN FUNCTION
>
> int main(int argc, char **argv)
> {
>     /* variables for face detection */
>     /* main structure for vjdetect */
>     pdata = (mainstruct*) calloc(1, sizeof(mainstruct));
>     /* Allocate memory for the array of face detections returned by
>      * the face detector (VjDetect). */
>     pdata->pFaceDetections = (FLY_Rect *)
>             calloc(MAX_NUMBER_OF_FACE_DETECTIONS, sizeof(FLY_Rect));
>     init(pdata);
>
>     AppData appdata;
>     appdata.expression = 0;
>     GtkWidget *hbox, *vbox_button, *vbox, *button1, *button2;
>
>     /* Initialize and create the GUI */
>     example_gui_initialize(
>             &appdata.program, &appdata.window,
>             &argc, &argv, "Expression Detector");
>
>     vbox = gtk_vbox_new(FALSE, 0);
>     hbox = gtk_hbox_new(FALSE, 0);
>     vbox_button = gtk_vbox_new(FALSE, 0);
>
>     gtk_box_pack_start(GTK_BOX(hbox), vbox, FALSE, FALSE, 0);
>     gtk_box_pack_start(GTK_BOX(hbox), vbox_button, FALSE, FALSE, 0);
>
>     appdata.screen = gtk_drawing_area_new();
>     gtk_widget_set_size_request(appdata.screen, 500, 380);
>     gtk_box_pack_start(GTK_BOX(vbox), appdata.screen, FALSE, FALSE, 0);
>
>     button1 = gtk_toggle_button_new_with_label("Run/Stop");
>     gtk_widget_set_size_request(button1, 170, 75);
>     gtk_box_pack_start(GTK_BOX(vbox_button), button1, FALSE, FALSE, 0);
>
>     button2 = gtk_toggle_button_new_with_label("Expressions ON/OFF");
>     gtk_widget_set_size_request(button2, 170, 75);
>     gtk_box_pack_start(GTK_BOX(vbox_button), button2, FALSE, FALSE, 0);
>
>     appdata.anger = gtk_image_new_from_file("./smileys/anger.jpg");
>     gtk_widget_set_size_request(appdata.anger, 160, 180);
>     appdata.disgust = gtk_image_new_from_file("./smileys/disgust.jpg");
>     gtk_widget_set_size_request(appdata.disgust, 160, 180);
>     appdata.fear = gtk_image_new_from_file("./smileys/fear.jpg");
>     gtk_widget_set_size_request(appdata.fear, 160, 180);
>     appdata.happy = gtk_image_new_from_file("./smileys/happy.jpg");
>     gtk_widget_set_size_request(appdata.happy, 160, 180);
>     appdata.neutral = gtk_image_new_from_file("./smileys/neutral.jpg");
>     gtk_widget_set_size_request(appdata.neutral, 160, 180);
>     appdata.sad = gtk_image_new_from_file("./smileys/sad.jpg");
>     gtk_widget_set_size_request(appdata.sad, 160, 180);
>     appdata.surprise = gtk_image_new_from_file("./smileys/surprise.jpg");
>     gtk_widget_set_size_request(appdata.surprise, 160, 180);
>     appdata.unknown = gtk_image_new_from_file("./smileys/unknown.jpg");
>     gtk_widget_set_size_request(appdata.unknown, 160, 180);
>
>     appdata.smiley = gtk_image_new_from_file("./smileys/unknown.jpg");
>     gtk_widget_set_size_request(appdata.smiley, 160, 180);
>     gtk_box_pack_start(GTK_BOX(vbox_button), appdata.smiley, FALSE,
>             FALSE, 0);
>
>     g_signal_connect(G_OBJECT(button1), "clicked",
>             G_CALLBACK(button1_pressed), &appdata);
>     g_signal_connect(G_OBJECT(button2), "clicked",
>             G_CALLBACK(button2_pressed), &appdata);
>
>     gtk_container_add(GTK_CONTAINER(appdata.window), hbox);
>
>     /* Initialize the GStreamer pipeline */
>     if(!initialize_pipeline(&appdata, &argc, &argv))
>     {
>         hildon_banner_show_information(
>                 GTK_WIDGET(appdata.window),
>                 "gtk-dialog-error",
>                 "Failed to initialize pipeline");
>     }
>
>     g_signal_connect(G_OBJECT(appdata.window), "destroy",
>             G_CALLBACK(destroy_pipeline), &appdata);
>
>     /* Begin the main application */
>     example_gui_run(appdata.program, appdata.window);
>
>     /* Free the GStreamer resources. Elements added
>      * to the pipeline will be freed automatically */
>
>     return 0;
> }
>
> What I'd like to do is modify the data_photo buffer to draw a rectangle
> in it (in the process_frame function), and then draw the content in the
> appdata.screen GtkWidget. (By the way, screen is declared as a
> GtkWidget * in the appdata structure.)
>
> Thanks in advance for your help!
> Bruno
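Two sketches of what is being asked for here, both hedged: the helper name draw_rect, the (x, y, w, h) face box and the 3-bytes-per-pixel RGB layout are assumptions, not code from the thread. The segfault above is consistent with newscreen being used while the gdk_pixbuf_new_from_data() call that should initialize it is commented out; gdk_draw_pixmap() also expects a drawable rather than a pixbuf, and drawing must happen from the GTK main thread, not a streaming thread.

/* Sketch 1: draw a 1-pixel red rectangle outline straight into a
 * 24-bit RGB frame. (x, y, w, h) would come from the face detector;
 * bounds checking is omitted for brevity. */
static void
draw_rect(unsigned char *frame, int x, int y, int w, int h)
{
    int i;

    for (i = 0; i < w; i++) {   /* top and bottom edges */
        unsigned char *top = frame + 3 * (y * IMAGE_WIDTH + x + i);
        unsigned char *bot = frame + 3 * ((y + h - 1) * IMAGE_WIDTH + x + i);
        top[0] = bot[0] = 255;  /* R */
        top[1] = bot[1] = 0;    /* G */
        top[2] = bot[2] = 0;    /* B */
    }
    for (i = 0; i < h; i++) {   /* left and right edges */
        unsigned char *lft = frame + 3 * ((y + i) * IMAGE_WIDTH + x);
        unsigned char *rgt = frame + 3 * ((y + i) * IMAGE_WIDTH + x + w - 1);
        lft[0] = rgt[0] = 255;
        lft[1] = rgt[1] = 0;
        lft[2] = rgt[2] = 0;
    }
}

/* Sketch 2 (GTK+ 2): wrap the raw bytes in a GdkPixbuf and paint it
 * onto the realized drawing area; appdata->screen->window is its
 * GdkWindow, and data_photo must stay alive while the pixbuf is used. */
GdkPixbuf *newscreen = gdk_pixbuf_new_from_data(data_photo,
        GDK_COLORSPACE_RGB,         /* RGB colorspace */
        FALSE,                      /* no alpha channel */
        8,                          /* bits per component */
        IMAGE_WIDTH, IMAGE_HEIGHT,  /* dimensions */
        3 * IMAGE_WIDTH,            /* rowstride in bytes */
        NULL, NULL);                /* no destroy callback */

gdk_draw_pixbuf(appdata->screen->window,
        appdata->screen->style->black_gc,
        newscreen,
        0, 0,                       /* source x, y */
        0, 0,                       /* destination x, y */
        IMAGE_WIDTH, IMAGE_HEIGHT,
        GDK_RGB_DITHER_NONE, 0, 0);

g_object_unref(newscreen);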
Hi, Bruno:
If you just want to drop frames in your probe callback, that's very simple: return FALSE from the probe function. Refer to the API manual for `gst_pad_add_data_probe' for more details. About the `leaky' property of queue, Aurelien Grimaud has already given a good explanation.

Eric Zhang

2008/8/29 Bruno <[hidden email]>:
> Okay, thanks a lot for all this precious information.
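A minimal sketch of that suggestion, paired with a busy flag (GStreamer 0.10 buffer-probe signature; process_one_frame() is a placeholder, not a function from the thread):

/* Sketch: returning FALSE from a 0.10 buffer probe drops the buffer. */
static volatile gint busy = 0;

static gboolean
drop_while_busy(GstPad *pad, GstBuffer *buffer, gpointer u_data)
{
    /* a frame is already being processed: discard this one */
    if (!g_atomic_int_compare_and_exchange(&busy, 0, 1))
        return FALSE;

    process_one_frame(GST_BUFFER_DATA(buffer));  /* placeholder */

    g_atomic_int_set(&busy, 0);
    return TRUE;  /* let the processed frame through to the sink */
}

If the processing runs synchronously inside the probe, as sketched, the streaming thread simply blocks and the upstream leaky queue does the dropping; the busy flag only earns its keep once the work is handed off to another thread.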