Dear GStreamer developers,

I'm developing GStreamer filters that either use general neural network models as media filters or help such filters (transforming, muxing/demuxing, or converting tensors): https://github.com/nnsuite/nnstreamer

One concern is that we have many use cases with heavy neural networks (e.g., latencies of over 100 ms, and fluctuating) on live video streams from cameras, and we want to drop old pending video frames whenever a new frame arrives while the filter is still busy with the previous one (without dropping the frame that is already being processed).

In other words, in a pipeline like this:

Camera (v4l2) --> Neural Network (tensor_converter + tensor_filter) --> sink

let's assume the camera runs at 60 FPS and the neural network processes at 1 FPS (although it's not really accurate to quote an exact FPS for these networks, as they fluctuate a lot). Then we want to process the 0th camera frame, the 60th camera frame, then the 120th, and so on.

With common configurations using large queues, it processes the 0th, 1st, 2nd frames and, once the queue is full, drops the newer frames rather than the older ones.

Could you please enlighten me on which document to look at, or which part to implement, for this?

Cheers,
MyungJoo
Hi, have you tried a queue with max-size-buffers=1, max-size-bytes=0, max-size-time=0, leaky=2? With leaky=2 ("downstream") the queue drops its oldest buffer when a new one arrives while it is full.
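In the pipeline from the original mail that queue would go right in front of the converter/filter pair. A rough sketch (the framework and model path given to tensor_filter are placeholders here, not taken from the original mail):

    # leaky=2 ("downstream"): when the single slot is occupied, the queue drops
    # its oldest buffer, so the filter always receives the freshest frame.
    gst-launch-1.0 v4l2src ! videoconvert ! \
        queue max-size-buffers=1 max-size-bytes=0 max-size-time=0 leaky=2 ! \
        tensor_converter ! \
        tensor_filter framework=tensorflow-lite model=/path/to/model.tflite ! \
        fakesink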
Hi.
Oh, it appears that with an additional queue element in front of the neural network filter this is going to work. Thanks!

However, is there a way to do this without adding a queue between elements? I guess the answer is "no" for any simple solution, since we'd need a new thread to do it.

Thanks so much!

Cheers,
MyungJoo

--
MyungJoo Ham (함명주), Ph.D.
Autonomous Machine Lab., AI Center, Samsung Research.
Cell: +82-10-6714-2858
I guess you could play around with the QoS features in GStreamer, but
for this, I think relying on a queue is actually much simpler. max-size-buffers=1, max-size-bytes=0, max-size-time=0, leaky=2 gives you a queue that stores just one frame, and you don't need more than that anyway.

Why is the additional thread a problem?
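If it helps to see the behaviour without the neural-network elements, identity with sleep-time can stand in for a filter that needs roughly a second per frame (just a sketch; the values are only for illustration):

    # identity sleeps ~1 s per buffer, simulating the slow filter; with the
    # leaky single-slot queue only the most recent frame reaches it.
    gst-launch-1.0 -v videotestsrc is-live=true ! video/x-raw,framerate=60/1 ! \
        queue max-size-buffers=1 max-size-bytes=0 max-size-time=0 leaky=2 ! \
        identity sleep-time=1000000 ! fakesink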
On Monday, 30 July 2018 at 15:39 +0900, MyungJoo Ham wrote:
> However, is there a way to do this without adding a queue between
> elements?

You could use a videorate element and simply reduce the frame rate:

v4l2src ! videorate ! video/x-raw,framerate=1/1 ! ...

Otherwise you would have to handle this in your own element (basically doing what videorate does, in your code).
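For completeness, a runnable form of that suggestion could look like the line below (drop-only and the 1/1 rate are just examples, and the tensor_filter properties are placeholders). Note that this fixes the rate up front, whereas the leaky queue adapts to however fast the filter actually runs:

    # videorate drops frames so that at most one per second reaches the filter.
    gst-launch-1.0 v4l2src ! videorate drop-only=true ! video/x-raw,framerate=1/1 ! \
        videoconvert ! tensor_converter ! \
        tensor_filter framework=tensorflow-lite model=/path/to/model.tflite ! \
        fakesink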