Hi,
I'm developing an application to capture video (no audio) from multiple cameras simultaneously, encode the streams, and write them out to segmented files. I was hoping I could describe the design here and what I think it achieves, so that you good folks could confirm or refute my reasoning.

My requirement is that each file contain all video for a given (wall-clock) time bucket for a given camera. Say each time bucket is 5 s wide. So for 2 cameras started at 12:00:00 exactly, I'd want the files 12:00:00-camera1, 12:00:00-camera2, 12:00:05-camera1, and 12:00:05-camera2 for the first two buckets.

Monospace diagram of pipeline:

```
                                 -----------------------------
camera1src, converters, etc. --> |                           | --> splitmuxsink
                                 | multiqueue                |
camera2src, converters, etc. --> | max-size-buffers=1        | --> splitmuxsink
        ^                        | sync-by-running-time=true |         ^
        .                        |                           |         .
        . etc.                   |                           |         . etc.
        .                        |                           |         .
                                 |   split-at-running-time   |
                                 |        pad probe          |
                                 -----------------------------
```

Key points and questions:
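To make the splitting mechanism concrete, here's a minimal sketch of the pad probe I have in mind. It assumes GStreamer >= 1.16 (for splitmuxsink's "split-at-running-time" action signal); the SplitCtx struct, BUCKET_DURATION constant, and the way element handles are passed around are placeholders of mine, not anything prescribed by the API:

```c
#include <gst/gst.h>

#define BUCKET_DURATION (5 * GST_SECOND)
#define NUM_CAMERAS 2

typedef struct {
  GstElement *splitmuxsinks[NUM_CAMERAS]; /* one splitmuxsink per camera */
  guint64 next_split;                     /* next bucket boundary (running time) */
} SplitCtx;

static GstPadProbeReturn
bucket_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  SplitCtx *ctx = user_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstEvent *ev;
  const GstSegment *seg;
  guint64 rt;

  if (buf == NULL || !GST_BUFFER_PTS_IS_VALID (buf))
    return GST_PAD_PROBE_OK;

  /* Convert the buffer PTS to running time via the pad's current segment. */
  ev = gst_pad_get_sticky_event (pad, GST_EVENT_SEGMENT, 0);
  if (ev == NULL)
    return GST_PAD_PROBE_OK;
  gst_event_parse_segment (ev, &seg);
  rt = gst_segment_to_running_time (seg, GST_FORMAT_TIME, GST_BUFFER_PTS (buf));
  gst_event_unref (ev);

  /* Once a buffer reaches the next bucket boundary, tell every
   * splitmuxsink to start a new file at exactly that running time,
   * so all cameras split at the same instant. */
  if (rt != GST_CLOCK_TIME_NONE && rt >= ctx->next_split) {
    guint i;
    for (i = 0; i < NUM_CAMERAS; i++)
      g_signal_emit_by_name (ctx->splitmuxsinks[i],
          "split-at-running-time", ctx->next_split);
    ctx->next_split += BUCKET_DURATION;
  }

  return GST_PAD_PROBE_OK;
}

/* Setup: with sync-by-running-time=true the streams advance together,
 * so observing a single multiqueue src pad should be enough. */
static void
attach_bucket_probe (GstPad *mq_src_pad, SplitCtx *ctx)
{
  gst_pad_add_probe (mq_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      bucket_probe, ctx, NULL);
}
```

The intent is that, since every splitmuxsink receives the same running-time boundary, fragment boundaries line up across cameras even though only one stream is observed.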
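For the per-bucket, per-camera filenames, my assumption is that splitmuxsink's "format-location" signal is the right hook. This is a hypothetical sketch (the camera names and ".mp4" extension are placeholders) that stamps each fragment with the local wall-clock time at which splitmuxsink opens it:

```c
#include <gst/gst.h>
#include <glib.h>

/* Callback for splitmuxsink's "format-location" signal.
 * user_data carries the camera name ("camera1", "camera2", ...). */
static gchar *
format_location_cb (GstElement *splitmux, guint fragment_id,
    gpointer user_data)
{
  const gchar *camera = user_data;
  GDateTime *now = g_date_time_new_now_local ();
  /* e.g. "12:00:05-camera1.mp4" -- ':' may need replacing on some
   * filesystems; kept here to match the naming above. */
  gchar *name = g_strdup_printf ("%02d:%02d:%02d-%s.mp4",
      g_date_time_get_hour (now), g_date_time_get_minute (now),
      g_date_time_get_second (now), camera);
  g_date_time_unref (now);
  return name;
}

/* Setup: one splitmuxsink per camera, each tagged with its camera name. */
static void
connect_location (GstElement *splitmuxsink, const gchar *camera)
{
  g_signal_connect (splitmuxsink, "format-location",
      G_CALLBACK (format_location_cb), (gpointer) camera);
}
```

One caveat I can see: the wall clock is read when splitmuxsink requests the location, which may be slightly after the bucket boundary, so rounding the timestamp down to the nearest 5 s bucket would probably be more robust.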
Thanks for looking; any comments on any of this are very much appreciated.

John-Mark