Hi,
I'm developing an application to capture video (no audio) from multiple cameras simultaneously, encode them, and write them out to segmented files. I was hoping I could describe the design here and what I think it's achieving, so that you good folks could confirm or refute my reasoning.

My requirement is that each file contain all video for a given (wall clock) time bucket for a given camera. Say each time bucket is 5s wide. So for 2 cameras started at 12:00:00 exactly, I'd want files 12:00:00-camera1, 12:00:00-camera2, 12:00:05-camera1, 12:00:05-camera2, for the first two buckets.

Monospace diagram of the pipeline:

```
                                 ---------------------------------
camera1src, converters, etc. --> |                               | --> splitmuxsink
                                 |  multiqueue                   |
camera2src, converters, etc. --> |  max-size-buffers=1           | --> splitmuxsink
              ^                  |  sync-by-running-time=true    |          ^
              .                  |                               |          .
              . etc.             |                               |          . etc.
              .                  |                               |          .
   split-at-running-time         ---------------------------------       pad probe
```

Key points and questions
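The splitting would be driven from a buffer probe on one of the multiqueue src pads: watch the running time of each buffer, and whenever it crosses a 5s bucket boundary, fire splitmuxsink's "split-at-running-time" action signal on every sink, so all cameras cut over to a new file at the same running time. Something like this untested sketch (the SplitCtx struct and the variable names are just mine for illustration, and it assumes a GStreamer version new enough to have that signal):

```
/* Untested sketch: buffer probe that asks every splitmuxsink to start a
 * new file at the next 5 s bucket boundary. Names are illustrative. */
#include <gst/gst.h>

#define BUCKET (5 * GST_SECOND)

typedef struct {
  GstElement **splitsinks;   /* one splitmuxsink per camera */
  guint        n_sinks;
  GstClockTime next_split;   /* next boundary, starts as GST_CLOCK_TIME_NONE */
} SplitCtx;

static GstPadProbeReturn
bucket_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  SplitCtx *ctx = user_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstEvent *seg_ev;
  const GstSegment *segment;
  GstClockTime rt;
  guint i;

  if (!GST_BUFFER_PTS_IS_VALID (buf))
    return GST_PAD_PROBE_OK;

  /* Convert the buffer PTS to running time using the pad's sticky segment */
  seg_ev = gst_pad_get_sticky_event (pad, GST_EVENT_SEGMENT, 0);
  if (!seg_ev)
    return GST_PAD_PROBE_OK;
  gst_event_parse_segment (seg_ev, &segment);
  rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
                                    GST_BUFFER_PTS (buf));
  gst_event_unref (seg_ev);

  if (ctx->next_split == GST_CLOCK_TIME_NONE)
    ctx->next_split = BUCKET;

  if (rt != GST_CLOCK_TIME_NONE && rt >= ctx->next_split) {
    /* Ask every splitmuxsink to split at the same running time, so the
     * file boundaries line up across cameras. */
    for (i = 0; i < ctx->n_sinks; i++)
      g_signal_emit_by_name (ctx->splitsinks[i], "split-at-running-time",
                             ctx->next_split);
    ctx->next_split += BUCKET;
  }

  return GST_PAD_PROBE_OK;
}
```

The probe would be attached to one multiqueue src pad with gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, bucket_probe, ctx, NULL), with the multiqueue configured as in the diagram above.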
I'm hoping this will work properly. The alternative design I've been considering is to drop the splitmuxsinks and instead swap out everything downstream of the multiqueue for new encoder -> filesink elements. But I feel like the design above is simpler to implement?

Thanks for looking, any comments on any of this are very much appreciated.

John-Mark
Just a few comments, no hard advice (sorry).
I recently created a sort of NVR that would receive video from 2-4 IP cams, show the live view, and record all streams on demand. When recording, each camera streams to its own file, 1 file per camera per record session. In general, I could replay the videos "synchronized" by just starting to play them all at the same time. The streams were all probably within 50-100 ms of each other (wall-clock time), so if you want something that is pretty much visually synchronized, you don't really have to mess with pipeline clocks or specific gst elements. Anything more seriously synchronized is beyond my experience.

Implementation details, if you're curious: I created one pipeline per camera, and all pipelines were managed by the same g_main_loop so I could start/stop recordings in one place. Starting and stopping pretty much consisted of linking/unlinking a tee to a sub-pipeline with an encoder and filesink (rough sketch at the end of this mail). Of course, each pipeline took 2-5 seconds to initialize as it connected to the cameras, but once that was done, starting and stopping recordings took a negligible amount of time.

After receiving and decoding the camera feeds, I had a videorate element to make sure the streams stayed a consistent 30 FPS, even when the connection to a camera dropped and restarted (it would just repeat the last frame until the connection was restored). This forced a uniformity between camera streams that helps keep synchronization during longer recordings (the longest I tried was 70 hours).

Hope this helps!
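For reference, the start-recording step was basically the usual dynamic-tee pattern. Roughly like the sketch below; it's reconstructed from memory, and the element choices (x264enc/h264parse/mp4mux) and the names are just illustrative, not exactly what I used:

```
/* Rough sketch: attach a recording branch to a running pipeline's tee.
 * Stopping is the mirror image: block the tee src pad with a probe,
 * unlink, push EOS into the branch, then remove it once EOS hits the sink. */
#include <gst/gst.h>

static GstPad *
start_recording (GstElement *pipeline, GstElement *tee, const gchar *filename)
{
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstElement *enc   = gst_element_factory_make ("x264enc", NULL);
  GstElement *parse = gst_element_factory_make ("h264parse", NULL);
  GstElement *mux   = gst_element_factory_make ("mp4mux", NULL);
  GstElement *sink  = gst_element_factory_make ("filesink", NULL);
  GstPad *tee_src, *queue_sink;

  g_object_set (sink, "location", filename, NULL);

  gst_bin_add_many (GST_BIN (pipeline), queue, enc, parse, mux, sink, NULL);
  gst_element_link_many (queue, enc, parse, mux, sink, NULL);

  /* Bring the new elements up to the running pipeline's state */
  gst_element_sync_state_with_parent (queue);
  gst_element_sync_state_with_parent (enc);
  gst_element_sync_state_with_parent (parse);
  gst_element_sync_state_with_parent (mux);
  gst_element_sync_state_with_parent (sink);

  /* Hook the branch up to the tee; data starts flowing into the file */
  tee_src = gst_element_get_request_pad (tee, "src_%u");
  queue_sink = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (tee_src, queue_sink);
  gst_object_unref (queue_sink);

  return tee_src;   /* keep it around for unlinking when recording stops */
}
```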