Synchronised video capture from multiple cameras with splitmuxsink


john-mark
Hi,

I'm developing an application to capture video (no audio) from multiple cameras simultaneously, encode the streams, and write them out to segmented files. I was hoping I could describe the design here and what I think it achieves, so that you good folks can confirm or refute my reasoning.

My requirement is that each file contain all video for a given (wall clock) time bucket for a given camera. Say each time bucket is 5s wide. So for 2 cameras started at 12:00:00 exactly, I'd want files 12:00:00-camera1, 12:00:00-camera2, 12:00:05-camera1, 12:00:05-camera2, for the first two buckets.
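For the naming, my plan is to use splitmuxsink's "format-location" signal, roughly like the sketch below (the 5 s bucket rounding and the camera name passed as user data are my own illustration, not anything splitmuxsink provides):

```c
/* Sketch: name each fragment after its wall-clock bucket and camera.
 * splitmuxsink emits "format-location" before opening each fragment
 * and takes ownership of the returned string. */
static gchar *
format_location_cb (GstElement *splitmux, guint fragment_id, gpointer user_data)
{
  const gchar *camera = user_data;            /* e.g. "camera1" (illustrative) */
  gint64 secs = g_get_real_time () / G_USEC_PER_SEC;
  secs -= secs % 5;                           /* round down to the 5 s bucket */
  GDateTime *bucket = g_date_time_new_from_unix_local (secs);
  gchar *stamp = g_date_time_format (bucket, "%H:%M:%S");
  gchar *name = g_strdup_printf ("%s-%s.mp4", stamp, camera);
  g_free (stamp);
  g_date_time_unref (bucket);
  return name;
}

/* g_signal_connect (splitmuxsink1, "format-location",
 *                   G_CALLBACK (format_location_cb), (gpointer) "camera1"); */
```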

Monospace diagram of pipeline:
```
                                 -----------------------------
camera1src, converters, etc. --> |                           | --> splitmuxsink
                                 | multiqueue                |
camera2src, converters, etc. --> | max-size-buffers=1        | --> splitmuxsink
                              ^  | sync-by-running-time=true |      ^
    .                         |  |                           |      |   .
    . etc.                    |  -----------------------------      |   . etc.
    .                         |                                     |   .
                              |        split-at-running-time        |
                          pad probe --------------------------------'
```
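In code I expect the wiring to look roughly like this (a sketch only; make_camera_branch() and make_splitmux_branch() are placeholders for the per-camera chains in the diagram, assumed to return the outer src/sink pads):

```c
/* Sketch: one multiqueue shared by every camera, one splitmuxsink each. */
GstElement *mq = gst_element_factory_make ("multiqueue", "mq");
g_object_set (mq,
    "max-size-buffers", 1,        /* hold at most one buffer per stream */
    "max-size-bytes", 0,          /* disable the byte limit...          */
    "max-size-time", (guint64) 0, /* ...and the time limit              */
    "sync-by-running-time", TRUE,
    NULL);
gst_bin_add (GST_BIN (pipeline), mq);

for (guint i = 0; i < n_cameras; i++) {
  gchar *sink_name = g_strdup_printf ("sink_%u", i);
  gchar *src_name = g_strdup_printf ("src_%u", i);
  GstPad *mq_sink = gst_element_request_pad_simple (mq, sink_name);
  GstPad *mq_src = gst_element_get_static_pad (mq, src_name); /* created with the sink */

  gst_pad_link (make_camera_branch (pipeline, i), mq_sink);   /* src ! ... ! mq */
  gst_pad_link (mq_src, make_splitmux_branch (pipeline, i));  /* mq ! enc ! splitmuxsink */

  g_free (sink_name);
  g_free (src_name);
}
```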

Key points and questions
  • By putting everything in a single pipeline, I'm assuming we get the *start* of capture synchronised (whether the frames themselves are synchronised will obviously depend on the hardware)?
  • I've read that a multiqueue with max-size-buffers is how you synchronise multiple video streams; is this correct? Won't it cause dropped frames if any camera runs even slightly faster than another? How can I synchronise and still protect against dropping frames? Do I need queues on each branch before the multiqueue?
  • I don't really understand the documentation for the sync-by-running-time option. Is this what I want?
  • The docs suggest calling split-at-running-time from a pad probe to prevent race conditions. I think doing this from one (any arbitrary one) of the src pads before the multiqueue should be safe, and I can look at the PTS of the buffers to decide when to schedule the next split times (see the sketch after this list).
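
Concretely, I'm imagining the probe along these lines (a sketch: the splitmuxsinks array and announcing each boundary one bucket in advance are my own assumptions, and I'm treating PTS as running time, which only holds while the segment starts at zero):

```c
/* Sketch: watch buffer PTS on one pre-multiqueue src pad and announce
 * each 5 s boundary to every splitmuxsink one bucket ahead via the
 * "split-at-running-time" action signal. */
#define BUCKET (5 * GST_SECOND)

static GstPadProbeReturn
schedule_split_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static GstClockTime scheduled = BUCKET;   /* next boundary to announce */
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstClockTime pts = GST_BUFFER_PTS (buf);  /* ~running time if segment starts at 0 */
  GPtrArray *splitmuxsinks = user_data;     /* all splitmuxsinks (illustrative) */

  if (GST_CLOCK_TIME_IS_VALID (pts) && pts + BUCKET >= scheduled) {
    for (guint i = 0; i < splitmuxsinks->len; i++)
      g_signal_emit_by_name (g_ptr_array_index (splitmuxsinks, i),
          "split-at-running-time", (guint64) scheduled);
    scheduled += BUCKET;
  }
  return GST_PAD_PROBE_OK;
}

/* gst_pad_add_probe (camera1_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                    schedule_split_cb, splitmuxsinks, NULL); */
```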

I'm hoping this will work properly. The alternative design I've been considering is to drop the splitmuxsinks and instead swap out everything downstream of the multiqueue for fresh encoder -> filesink elements at each boundary. But I feel like the design above is simpler to implement?

Thanks for looking; any comments on any of this are very much appreciated.

John-Mark

Re: Synchronised video capture from multiple cameras with splitmuxsink

gotsring
Just a few comments, no hard advice (sorry).

I recently created a sort of NVR that would receive video from 2-4 IP cams,
show the live view, and record all streams on demand. When recording, each
camera streams to its own file, one file per camera per record session. In
general, I could replay the videos "synchronized" just by starting them all
at the same time. The streams were probably all within 50-100 ms of each
other (wall-clock time), so if you want something that is pretty much
visually synchronized, you don't really have to mess with pipeline clocks or
specific gst elements. Anything more tightly synchronized is beyond my
experience.

Implementation details if you're curious:
I created one pipeline per camera, and all pipelines were managed by the
same g_main_loop so I could start/stop recordings in one place. Starting
and stopping pretty much consisted of linking/unlinking a tee to a
sub-pipeline with an encoder and filesink (see the sketch below). Each
pipeline took 2-5 seconds to initialize as it connected to its camera, but
once that was done, starting and stopping recordings took a negligible
amount of time.
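
If it's useful, the start-recording step looked roughly like this (just a sketch; rec_bin stands in for my encoder+filesink sub-pipeline, exposed through a ghost "sink" pad):

```c
/* Sketch: attach a recording bin to a running tee on demand.
 * 'rec_bin' is a placeholder for the encoder+filesink sub-pipeline. */
GstPad *tee_pad = gst_element_request_pad_simple (tee, "src_%u");

gst_bin_add (GST_BIN (pipeline), rec_bin);
GstPad *rec_sink = gst_element_get_static_pad (rec_bin, "sink");
gst_pad_link (tee_pad, rec_sink);
gst_object_unref (rec_sink);

/* Bring the new bin up to the running pipeline's state so data flows. */
gst_element_sync_state_with_parent (rec_bin);

/* Stopping is the reverse: block tee_pad with an idle probe, unlink,
 * push EOS into the bin so the muxer finalizes the file, then remove it. */
```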

After receiving and decoding the camera feeds, I had a videorate element to
make sure each stream stayed at a consistent 30 FPS, even when the
connection to a camera dropped and restarted (videorate would just repeat
the last frame until the connection was restored). This forced a uniformity
between the camera streams that helped keep them synchronized during longer
recordings (the longest I tried was 70 hours). A minimal snippet follows.
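
The rate-normalizing part of each branch was essentially just this (sketch):

```c
/* Sketch: videorate plus a capsfilter pins each decoded branch to a
 * constant 30 fps, duplicating the last frame across gaps. */
GstElement *rate = gst_element_factory_make ("videorate", NULL);
GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
GstCaps *caps = gst_caps_from_string ("video/x-raw,framerate=30/1");
g_object_set (filter, "caps", caps, NULL);
gst_caps_unref (caps);
/* ... decoder ! rate ! filter ! tee ... */
```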

Hope this helps!


