The author has deleted this message.
The videorate element doesn't change the rate at which the hardware is capturing, so my guess is that it has something to do with that. It is possible that the muxer was being stalled, which in turn would fill the audio pipeline and would eventually lead to sample loss on a live source. However, I am not experienced with the Windows elements, so I can only speculate.

One thing you may want to test is adding an audiorate element in the audio branch to go with the videorate (see the sketch after this message). Another would be to check what the actual framerate is with the higher exposure and set the pipeline caps to that framerate rather than 30. The muxer may be expecting frames at the target rate while the source can't report that it is capturing at a lower rate, leading to the audio pipeline being stalled.

Dimitrios

On Fri, Jan 13, 2017 at 9:14 AM, Brendan Lockhart <[hidden email]> wrote:
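Roughly what that would look like as a gst-launch pipeline (only a sketch, since the original pipeline isn't visible here: the 15/1 capture rate, the 1024x768 JPEG caps, the audio source directsoundsrc and the encoders x264enc / voaacenc are all guesses, so swap in whatever you are actually using):

gst-launch-1.0 -e \
    flvmux name=mux streamable=true ! filesink location=out.flv \
    ksvideosrc device-index=0 ! image/jpeg,width=1024,height=768,framerate=15/1 ! \
        videorate ! jpegdec ! videoconvert ! x264enc tune=zerolatency ! h264parse ! \
        queue ! mux. \
    directsoundsrc ! audiorate ! audioconvert ! audioresample ! voaacenc ! \
        queue ! mux.

The -e flag makes gst-launch send EOS on Ctrl-C so flvmux can finish writing the file, and audiorate sits right after the audio source so any gaps in the capture are filled before the muxer sees them.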
The author has deleted this message.
Maybe the video frames are not being timestamped correctly, or you need to wait until the first frame is available. You can try changing the behavior of videorate to start from the first received frame using skip-to-first, but the default behavior should already be doing what you need for the pipeline to work.

Another minor thing: since the camera has a variable rate, you can try setting caps before the videorate element to explicitly state that the rate is variable. I don't think it will make much of a difference, but it is worth taking a shot. You can do that this way:

"ksvideosrc device-index=0 ! image/jpeg,framerate=0/1 ! videorate ! image/jpeg,width=1024,height=..."

Also, if you haven't done this yet, try debugging the muxer using the "--gst-debug=flvmux:5" option on gst-launch.

The last thing I can think of is that in some cases encoders/muxers need a certain number of frames before they start producing results. Try increasing the size of your queues in the audio pipeline to 5-10 seconds so that the video branch has enough time to produce enough data (see the sketch after this message).

If none of those work then I am kinda stumped. Maybe someone else can give a bit of insight on what we may be missing.

On Sat, Jan 14, 2017 at 10:22 AM, Brendan Lockhart <[hidden email]> wrote:
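Putting those suggestions together, something along these lines (again just a sketch: the 1024x768 size, the 30/1 target rate, directsoundsrc and the encoders are guesses, so keep your own elements where they differ):

gst-launch-1.0 --gst-debug=flvmux:5 -e \
    flvmux name=mux streamable=true ! filesink location=out.flv \
    ksvideosrc device-index=0 ! image/jpeg,framerate=0/1 ! \
        videorate skip-to-first=true ! image/jpeg,width=1024,height=768,framerate=30/1 ! \
        jpegdec ! videoconvert ! x264enc tune=zerolatency ! h264parse ! queue ! mux. \
    directsoundsrc ! audiorate ! audioconvert ! audioresample ! voaacenc ! \
        queue max-size-time=10000000000 max-size-buffers=0 max-size-bytes=0 ! mux.

The queue limit is in nanoseconds, so 10000000000 is 10 seconds; setting max-size-buffers and max-size-bytes to 0 means the time limit is the only cap, which gives the video branch room to start producing data.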