I have been receiving these error messages when processing mkv files to mp4:

    0:00:00.304415614 11230 0x2e250a0 ERROR libav :0:: Missing reference picture
    0:00:00.304429988 11230 0x2e250a0 ERROR libav :0:: decode_slice_header error

I am attempting to track them down and catch them within my python code.

    GObject.threads_init()
    Gst.init()
    Gst.debug_set_active(True)
    Gst.debug_set_default_threshold(1)

    self.pipeline_front = Gst.Pipeline()
    self.pipeline_front.get_bus().set_sync_handler(self.check_bus1)
    ...
    self.elements['avdec_h264'] = Gst.ElementFactory.make('avdec_h264')
    ...
    self.pipeline_front.set_state(Gst.State.PAUSED)
    ...

    def check_bus1(self, bus, msg):
        label = 'Bus1'
        if msg.type == Gst.MessageType.STREAM_STATUS:
            LOG.warn('{} {} BUS {}'.format(label, msg.type, msg.parse_stream_status()))
        elif msg.type == Gst.MessageType.STATE_CHANGED:
            LOG.warn('{} {} BUS {}'.format(label, msg.type, msg.parse_state_changed()))
        elif msg.type == Gst.MessageType.WARNING:
            LOG.warn('{} {} BUS {}'.format(label, msg.type, msg.parse_warning()))
        elif msg.type == Gst.MessageType.ERROR:
            LOG.warn('{} {} BUS {}'.format(label, msg.type, msg.parse_error()))
        else:
            print('What happened!', msg.type)
        return Gst.BusSyncReply.PASS

I do not have a MainLoop thus I am doing it this way. I also am not running it in PLAYING mode and am stepping frame by frame.

If anyone has any idea what is causing me to not catch these errors that would be great. I am able to get all the other errors/warnings/state changes but nothing from libav.
On Wed, 2016-08-03 at 16:53 -0400, Brian Panneton wrote:
Hi,

> I have been receiving these error messages when processing mkv files
> to mp4:
>
> ERROR libav Missing reference picture
> ERROR libav decode_slice_header error
>
> I am attempting to track them down and catch them within my python
> code.
(snip)
> I do not have a MainLoop thus I am doing it this way. I also am not
> running it in PLAYING mode and am stepping frame by frame.
>
> If anyone has any idea what is causing me to not catch these errors
> that would be great. I am able to get all the other
> errors/warnings/state changes but nothing from libav.

These are not really error messages in the GstBus/GstMessage sense, but simply debug log messages. They are not fatal, and depending on the context might be harmless. They tend to happen if you're using a lossy transport and some packets/data were lost, or if you're streaming and the video data doesn't start with a keyframe, for example.

Cheers
 -Tim

--
Tim Müller, Centricular Ltd - http://www.centricular.com
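A side note on the log output itself: since these are debug log messages rather than bus messages, the libav category's threshold can simply be lowered if the lines are just noise. A minimal sketch, assuming the standard gi/PyGObject bindings used elsewhere in this thread; the same effect can be had by setting GST_DEBUG=libav:0 in the environment:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    Gst.debug_set_active(True)
    # Silence only the libav debug category; all other categories keep
    # their current threshold.
    Gst.debug_set_threshold_for_name('libav', Gst.DebugLevel.NONE)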
These keep coming up as well and I can't seem to catch the errors within code. I noticed that when these errors come up my processing seems to hang. If these are not critical errors then it might be due to something else within my pipeline.

    0:00:12.830815243 4277 0x35e9cf0 ERROR libav :0:: get_buffer() failed (-1 2 (nil))
    0:00:12.830839083 4277 0x35e9cf0 ERROR libav :0:: decode_slice_header error
    0:00:12.830853090 4277 0x35e9cf0 ERROR libav :0:: mmco: unref short failure

On Sat, Aug 6, 2016 at 6:16 AM, Tim Müller <[hidden email]> wrote:
On Wed, 2016-08-03 at 16:53 -0400, Brian Panneton wrote:
On Mon, 2016-08-08 at 10:07 -0400, Brian Panneton wrote:
Hi Brian,

> These keep coming up as well and I can't seem to catch the errors
> within code.
>
> 0:00:12.830815243 4277 0x35e9cf0 ERROR libav :0:: get_buffer() failed (-1 2 (nil))
> 0:00:12.830839083 4277 0x35e9cf0 ERROR libav :0:: decode_slice_header error
> 0:00:12.830853090 4277 0x35e9cf0 ERROR libav :0:: mmco: unref short failure
>
> I noticed that when these errors come up my processing seems to hang.
> If these are not critical errors then it might be due to something
> else within my pipeline.

You can only capture those by installing a log handler with gst_debug_add_log_function(), but I really don't recommend it, since it might be expected and harmless in some cases. Again, without context it's hard to comment further. It indicates data corruption that should usually be recoverable.

It might be better to add a watchdog element that posts an error message after a certain time without any buffers flowing.

Not sure why your processing comes to a halt, what does your pipeline look like?

Cheers
 -Tim

--
Tim Müller, Centricular Ltd - http://www.centricular.com
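For reference, installing the log handler Tim mentions looks roughly like the following from Python. This is a sketch only: the callback signature and Gst.DebugMessage.get() are taken from the introspected GStreamer 1.x API, and as Tim notes the messages it catches may well be expected and harmless.

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst
    import logging

    LOG = logging.getLogger(__name__)

    def on_gst_log(category, level, dfile, dfunction, dline, obj, message, user_data):
        # Forward only ERROR-level entries from the libav category.
        if level == Gst.DebugLevel.ERROR and category.get_name() == 'libav':
            LOG.warning('libav debug: %s', message.get())

    Gst.init(None)
    # The second argument is passed through to the callback as user_data.
    Gst.debug_add_log_function(on_gst_log, None)

The watchdog alternative would be the watchdog element from gst-plugins-bad placed in the stream (for example right after the decoder) with its timeout property set; if no buffers flow for that long it posts a normal ERROR message on the bus, which the existing check_bus1 handler would already pick up.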
It is actually a combination of 3 manually connected pipelines. The first two are involved in just stepping through the frames and allowing them to be used outside of the pipeline at two points. The last pipeline takes the output from the first pipeline and converts it into mp4. The first two are in PAUSED mode for trick-mode and the last is in PLAYING mode. I ended up using PLAYING mode because I had trouble closing the MP4 file while in PAUSED mode. They are all connected manually with python code where I am passing the frames between them as needed.

    # NOTE: factory and signal/property names below are inferred from the
    # dict keys and typical usage; '...' marks values omitted here.
    self.elements = {}
    self.elements['filesrc'] = Gst.ElementFactory.make('filesrc')
    self.elements['filesrc'].set_property('location', ...)
    self.elements['matroskademux'] = Gst.ElementFactory.make('matroskademux')
    self.elements['matroskademux'].connect('pad-added', ...)
    self.elements['h264parse'] = Gst.ElementFactory.make('h264parse')
    self.elements['raw1queue'] = Gst.ElementFactory.make('queue')
    self.elements['innersink'] = Gst.ElementFactory.make('appsink')
    self.elements['innersrc'] = Gst.ElementFactory.make('appsrc')
    self.elements['avdec_h264'] = Gst.ElementFactory.make('avdec_h264')
    self.elements['videoconvert'] = Gst.ElementFactory.make('videoconvert')
    self.elements['capsfilter'] = Gst.ElementFactory.make('capsfilter')
    self.elements['capsfilter'].set_property('caps', ...)
    self.elements['raw2queue'] = Gst.ElementFactory.make('queue')
    self.elements['rawsink'] = Gst.ElementFactory.make('appsink')

    self.pipeline_front.add(self.elements['filesrc'])
    self.pipeline_front.add(self.elements['matroskademux'])
    self.pipeline_front.add(self.elements['h264parse'])
    self.pipeline_front.add(self.elements['raw1queue'])
    self.pipeline_front.add(self.elements['innersink'])

    self.pipeline_end.add(self.elements['innersrc'])
    self.pipeline_end.add(self.elements['avdec_h264'])
    self.pipeline_end.add(self.elements['videoconvert'])
    self.pipeline_end.add(self.elements['capsfilter'])
    self.pipeline_end.add(self.elements['raw2queue'])
    self.pipeline_end.add(self.elements['rawsink'])

....

    self.elements = {}
    self.elements['mp4appsrc'] = Gst.ElementFactory.make('appsrc')
    self.elements['mp4appsrc'].set_property(...)
    self.elements['mp4appsrc'].set_property(...)
    self.elements['mp4h264parse'] = Gst.ElementFactory.make('h264parse')
    self.elements['mp4queue'] = Gst.ElementFactory.make('queue')
    self.elements['mp4mux'] = Gst.ElementFactory.make('mp4mux')
    self.elements['mp4mux'].set_property(...)
    self.elements['mp4mqueue'] = Gst.ElementFactory.make('queue')
    self.elements['mp4filesink'] = Gst.ElementFactory.make('filesink')
    self.elements['mp4filesink'].set_property('location', ...)

    self.pipeline.add(self.elements['mp4appsrc'])
    self.pipeline.add(self.elements['mp4h264parse'])
    self.pipeline.add(self.elements['mp4queue'])
    self.pipeline.add(self.elements['mp4mux'])
    self.pipeline.add(self.elements['mp4mqueue'])
    self.pipeline.add(self.elements['mp4filesink'])

Another important note is that I have multiple processes running their own version of the combination of the 3 pipelines on different files. There would be around 16 processes per node, all sharing the same disk. I can't seem to make the errors into a repeatable example, as they seem to occur randomly on random files. It almost seems like a race condition, or perhaps gstreamer is using a shared cache space on the file system where they are all competing with each other.

I just noticed that it is hanging at the end of some of the files (in the third pipeline) while I wait for the EOS. I know when I am done with frames, so I attempt to send the EOS myself like so:

    self.elements['mp4appsrc'].emit('end-of-stream')
    bus = self.pipeline.get_bus()
    #self.check_pipe('endpipe', self.pipeline)
    while True:
        message = bus.pop_filtered(Gst.MessageType.ANY)
        if message is None:
            continue
        if message.type == Gst.MessageType.ERROR:
            LOG.warn('PIPE ERROR {}'.format(message.parse_error()))
        if message and message.type in [Gst.MessageType.EOS, Gst.MessageType.ERROR]:
            break
    self.pipeline.set_state(Gst.State.PAUSED)
    self.pipeline.set_state(Gst.State.NULL)

This is the only way I could get the file to close properly (when it doesn't hang). I would be much happier if I could close this in a better way from the PAUSED state. Any thoughts would be helpful.
Thanks,
Brian

On Mon, Aug 8, 2016 at 10:26 AM, Tim Müller <[hidden email]> wrote:
On Mon, 2016-08-08 at 10:07 -0400, Brian Panneton wrote:
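A possible simplification of the EOS wait in the previous message, offered only as a sketch: GstBus has a blocking timed_pop_filtered() call, so the busy loop (which spins on the CPU whenever no message is pending) can be replaced with a single blocking call, and a finite timeout (30 seconds is an arbitrary choice here) gives a way out when the pipeline hangs instead of delivering EOS.

    self.elements['mp4appsrc'].emit('end-of-stream')

    bus = self.pipeline.get_bus()
    # Block for up to 30 seconds (the timeout is in nanoseconds) waiting for
    # EOS or ERROR; Gst.CLOCK_TIME_NONE would wait forever.
    msg = bus.timed_pop_filtered(30 * Gst.SECOND,
                                 Gst.MessageType.EOS | Gst.MessageType.ERROR)
    if msg is None:
        LOG.warn('Timed out waiting for EOS on the mp4 pipeline')
    elif msg.type == Gst.MessageType.ERROR:
        LOG.warn('PIPE ERROR {}'.format(msg.parse_error()))

    self.pipeline.set_state(Gst.State.NULL)

On closing from PAUSED: mp4mux normally finalizes the file (writes the moov/index data) only once EOS has passed through it, which is likely why the file only closes properly when end-of-stream is sent and the pipeline is allowed to reach EOS before going to NULL.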
Continuing to dig deeper, I have tracked down another location where it is hanging, which might be the culprit. Is there a way to make sure the buffer has been pushed so that I can preroll it on the 2nd pipeline? I'm not sure why, but the pull-preroll occasionally gets stuck. The odd thing is that if I run the file by itself it does not get stuck and continues to process correctly. If I run multiple files in parallel, a few sometimes end up getting stuck here.

    self.pipeline_front.get_state(Gst.CLOCK_TIME_NONE)[0]
    self.elements['innersink'].send_event(Gst.Event.new_step(Gst.Format.BUFFERS, 1, 1, True, False))
    self.inner_sample = self.elements['innersink'].emit('pull-preroll')
    self.inner_buffer = self.inner_sample.get_buffer()

    self.elements['innersrc'].set_property('caps', self.inner_sample.get_caps())
    self.elements['innersrc'].emit('push-buffer', self.inner_buffer)

    self.raw_sample = self.elements['rawsink'].emit('pull-preroll')  # <-------- Hangs on this statement
    self.raw_buffer = self.raw_sample.get_buffer()

    if self.pipeline_front.get_state(Gst.CLOCK_TIME_NONE)[0] != Gst.StateChangeReturn.SUCCESS:
        LOG.warn('Pipeline did not get a success status: {0}'.format(self.pipeline_front.get_state(Gst.CLOCK_TIME_NONE)))

On Mon, Aug 8, 2016 at 11:45 AM, Brian Panneton <[hidden email]> wrote:
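Not a fix for the underlying stall, but one way to avoid blocking forever in pull-preroll is to let the appsink call back into your code instead of pulling synchronously: with the emit-signals property enabled, appsink fires new-preroll/new-sample signals, and pulling inside those callbacks returns immediately. A rough sketch, assuming rawsink is the appsink from the element dictionary posted earlier:

    from gi.repository import Gst

    def on_new_preroll(appsink):
        # Runs on a streaming thread whenever a preroll sample is available,
        # so this pull returns immediately rather than blocking.
        sample = appsink.emit('pull-preroll')
        buf = sample.get_buffer()
        # ... hand the buffer over to the next pipeline here ...
        return Gst.FlowReturn.OK

    rawsink = self.elements['rawsink']
    rawsink.set_property('emit-signals', True)
    rawsink.connect('new-preroll', on_new_preroll)

Whether that fits the frame-by-frame stepping design is a separate question; it mainly turns "hangs forever" into "callback never fires", which at least leaves the controlling code free to time out or tear the pipelines down.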