Hi!
I am working on GStreamer with the Python bindings. My goal is to save an output file containing only the portion of the stream between a given start and stop time. My pipeline looks something like this:

```
uridecodebin -> nvstreammux -> nvinfer -> nvvideoconvert -> nvosd -> capsfilter -> avenc_mpeg4 -> mpeg4videoparse -> qtmux -> filesink
```

To restrict playback to that interval, I am writing a seek event and sending it to the pipeline:

```python
seek_event = Gst.Event.new_seek(1.0, Gst.Format.TIME,
                                Gst.SeekFlags.FLUSH | Gst.SeekFlags.SEGMENT,
                                Gst.SeekType.SET, 20 * Gst.SECOND,
                                Gst.SeekType.NONE, 30 * Gst.SECOND)
pipeline.send_event(seek_event)
```

However, this does not restrict the saved file to the required time interval. I then sent the event to the src pad of the source bin created by uridecodebin; that restricts the current position to the given end time, but the pipeline does not move any further. If I then force the pipeline to the NULL state, the output file gets corrupted.

How can I quit the pipeline without corrupting the output file? Are there any best practices I need to follow? I am new to GStreamer, so any help will be highly appreciated.

Thank you in advance,
k0342
I'm pretty sure that for the file to close properly (and be playable afterwards) you need an EOS (end of stream).
And I think you can get that by removing the SEGMENT flag from the seek: https://gstreamer.freedesktop.org/documentation/additional/design/seeking.html?gi-language=c

> Non segment seeking will make the pipeline emit EOS when the configured segment has been played.

> Segment seeking (using the GST_SEEK_FLAG_SEGMENT) will not emit an EOS at the end of the playback segment but will post a SEGMENT_DONE message on the bus.
Thank you so much Marianna for replying.
So, I omitted Gst.SeekFlags.SEGMENT from the seek event and sent it to the pipeline, but it saves the whole 51 seconds of the input video, which I do not want. If I send it to the src pad of the source bin instead, like this:

```python
seek_event = Gst.Event.new_seek(1.0, Gst.Format.TIME,
                                Gst.SeekFlags.FLUSH,
                                Gst.SeekType.SET, 25 * Gst.SECOND,
                                Gst.SeekType.SET, 30 * Gst.SECOND)
srcpad = source_bin.get_static_pad("src")
srcpad.send_event(seek_event)
```

it blocks the source pad of uridecodebin, which is fine to some extent, but the pipeline does not proceed any further: the current position is stuck at 30 seconds. My code looks like this:

```python
if playing == True:
    print("Pipeline Already Playing")
    return
else:
    print("Reached HERE")
    ret = pipeline_rgb.set_state(Gst.State.PLAYING)
    playing = True
    seek_enabled = True
    duration = Gst.CLOCK_TIME_NONE
    if ret == Gst.StateChangeReturn.FAILURE:
        print("ERROR: Unable to set the pipeline to the playing state")
        sys.exit(1)
i = 0
terminate = False   # initialized here; missing in my first draft
seek_done = False
try:
    bus = pipeline_rgb.get_bus()
    while True:
        msg = bus.timed_pop_filtered(
            100 * Gst.MSECOND,
            (Gst.MessageType.STATE_CHANGED | Gst.MessageType.ERROR |
             Gst.MessageType.EOS | Gst.MessageType.ASYNC_DONE |
             Gst.MessageType.DURATION_CHANGED | Gst.MessageType.SEGMENT_DONE))
        if i > 150:
            print("here")
            print(msg)
        if msg:
            t = msg.type
            if t == Gst.MessageType.ERROR:
                err, dbg = msg.parse_error()
                print("ERROR:", msg.src.get_name(), ":", err)
                if dbg:
                    print("Debug info:", dbg)
                terminate = True
            elif t == Gst.MessageType.EOS:
                print("End-Of-Stream reached")
                terminate = True
            elif t == Gst.MessageType.STATE_CHANGED:
                old_state, new_state, pending_state = msg.parse_state_changed()
                if msg.src == sink:
                    print("Pipeline state changed from '{0:s}' to '{1:s}'".format(
                        Gst.Element.state_get_name(old_state),
                        Gst.Element.state_get_name(new_state)))
                    # remember whether we are in the playing state or not
                    playing = new_state == Gst.State.PLAYING
                    if playing:
                        # we just moved to the playing state
                        query = Gst.Query.new_seeking(Gst.Format.TIME)
                        if pipeline_rgb.query(query):
                            fmt, seek_enabled, start, end = query.parse_seeking()
                            if seek_enabled:
                                print("Seeking is ENABLED (from {0} to {1})".format(
                                    format_ns(start), format_ns(end)))
                            else:
                                print("Seeking is DISABLED for this stream")
                        else:
                            print("ERROR: Seeking query failed")
        else:
            # no message within the timeout; pipeline still playing
            if playing == True:
                current = -1  # initialize current
                # query the current position of the stream
                ret, current = pipeline_rgb.query_position(Gst.Format.TIME)
                if not ret:
                    print("ERROR: Could not query current position")
                # if we don't know it yet, query the stream duration
                if duration == Gst.CLOCK_TIME_NONE:
                    (ret, duration) = pipeline_rgb.query_duration(Gst.Format.TIME)
                    if not ret:
                        print("ERROR: Could not query stream duration")
                # print current position and total duration
                print("position:::{0} / {1} / seek enabled::: {2} / seek done::: {3} / counter::: {4}".format(
                    format_ns(current), format_ns(duration), seek_enabled, seek_done, i + 1))
                i += 1
                if seek_enabled and not seek_done and current > 25 * Gst.SECOND:
                    print("Reached 25s, performing seek...")
                    seek_event = Gst.Event.new_seek(1.0, Gst.Format.TIME,
                                                    Gst.SeekFlags.FLUSH,
                                                    Gst.SeekType.SET, 25 * Gst.SECOND,
                                                    Gst.SeekType.SET, 30 * Gst.SECOND)
                    srcpad.send_event(seek_event)
                    seek_done = True
        if terminate:
            break
finally:
    pipeline_rgb.set_state(Gst.State.NULL)
```

Could you please suggest where I am going wrong? Also, if I am reading from an RTSP live source, is there an alternative way to achieve the same thing, since we will never encounter EOS? In that case, I would like to restart my pipeline after receiving a trigger, save the output for, say, 10 seconds into an mp4 file, and then close it.
Hello,
I might not know what's currently wrong with your pipeline, but I may be able to assist with on-demand recording of a specific interval from a live feed. I asked this question myself on Stack Overflow: https://stackoverflow.com/questions/68066985/how-to-write-specific-time-interval-of-gstsamples-rtp-over-udp-h264-packets After some research I was able to piece a solution together (I even answered it myself).

Pretty much as the link explains, the basic idea is to use an appsink to store the data frames (preferably encoded, and with a maximum capacity of your choice) in the application. You might want to attach some metadata of your own to mark each frame's timestamp (just like the struct mentioned in the C code). That is your first pipeline, with its own live source, meant to be running forever (or for as long as you want the recording)!

Then, whenever you request a specific interval, you need to grab the GstSamples/GstBuffers that correspond to your interval. But make sure that the very first frame you pick is a keyframe, using the method mentioned in the link. Once you have the samples/buffers right, spin up a new pipeline and feed those samples in through an appsrc; once you run out of samples, let the appsrc send an EOS. That is your second, on-demand pipeline. Hope this helps!

Also, I am curious: what caps did you use on your capsfilter? I know that nvstreammux batches multiple streams, but how can you demux them afterwards, say, if you want stream 0 or 1, etc.? Using nvstreamdemux, maybe?
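The buffering-and-extraction idea above can be sketched independently of GStreamer as a ring buffer of timestamped encoded frames. This is a hypothetical helper (class name and tuple layout are my own): the frames would come from appsink's `new-sample` callback, and the extracted list would be pushed into the appsrc of the second pipeline:

```python
from collections import deque

class FrameRingBuffer:
    """Keep the most recent encoded frames, as pulled from an appsink."""

    def __init__(self, max_frames):
        # each entry: (pts_ns, is_keyframe, encoded_bytes); oldest drops off
        self.frames = deque(maxlen=max_frames)

    def push(self, pts_ns, is_keyframe, data):
        self.frames.append((pts_ns, is_keyframe, data))

    def extract(self, start_ns, stop_ns):
        """Frames in [start_ns, stop_ns], rewound so the clip starts on a keyframe."""
        frames = list(self.frames)
        in_range = [i for i, (pts, _, _) in enumerate(frames)
                    if start_ns <= pts <= stop_ns]
        if not in_range:
            return []
        first = in_range[0]
        # a decoder needs a keyframe first; walk back to the nearest one
        while first > 0 and not frames[first][1]:
            first -= 1
        return frames[first:in_range[-1] + 1]
```

The deque's `maxlen` plays the role of the appsink queue's maximum capacity, and the rewind-to-keyframe step mirrors the keyframe check described in the linked answer.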