Hi, I have a simple pipeline that mixes subtitles over a pre-made video:
gst-launch-1.0 \
  filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 \
  filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 \
  videomixer name=mixer sink_0::zorder=2 sink_1::zorder=3 sink_1::ypos=-25 sink_1::alpha=1 \
  ! video/x-raw, height=540 \
My client wants the subtitles to fade in and fade out.
I have spent the past few days reading the GStreamer docs (I'm new to GStreamer and to video processing) and am still not sure how best to implement this.
Each videomixer sink pad has an alpha property that I believe can be modified at runtime. So one option is to implement a control source (see section 16.2 of gstreamer-manual.pdf) that independently parses the SRT file, bind it to the appropriate mixer sink pad, and ramp the pad's alpha up when a subtitle first appears (fade-in) and back down just before the subtitle's end time (fade-out). The length of the fades would be easy to parameterize.
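To make that concrete, here is a rough, untested sketch of what I have in mind (GStreamer 1.x). It builds the same pipeline with gst_parse_launch() so I can grab the mixer pad, then attaches a GstInterpolationControlSource to sink_1's alpha. The autovideosink, the 2 s / 5 s cue times and the 300 ms fade length are placeholders I made up; the real times would come from the parsed SRT:

/* fade-sketch.c: drive the subtitle branch's alpha with a control source.
 * Build (assumption): gcc fade-sketch.c $(pkg-config --cflags --libs \
 *     gstreamer-1.0 gstreamer-controller-1.0) */
#include <gst/gst.h>
#include <gst/controller/gstinterpolationcontrolsource.h>
#include <gst/controller/gsttimedvaluecontrolsource.h>
#include <gst/controller/gstdirectcontrolbinding.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Same pipeline as the gst-launch line above, with a placeholder sink. */
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 "
      "filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 "
      "videomixer name=mixer sink_0::zorder=2 sink_1::zorder=3 "
      "sink_1::ypos=-25 sink_1::alpha=1 ! video/x-raw,height=540 ! autovideosink",
      NULL);

  GstElement *mixer = gst_bin_get_by_name (GST_BIN (pipeline), "mixer");
  GstPad *pad = gst_element_get_static_pad (mixer, "sink_1");

  /* Linear interpolation between timed control points drives the fades. */
  GstControlSource *cs = gst_interpolation_control_source_new ();
  g_object_set (cs, "mode", GST_INTERPOLATION_MODE_LINEAR, NULL);
  gst_object_add_control_binding (GST_OBJECT (pad),
      gst_direct_control_binding_new (GST_OBJECT (pad), "alpha", cs));

  /* Control values are normalized to 0.0-1.0 and mapped onto the property's
   * range, which for alpha is a direct mapping. Placeholder cue: subtitle
   * visible from 2 s to 5 s with a 300 ms fade on each side. */
  GstTimedValueControlSource *tv = GST_TIMED_VALUE_CONTROL_SOURCE (cs);
  gst_timed_value_control_source_set (tv, 0, 0.0);
  gst_timed_value_control_source_set (tv, 2 * GST_SECOND - 300 * GST_MSECOND, 0.0);
  gst_timed_value_control_source_set (tv, 2 * GST_SECOND, 1.0);
  gst_timed_value_control_source_set (tv, 5 * GST_SECOND, 1.0);
  gst_timed_value_control_source_set (tv, 5 * GST_SECOND + 300 * GST_MSECOND, 0.0);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  gst_bus_timed_pop_filtered (gst_element_get_bus (pipeline),
      GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}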
But I am wondering if there might be a better approach. For example, the subparse element already parses the SRT file, so I could clone it into a new plugin that also builds the control source described above, which my app could then fetch and bind to the mixer sink pad.
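Alternatively, the app-side parsing might not need a plugin at all: the SRT timing lines look simple enough that a naive helper like the hypothetical one below (untested) could feed control points straight into the control source from the sketch above. It assumes well-formed "HH:MM:SS,mmm --> HH:MM:SS,mmm" lines, a fixed 300 ms fade, and does not handle back-to-back cues whose fades would overlap:

#include <stdio.h>
#include <gst/gst.h>
#include <gst/controller/gsttimedvaluecontrolsource.h>

#define FADE_LEN (300 * GST_MSECOND)  /* fade duration, easy to parameterize */

/* Scan the SRT file for timing lines and add four control points per cue:
 * 0.0 just before the start, 1.0 at start and end, 0.0 just after the end. */
static void
add_fade_points_from_srt (GstTimedValueControlSource *tv, const gchar *path)
{
  FILE *f = fopen (path, "r");
  gchar line[256];
  guint h1, m1, s1, ms1, h2, m2, s2, ms2;

  if (f == NULL)
    return;

  while (fgets (line, sizeof (line), f) != NULL) {
    if (sscanf (line, "%u:%u:%u,%u --> %u:%u:%u,%u",
            &h1, &m1, &s1, &ms1, &h2, &m2, &s2, &ms2) == 8) {
      GstClockTime start = (h1 * 3600 + m1 * 60 + s1) * GST_SECOND + ms1 * GST_MSECOND;
      GstClockTime end   = (h2 * 3600 + m2 * 60 + s2) * GST_SECOND + ms2 * GST_MSECOND;

      gst_timed_value_control_source_set (tv, start > FADE_LEN ? start - FADE_LEN : 0, 0.0);
      gst_timed_value_control_source_set (tv, start, 1.0);
      gst_timed_value_control_source_set (tv, end, 1.0);
      gst_timed_value_control_source_set (tv, end + FADE_LEN, 0.0);
    }
  }
  fclose (f);
}

I would call this with the GstTimedValueControlSource from the first sketch and the same subtitles.srt path, in place of the hard-coded placeholder times.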
Or I could clone the textrender element and add a fade-in/fade-out effect when the subtitle text changes. This seems like it might be the cleanest and most sensible approach, but I don't have enough experience to know for sure.
Does anyone have an opinion on the pros and cons of the different approaches?
Thanks,