Interesting, but not what I expected, which was something along the lines of...
The project was started by Fabrice Bellard (using the pseudonym "Gérard Lantau") in 2000, and was led by Michael Niedermayer from 2004 until 2015. Some FFmpeg developers were also part of the MPlayer project.
The name of the project is inspired by the MPEG video standards group, together with "FF" for "fast forward". The logo uses a zigzag pattern that shows how MPEG video codecs handle entropy encoding.[0]
And I always thought FF was for fast Fourier transform.[1]
Every now and then, I've tried to do some basic video editing. Nothing super fancy, just clipping out part of a long video, maybe cropping and re-encoding to a lower resolution or adjusting the volume. I've never been able to find a GUI app for that which is free and whose UI I can actually figure out. Last time, I thought I'd try the ffmpeg command line. It actually worked really well and was easier to figure out than the UIs of any of the apps I tried. I just made up a cheat sheet of a dozen or so option combinations, and I get what I need done. So cheers to ffmpeg!
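For illustration, the kind of entries that end up on such a cheat sheet (file names, timestamps and values here are placeholders):

    # Cut a two-minute clip starting at 1:30, without re-encoding (cuts land on keyframes)
    ffmpeg -ss 00:01:30 -t 00:02:00 -i input.mp4 -c copy clip.mp4

    # Re-encode the video down to 720p (-2 keeps the aspect ratio with an even width), copy the audio
    ffmpeg -i input.mp4 -vf scale=-2:720 -c:a copy smaller.mp4

    # Halve the volume; the audio gets re-encoded, the video stream is copied untouched
    ffmpeg -i input.mp4 -c:v copy -filter:a volume=0.5 quieter.mp4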
Avisynth is still great for processing video and doing non-interactive editing. It has built-in filters for the basic operations you might want to do (such as trimming and concatenating video files). It's basically a scripting language for video. I find it handy for processing interlaced video, and there are tons of external filters that can be installed: double-framerate deinterlacers, even motion-compensated frame rate conversion. I've used those kinds of filters to get very nice and extremely watchable smooth 60fps video from interlaced PAL DVDs (I have a personal conversion of the PAL Blackadder DVDs to 60fps and it looks wonderful).
I think sentiments such as "monoliths are bad" often start out well informed, but as time passes and they're handed down between generations of developers, the details of when and why monoliths are bad get lost, until it's just some sort of universal truth that nobody bothers to question.
ffmpeg is one of the hackiest command line tools and libraries out there. The way the library is built has so many leaky abstractions, stemming from the fact that it was ripped out of the command line tool.
Making a replacement with a well designed library and a separate CLI that uses that library would be much better than what the monolith has offered.
Out of curiosity, do you know of any project that deals with so much functionality across so many formats, exposed mainly through a CLI, and isn't hacky and leaky in places?
Most of ffmpeg's functionality is extremely hard to use from a command line; a frequent kind of custom development project for me is "help me enhance/optimise this PHP application using ffmpeg" (they show me something that spits out an ffmpeg command line, two screens' worth of it) => I end up writing a C app that does the same stuff with libav* directly, sometimes ending up smaller than the original PHP code...
GStreamer vs ffmpeg is like microservices vs monolith, with similar problems. Microservices look nice in theory; unfortunately they turn out to be highly complex to manage. GStreamer is way more complex to use than it appears at first glance, and connecting the various filters often fails for reasons that are hard to understand, requiring hours of fiddling to get things working.
While we're here, I'm looking for a decent .NET Core library for wrapping or p/invoking FFMpeg. Preferably the latter, because I'd like to operate on .NET IO streams to be able to fit it into my general media decoding pipeline.
Unfortunately, all the ones I've found so far have their own, mostly-undocumented abstractions, and they only operate on files on disk. The only input parameter they provide is a file path as a string, and I'd rather not have to round-trip a temporary file. It's all through static methods on a static class, so I can't even create an extension method to accept a FileInfo object.
Maybe I just need to use the CLI tool and pipe standard IO.
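The CLI side of that is straightforward; the main gotcha is that ffmpeg can't infer formats from filenames when piping, so they have to be named explicitly. Roughly something like this (the codec choice is just an example and assumes an MP3 encoder is built in; formats that need a seekable output, such as plain MP4, won't stream to a pipe without extra flags like -movflags frag_keyframe+empty_moov):

    # Read whatever arrives on stdin, write MP3 to stdout
    cat input.wav | ffmpeg -hide_banner -i pipe:0 -f mp3 pipe:1 > output.mp3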
Jellyfin also uses it that way as far as I know. I think the reason a lot of projects opt to just invoke the CLI is two-fold: licensing and installation (if dynamic linking).
I recently wrote a TV show intro detection CLI tool in Rust and the FFmpeg library was fairly painless to use (thanks to [1]). Documentation for the library is lacking, though.
I'd recommend subprocess execution if only for being able to monitor progress and abort early. You can't cleanly kill a thread in a process, but killing a whole process is bread and butter to an OS.
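A rough sketch of what that buys you (the 10-minute cap and the -progress destination are arbitrary choices for illustration):

    # Machine-readable progress goes to stdout as key=value lines; the OS kills
    # the process outright if the transcode takes longer than 10 minutes.
    timeout --signal=TERM 600 \
      ffmpeg -hide_banner -nostdin -progress pipe:1 -i in.mp4 -c:v libx264 out.mp4
    echo "ffmpeg exited with status $?"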
In production scenarios you'd want that process to run in an isolated jail. Video formats are very complex and I don't have faith that fuzzers have found every exploit.
This would be for transcoding of input design assets to suitable web-standard output formats in a trusted environment, i.e. not a public user-facing tool, and I'm actually only really using it for audio, but I think your point still makes sense.
Imagine if a user could optionally compile ffmpeg with only the code they will actually need. For example, a user might only use one or a few formats in practice, and only a handful of commands for a limited number of tasks. Configure scripts often allow users to accomplish this to some extent, e.g., excluding libraries we know we'll never use. Another idea is that ffmpeg could output C source code that would assist with compiling a "single purpose" executable, something similar to tcpdump -dd or curl --libcurl. (How many folks actually use those options, I'm not sure, but they still exist.)
It's common to see comments from ffmpeg users who have devised some particular command line that accomplishes a specific task but who otherwise don't routinely use ffmpeg for much else. If these folks were able to compile an application that performed only this task, using only the required ffmpeg libraries, would it be smaller and less complicated than an ffmpeg static binary?
Forgive me if I missed some sort of joke, but you can do exactly that. Don't need the hevc decoder? Disable it. Only need the hevc decoder? Disable everything and enable only it.
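For example, something along these lines for a build that only ever reads H.264/AAC out of MP4 or Matroska and writes MP4 (the component names are illustrative, and a real minimal build usually takes some trial and error to pull in the few parsers and filters the CLI still relies on):

    ./configure --disable-everything \
        --enable-protocol=file \
        --enable-demuxer=mov --enable-demuxer=matroska \
        --enable-decoder=h264 --enable-decoder=aac \
        --enable-encoder=aac \
        --enable-muxer=mp4 \
        --disable-ffplay --disable-ffprobe
    make -j"$(nproc)"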
That is a compile-time option, but if you're not using the system ffmpeg libs then I guess you'd be compiling it anyway.
I feel like asking people to compile it to have fewer options would be a lot more complicated than just having them run it with the options they need, no?
[0] https://en.wikipedia.org/wiki/FFmpeg#History
[1] https://en.wikipedia.org/wiki/Fast_Fourier_transform