I'm a university professor who uses whisper.cpp for video lecture transcriptions, so I'll chime in here.
The thing about whisper.cpp compared to pretty well every other option is that whisper.cpp is really, really good.
Like the accuracy is close to 100% (and that's just on the 'medium' model; the 'large' model is probably even better).
There is only one problem with Whisper that I've found: if you use a low-quantization model (I believe I'm using a 4-bit quantized model), Whisper can get stuck in a "no punctuation mode". If that happens your transcription will suddenly start to look like this there will be no punctuation or capitalization it's quite annoying once it gets into this mode it can't get back out again
The way to get around that is to segment your audio.
I use ffmpeg's silence detector to segment the audio wherever there's a pause longer than one second, so I don't accidentally cut in the middle of a sentence or a word.
Break the audio into roughly 10-minute segments and no-punctuation mode shouldn't appear.
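In case it helps anyone, here's roughly what that segmentation logic looks like in Python. The log format is what ffmpeg's `silencedetect` filter prints to stderr (e.g. from `ffmpeg -i lecture.wav -af silencedetect=noise=-30dB:d=1 -f null -`); the noise threshold and the 10-minute target are just my settings, tune to taste:

```python
import re

def silence_points(ffmpeg_log):
    """Extract the midpoint of each detected silence from the stderr
    of ffmpeg's silencedetect filter. Cutting at the midpoint keeps
    the cut well clear of any speech."""
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", ffmpeg_log)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", ffmpeg_log)]
    return [(s + e) / 2 for s, e in zip(starts, ends)]

def split_points(silences, target=600.0):
    """Pick one cut per ~`target` seconds of audio: the first silence
    after each target mark, so no segment is cut mid-sentence."""
    cuts, next_cut = [], target
    for t in silences:
        if t >= next_cut:
            cuts.append(t)
            next_cut = t + target
    return cuts
```

The actual cutting is then just ffmpeg again with `-ss`/`-to` and `-c copy` between consecutive cut points, which is fast since nothing gets re-encoded.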
The other nice thing about Whisper is that it tags each fragment with a confidence level and start and end times.
I use the confidence level so I can quickly jump to low-confidence points in the transcription and check for mistakes (though there usually aren't any).
I use the start and end times to automatically generate an .srt subtitle file.
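The .srt generation is simple enough to sketch. I'm assuming here that each segment has been pulled out of Whisper's output as a `(start_sec, end_sec, text, confidence)` tuple; the exact field names in whisper.cpp's JSON output differ, and newer builds can also write .srt directly, so treat this as illustrative:

```python
def srt_time(t):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(t * 1000)
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Build an .srt file body: numbered blocks, each with a
    timestamp range line and the subtitle text."""
    blocks = []
    for i, (start, end, text, _conf) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text.strip()}\n")
    return "\n".join(blocks)

def review_points(segments, threshold=0.5):
    """Timestamps worth jumping to when proofreading: the start of
    every segment whose confidence falls below `threshold`."""
    return [srt_time(s) for s, _e, _text, conf in segments if conf < threshold]
```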
Then I use ffmpeg to bake in hardsubs for the students.
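The hardsub step is one ffmpeg invocation using its `subtitles` filter (which needs an ffmpeg built with libass). The file names below are placeholders; I'm returning the command list rather than running it so you can inspect it first:

```python
def burn_subs(video, srt, out):
    """Build the ffmpeg command that burns the .srt into the video
    frames and copies the audio through unchanged. Pass the result
    to subprocess.run(cmd, check=True) to execute. Note: paths with
    special characters need escaping inside the subtitles filter."""
    return ["ffmpeg", "-y", "-i", video,
            "-vf", f"subtitles={srt}",
            "-c:a", "copy", out]
```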
So far it's been working very smoothly and quickly.
Even on my crappy old GTX 1060, I can get subtitles at about 2-3x real time.
And with almost no manual intervention.
From what I've heard, DeepSpeech is competitive for English, but I've never used it myself. Whisper has much more community support, so it's probably easier to work with overall.