I was just trying out Jitsi Meet with the transcriber in Jigasi, and because of the cost I thought of using an open-source alternative to the Google Speech-to-Text API. Is anyone already working on that? If not, which one do you think is best (Mozilla DeepSpeech, Kaldi, or CMU Sphinx), and how long do you think it would take to implement?
I saw that @Nik_V worked on implementing IBM Watson’s solution. Any words of advice you can give me, Nik?