I’m new to working with WebRTC, and I’m trying to figure out whether I can hack Jitsi to play back audio (and possibly video) after a time delay, rather than as soon as it’s received. This is for a group singing app where we need tight control over latency; we’ve got a working prototype that uses XHR combined with audio worklets (not WebRTC) to do this, but it’d be nice if we could get the robustness of a real videoconferencing app.
Is this the kind of thing Jitsi could be made to do? If so, can someone point me to the place in the code where something like this could be inserted? (I’m imagining something like: instead of piping the incoming stream from the network straight to the <video> element, write it into a buffer, and have a separate stream that reads from the buffer and feeds the <video>. But I could be confused about how this works.)