We have lip-sync issues when we use the Jitsi stack (on meet.jit.si too). The problem occurs on an unstable network connection, when the bandwidth changes. When the connection improves, the audio starts lagging behind the video by 2-3 seconds, and the lag persists for tens of seconds.
webrtc-internals shows that Jitsi uses a separate MediaStream for the audio and video tracks by default. How does track synchronization work in this case?
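For context on what I expect to happen: as far as I understand, a WebRTC receiver normally synchronizes tracks via RTCP Sender Reports, which pair an RTP timestamp with the sender's NTP wall-clock time, provided the streams share an RTCP CNAME. A rough sketch of that mapping (my own illustration of the general RTCP mechanism, not Jitsi code; the function name and values are made up):

```python
def rtp_to_ntp(rtp_ts, sr_rtp_ts, sr_ntp_sec, clock_rate):
    """Map an RTP timestamp to sender NTP time (seconds), using the most
    recent Sender Report (sr_rtp_ts, sr_ntp_sec) seen for that stream."""
    # Signed difference modulo 2^32 to survive RTP timestamp wrap-around.
    diff = (rtp_ts - sr_rtp_ts) & 0xFFFFFFFF
    if diff >= 0x80000000:
        diff -= 0x100000000
    return sr_ntp_sec + diff / clock_rate

# Hypothetical example: audio at 48 kHz, video at 90 kHz, each stream
# with its own Sender Report.
audio_ntp = rtp_to_ntp(rtp_ts=96000, sr_rtp_ts=48000,
                       sr_ntp_sec=100.0, clock_rate=48000)
video_ntp = rtp_to_ntp(rtp_ts=180000, sr_rtp_ts=0,
                       sr_ntp_sec=99.0, clock_rate=90000)

# Both map to the same sender wall-clock time (101.0 s), so these
# frames should be rendered together; any nonzero offset is the
# amount one track must be delayed to stay in sync.
offset = audio_ntp - video_ntp
print(offset)  # 0.0 -> in sync
```

If the JVB rewrites RTP timestamps without updating the SR mapping consistently, I'd expect exactly the kind of constant residual offset we observe.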
I’ve tried enabling LipSyncHack in jicofo (https://github.com/jitsi/jicofo/blob/master/src/main/java/org/jitsi/jicofo/LipSyncHack.java#L34-L62), which merges the audio and video tracks into one MediaStream. It works better: the audio lag is smaller and converges faster, but a small constant lag between audio and video remains after network problems (it looks like a timestamp shift between the tracks).
Do I understand correctly that merging the tracks is the right way to solve the lip-sync issue, but there is a problem with timestamp rewriting on the JVB? Do you have more details about that problem? Can we contribute a fix?