We are using screen share to stream in music from another tab, and have been trying to improve the sound quality that everyone else hears.
I’ve been scouring the forums, and I’m pretty certain we’ve exhausted all the configuration options (noted at the bottom). With these, we’ve improved the audio quality quite a bit, but it’s still not great.
One noticeable area is when the music contains vocals and instruments at the same time. When the vocals start, we usually hear the instruments get a lot quieter, almost muted at times, as if they are being filtered out. This is interesting considering we’ve disabled all the audio processing and noise cancellation.
I’m thinking that the next place to try might be to start playing around with Opus codec settings. Based on what I’ve been reading about Opus, there is a setting within the codec where you can choose ‘voice’ or ‘music’. I would assume that Jitsi sets it to ‘voice’, but I’m wondering if that is the reason the instruments are not as audible when the vocals come on.
So my question is, how would I go about changing the Opus codec settings? Specifically, we would want to set it to ‘music’ for one user and keep it as ‘voice’ for everyone else.
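For context on what’s actually tweakable from a browser: as far as I can tell, the Opus voice/music ‘application’ mode is an encoder-API setting that browsers don’t expose directly, while the levers JavaScript does have are the Opus fmtp parameters negotiated in the SDP (such as `stereo` and `maxaveragebitrate`), which is presumably what Jitsi’s audioQuality options adjust under the hood. A rough sketch of that kind of SDP munging (the function name and layout are illustrative, not Jitsi code):

```javascript
// Sketch: force Opus fmtp parameters by munging the SDP before it is applied.
// Assumes the usual SDP layout: an "a=rtpmap:<pt> opus/48000/2" line, possibly
// followed by an "a=fmtp:<pt> ..." line for the same payload type.
function setOpusParams(sdp, params) {
  const match = sdp.match(/a=rtpmap:(\d+) opus\/48000/i);
  if (!match) return sdp; // Opus not offered; leave the SDP untouched
  const pt = match[1];
  const extra = Object.entries(params)
    .map(([k, v]) => `${k}=${v}`)
    .join(';');
  const fmtpRe = new RegExp(`a=fmtp:${pt} (.*)`);
  if (fmtpRe.test(sdp)) {
    // Append to the existing fmtp line for the Opus payload type
    return sdp.replace(fmtpRe, (line, existing) => `a=fmtp:${pt} ${existing};${extra}`);
  }
  // No fmtp line yet: add one right after the rtpmap line
  return sdp.replace(
    `a=rtpmap:${pt} opus/48000/2`,
    `a=rtpmap:${pt} opus/48000/2\r\na=fmtp:${pt} ${extra}`
  );
}
```

In a real call you would run something like `setOpusParams(answer.sdp, { stereo: 1, maxaveragebitrate: 510000 })` before `setLocalDescription`, but doing this by hand only makes sense if the built-in config options turn out not to cover it.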
For reference, here are the configs we are passing in the URL:
There is a new audio sharing option you can try on beta; we will soon release it on meet.jit.si
Can you say a bit more about it? Will it modify things over and above the configs I listed?
Not directly related but you may want to try Jitas
Likely just a typo, but the flag should be
Please note that some of the audio related options (such as the max average bitrate and stereo) don’t work in p2p.
Thanks, that looks super promising!!
We’ll definitely want a solution that doesn’t require any installation, but ultimately we may also use this in parallel to offer an even better experience if the user is willing to download and install an app. Thanks for pointing it out!
Definitely a typo, thanks for catching it!
After fixing it, though, I’m not noticing a discernible difference. Maybe the Opus codec isn’t actually being used, in which case this config wouldn’t do anything?
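One way to sanity-check that is to look at which codec was actually negotiated. In Chrome, chrome://webrtc-internals shows the codec per stream, and you can also inspect the remote SDP from the console. A tiny helper along those lines (the function name is mine, not a Jitsi API):

```javascript
// Sketch: check whether Opus was actually negotiated by inspecting an SDP
// string, e.g. pc.remoteDescription.sdp in the browser console.
function negotiatedOpusPayloadType(sdp) {
  const m = sdp.match(/a=rtpmap:(\d+) opus\/48000(?:\/2)?/i);
  return m ? Number(m[1]) : null; // null means Opus is not in the SDP at all
}
```

If this returns null, the Opus-specific config options indeed wouldn’t do anything.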
Fantastic - thank you!!
Will check this out and test it in beta, compare the sound quality, and report back
Most probably that will be on meet.jit.si by tomorrow…
Okay, here are my testing results
- Two laptops
- Laptop A is sending the stream via either regular screen share (and checking ‘share audio’ box) or the new beta ‘share audio’ feature
- Laptop B is receiving the audio stream
- Microphones are muted on both laptops to eliminate extraneous noise (for Laptop A I needed to keep the mic on to make the audio stream work, so I switched the input device to ManyCam, which wasn’t running, effectively muting the mic input)
- Laptop B is recording the audio streams using Audacity recording directly from the sound card (more details here https://www.howtogeek.com/217348/how-to-record-the-sound-coming-from-your-pc-even-without-stereo-mix/)
- All recordings use the exact same track from the same Soundcloud URL, and the same time section of that track (for reference, this is the track SaQi - Temples In The Sky Ft. Mel Semé by Jumpsuit Records | Free Listening on SoundCloud)
- Generated a total of five recordings:
- Baseline: this is without Jitsi, just recording the track on Laptop B directly from Soundcloud. In an ideal world, the other recordings would match this recording in quality.
- Our product: this is a forked version of Jitsi from mid-January of this year, using regular screen share plus the config settings I outlined above
- Jitsi production: using regular screen share on https://meet.jit.si/
- Jitsi beta, regular screen share: using regular screen share on https://beta.meet.jit.si/
- Jitsi beta, new ‘share audio’ feature: using the new ‘share audio’ option on https://beta.meet.jit.si/
All five recordings can be found here if you’d like to give them a listen to see what I mean and agree / disagree with my takeaways:
- The Baseline recording sounds considerably better than the other four recordings (there remains a sizeable gap)
- Although they don’t match the Baseline, all three tests on Jitsi sound considerably better than our product
- All three tests on Jitsi sound very similar to each other, hard to discern the difference. In other words, the ‘share audio’ feature doesn’t seem to improve the quality of the audio stream under these conditions versus just using the regular screen share feature. One benefit it does bring is that it only sends audio and not also video from the screen share, which may be desirable. There may be other quality benefits to it under different conditions which I did not test for (if others are talking during the stream, for example).
For #1, the gap in quality may never be fully closed as long as we are screen sharing from a browser. For example, I ran across a presentation the other day about streaming music over WebRTC, which notes that if you are streaming from a browser you are going to lose quality: FOSDEM 2021 - Can WebRTC help musicians?
That being said, it may be possible to still close some of the gap. Exploring the Opus codec setting of voice vs music might be a good next step (which takes me to my original question). Is this something you guys have looked into?
For takeaway #2, this was surprising to me, as we did already optimize all our configs (already fixed the typo that was pointed out earlier, tests were after that fix). Either we are missing some config, or the audio quality on Jitsi improved since mid-January and we should do a code merge to get the benefit of it. Are you guys aware of any changes in the past 3-4 months which may have improved the quality further?
For #3, it is an interesting finding, and maybe some of the benefits the ‘share audio’ feature brings only really come to life when there is background noise, people talking, and echo to be canceled. Since I had both microphones muted, that may be why the results came in the same or very similar to the regular screen share. Further testing may be required here; I’m just limited in how well I can simulate people talking when I have both laptops sitting next to each other.
If anyone is curious for a visual version of the five tests, here are the sound wave profiles from Audacity:
Jitsi beta, regular screen share:
Jitsi beta, new ‘share audio’ feature:
Did you make sure to test in a non-p2p setting, i.e., media streams are routed via the bridge?
The actual audio bitrate can be estimated by looking at the audio streams in chrome://webrtc-internals. Do you see an increased bitrate there after setting the max average bitrate?
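If you’d rather script it than eyeball webrtc-internals, the same number can be computed from two successive `getStats()` snapshots of the inbound audio stream. A sketch, assuming the standard WebRTC stats fields (`bytesReceived` in bytes, `timestamp` in milliseconds):

```javascript
// Sketch: estimate received audio bitrate (kbps) from two successive
// getStats() snapshots of the same "inbound-rtp" audio stats entry.
function audioBitrateKbps(prevStat, currStat) {
  const bytes = currStat.bytesReceived - prevStat.bytesReceived;
  const seconds = (currStat.timestamp - prevStat.timestamp) / 1000;
  if (seconds <= 0) return 0;
  return (bytes * 8) / seconds / 1000; // bits per second -> kilobits per second
}
```

In practice you would poll `pc.getStats()` about once a second, filter for entries with `type === 'inbound-rtp'` and `kind === 'audio'`, and feed consecutive snapshots in; a stream running at the 510000 bps cap should show up as roughly 510 kbps.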
@Shmoop that functionality is on meet.jit.si already. I got to talk with Mihai, who added it, and you still need some URL params for it. Maybe you can redo your testing by adding this to the URL:
#config.audioQuality.opusMaxAverageBitrate=510000&config.audioQuality.stereo=true for the participant that will be audio sharing.
We may add those by default in the future so that no URL params will be needed; we were also discussing more options for extending that functionality.
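For anyone assembling those overrides programmatically, the hash fragment is just dot-separated config paths joined with `&`. A small illustrative helper (my own, not part of the Jitsi API) that builds it from a plain object:

```javascript
// Sketch: build a Jitsi URL hash fragment from a plain config object.
// Nested objects become dot-separated key paths, matching the overrides
// quoted above (config.audioQuality.stereo=true, etc.).
function buildConfigHash(config, prefix = 'config') {
  const parts = [];
  for (const [key, value] of Object.entries(config)) {
    const path = `${prefix}.${key}`;
    if (value !== null && typeof value === 'object') {
      parts.push(buildConfigHash(value, path)); // recurse into nested sections
    } else {
      parts.push(`${path}=${value}`);
    }
  }
  return parts.join('&');
}
```

For example, `'#' + buildConfigHash({ audioQuality: { opusMaxAverageBitrate: 510000, stereo: true }, p2p: { enabled: false } })` reproduces the full override string from this thread.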
Got it. Will re-test shortly with those configs in the URL, as well as P2P disabled
Okay, updated test results
- Same setup as listed previously
- Ran two additional tests:
- Jitsi production (5-7-21), using regular screen share with the ‘share audio’ checkbox
- Jitsi production, using new ‘share audio’ feature
- In both tests, added the following to the URL config: #config.audioQuality.opusMaxAverageBitrate=510000&config.audioQuality.stereo=true&config.p2p.enabled=false
The two new recordings, along with the Baseline recording, can be found in this folder: Testing on 5-7-21 - Google Drive
- The two new tests sound pretty identical to each other (to me at least) and have identical-looking wave patterns (see appendix)
- Adding the configs to the URL brought a significant improvement in the audio quality, and both tests now sound much closer to the Baseline. There is still a gap, but it is much smaller. If I had to grade it, I’d say it’s capturing and relaying about 95%+ of the original sound. If you look closely at the wave patterns (appendix) between the two tests and the Baseline, you can see that in certain areas the Baseline waves are a bit ‘fuller’, so there is still a little bit of loss, and that difference is also discernible to the ear if you listen closely.
That being said, this is a huge improvement over anything I’ve tested previously. Really outstanding work!
It seems that the next step for us is to merge the latest Jitsi code into our product
Sound wave profiles from Audacity:
Jitsi production, regular screen share:
Jitsi production, new ‘share audio’ feature: