Maybe a stupid question.
How does Jitsi select the video quality during a call?
Does it check bandwidth, CPU or what else?
I’m asking because I ran some benchmarks and couldn’t work out what it’s based on.
Moving computers or smartphones between slow and fast networks, I see no difference.
Using weak or powerful computers, I see no difference.
I see that H264 works better, but some guests have problems with it in their browser or on their computer, so I can’t enforce it.
Also, I can’t set the minimum resolution to 720p, as some weak computers are not powerful enough.
There is a feedback mechanism from the client to the JVB: the client reports the sequence numbers of the packets it has received, so the JVB knows which of the packets it sent were not received. From this it can calculate a percentage of packet loss. A higher percentage of packet loss indicates that the client’s available bandwidth is being exceeded (it can also indicate that the client’s CPU is overloaded, for very low-performance clients).
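The loss arithmetic itself is simple. Here is a minimal sketch, assuming RTCP-style receiver reports (the receiver reports the highest sequence number seen and a cumulative received-packet count; the function name and shape are illustrative, not the actual JVB code):

```python
def loss_percent(base_seq: int, highest_seq: int, packets_received: int) -> float:
    """Percentage of packets lost over a sequence-number range.

    The sender infers how many packets *should* have arrived from the
    sequence-number range; anything missing from the count was lost.
    """
    expected = highest_seq - base_seq + 1
    lost = expected - packets_received
    return 100.0 * lost / expected

# Sequence numbers 1000..1099 sent, 95 packets reported received -> 5% loss
print(loss_percent(1000, 1099, 95))
```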
When packet loss exceeds a threshold, the JVB reduces the resolution/framerate it is sending, and keeps doing so in steps, eventually even stopping video entirely for the least-recently-dominant endpoints, until the packet loss returns to normal levels. Once it reaches a steady state, it slowly tries to increase bandwidth usage again (probing), raising resolution/framerate and bringing back suspended streams, in case the loss was temporary. As long as no further loss is encountered, it moves all the way back to the maximum bandwidth.
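As a toy illustration of that back-off/probe behaviour (not the JVB’s actual algorithm; the real bandwidth estimator is considerably more sophisticated and also reacts to delay, and every constant below is invented for the example):

```python
def adapt_bitrate(current_kbps: float, loss_percent: float) -> float:
    """One step of a loss-driven rate controller: back off on loss, probe up otherwise.

    All constants are invented for illustration; they are not JVB values.
    """
    LOSS_THRESHOLD = 5.0   # % loss considered "congested"
    STEP_DOWN = 0.8        # cut the rate by 20% per step while loss persists
    PROBE_UP = 1.05        # slowly probe 5% upward when the network is clean
    FLOOR_KBPS = 50.0      # below this, video would be suspended entirely
    CEILING_KBPS = 2500.0  # never exceed the configured maximum

    if loss_percent > LOSS_THRESHOLD:
        return max(FLOOR_KBPS, current_kbps * STEP_DOWN)
    return min(CEILING_KBPS, current_kbps * PROBE_UP)
```

Called once per feedback interval, this converges downward quickly under loss and drifts back up slowly when the loss clears, which matches the asymmetry described above.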
The defaults should generally be fine, so it’s no problem that those lines were commented out.
The speed the Internet connection is marketed at has no bearing on its quality for realtime audio/video. Even normal browser-based speed-tests are not very useful, because they mostly measure TCP performance, which is a good indicator of your speed when downloading a file, but can hide levels of packet loss and jitter that would be catastrophic for realtime communication.
You would need to measure the packet loss and jitter when sending a fixed bitrate between the client and server using a tool like iperf in UDP mode.
If you see no difference no matter the client connection you use, then you probably have a bandwidth or CPU constraint at the server side.
This looks fine. No loss, reasonably low jitter. I would have run the test a bit longer to rule out some intermittent connectivity issue but apart from that, it looks fine. You can try increasing the bandwidth in subsequent tests to figure out what your connection is really capable of, but for a small call 5Mbit is plenty.
Next would be to check that your server is not overloaded: run top or similar on the server during a call and look at CPU usage.
I have some doubts about the accuracy of iperf3 when using low volumes of data. I just checked, and on my 1Gbit/s link iperf3 returns 40 Mbit/s for b=50M and 950 Mbit/s for b=5000. That said, I have not found a really good bandwidth test for UDP.
@gpatel-fr What about those numbers are you doubting? You asked iperf3 to send 50Mbit/s and it reported 40; there is some overhead, so a slightly lower number is not surprising (20% lower is quite a lot though, so I guess you had some packet loss).
The metrics you should be looking at are loss and jitter, not the bandwidth values (which will just be approximately whatever you asked it to send, up to the capacity of your link, minus any loss). In UDP mode with a bitrate specified, it’s not a “speed test”, it’s measuring how reliably you can send a fixed throughput.
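For reference, the jitter figure iperf reports in UDP mode is, as I understand it, the smoothed interarrival-jitter estimate defined for RTP in RFC 3550: for each packet, take the change in transit time relative to the previous packet and fold its magnitude into a running average with gain 1/16. A minimal sketch of the update step:

```python
def update_jitter(jitter: float, prev_transit: float, transit: float) -> float:
    """One step of the RFC 3550 interarrival-jitter estimator.

    transit = arrival_time - send_timestamp (any consistent clock units).
    D is the change in transit time between consecutive packets; the
    estimate is smoothed as J += (|D| - J) / 16.
    """
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

# A packet arriving 16ms later than expected nudges a zero estimate to 1ms
print(update_jitter(0.0, 100.0, 116.0))
```

The 1/16 gain means a single delayed packet barely moves the estimate, but sustained variation pushes it up quickly, which is exactly what matters for realtime audio/video.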
Yes, that was it, and that’s what I can’t understand about iperf3: why did I get packet loss when asking for a speed far below the hardware’s capability, and none when asking for far more than the hardware allows?
You are certainly right about iperf3. Other tools claim to measure bandwidth using UDP, and none do it reliably. That’s sad, because most people plagued by firewalls won’t open TCP just to run a test, so UDP is the only option readily available with Jitsi-meet servers.
That’s precisely why testing packet loss and jitter is so important. Many times I’ve seen connections advertised as supporting 1Gbit/s that can transfer something close to that with a TCP bulk transfer, but have unacceptable loss when sending only a few tens of Mbit/s over UDP.
As for your test results though, it might be worth repeating them a few times, or over longer periods; it may have been a transient issue. For example, if you run this test on 2.4GHz Wi-Fi and then turn on a nearby microwave, you will often see packet loss and jitter spike very high while the microwave is running! Or it can be congestion that goes unnoticed on bulk transfers but pushes packet loss high enough to affect realtime UDP. I once had a router in an outdoor cabinet that would spike to 20% packet loss for about 10 minutes every afternoon. It turned out the sun hit the cabinet directly for just those 10 minutes, and the CPU temperature got high enough that it would throttle and start dropping packets. That ruined realtime activities like video calls, but I hardly noticed it with general web usage.
My memory tells me that I tried a few times (well, at least twice), but I tried again this morning and you are right on the money: I got more logical results. Maybe I shouldn’t have been such a miser, and should have forked out for shielded Ethernet after all.