I notice that my Linux servers' network cards have a default TX queue length (txqueuelen) of 1000.
You can check your system's txqueuelen by running ifconfig.
This value of 1000 looks a bit low to push the performance limits of modern gigabit Ethernet.
jitsi-meet uses about 125 UDP packets/s for each user
Has anyone experimented with whether the server can handle more users by increasing the TX queue length to, say, 20000?
ifconfig eth0 txqueuelen 20000
I would be happy if someone who is experiencing performance issues with many users could test and report whether the server can handle more users with a larger txqueuelen. A possible side effect of a larger txqueuelen is increased latency.
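As a rough sanity check on that latency side effect, here is a back-of-the-envelope sketch of the worst-case delay a packet sees when it enters a completely full TX queue. The 1500-byte full-size Ethernet frames and the gigabit drain rate are my assumptions, not measurements from this thread:

```python
# Worst-case extra latency added by a full TX queue, assuming full-size
# 1500-byte frames draining at gigabit line rate (both are assumptions).

MTU_BYTES = 1500          # assumed full-size Ethernet frames
LINK_BPS = 1_000_000_000  # gigabit ethernet line rate

def worst_case_queue_delay_ms(txqueuelen):
    """Delay in ms for a packet that joins the back of a full TX queue."""
    bits_queued = txqueuelen * MTU_BYTES * 8
    return bits_queued / LINK_BPS * 1000

for qlen in (1000, 5000, 20000):
    print(qlen, round(worst_case_queue_delay_ms(qlen), 1), "ms")
```

So under these assumptions the default queue of 1000 can add up to about 12 ms, while 20000 could add up to about 240 ms if the queue ever fills. In practice the queue only fills when the link is saturated, but it illustrates why a huge txqueuelen is risky for latency-sensitive media.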
Interesting question. I’d also be curious to see test results. While I don’t fully understand how queueing works in Linux, I’m sceptical it will help, because on our deployments send throughput is not the bottleneck. Increased latency is problematic for an application like jitsi-videobridge, especially if it is not constant (since that can interfere with bandwidth estimation).
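To put some rough numbers on when send throughput could become the bottleneck, here is a sketch of the bridge's aggregate egress. The 125 packets/s per sender comes from this thread; the ~1200-byte average packet size is my assumption, and this naive model ignores simulcast and last-N, which reduce the real numbers considerably. Since the bridge forwards each sender's stream to every other participant, egress grows roughly with n*(n-1):

```python
# Naive sketch of aggregate bridge egress for an n-user conference.
# 125 pps per sender is from the thread; the average packet size is an
# assumption, and simulcast/last-N (which cut real traffic) are ignored.

PPS_PER_SENDER = 125
AVG_PACKET_BYTES = 1200  # assumption, typical for RTP video packets

def egress_pps(n_users):
    """Packets/s leaving the bridge: each sender forwarded to n-1 receivers."""
    return PPS_PER_SENDER * n_users * (n_users - 1)

def egress_mbps(n_users):
    return egress_pps(n_users) * AVG_PACKET_BYTES * 8 / 1e6

for n in (10, 30, 100):
    print(n, egress_pps(n), round(egress_mbps(n)), "Mbit/s")
```

Under these (pessimistic) assumptions a single large conference approaches gigabit line rate somewhere around 30 users, so whether the TX queue matters will depend heavily on how close a given deployment runs to link saturation.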
It appears that txqueuelen is the outgoing kernel buffer,
while netdev_max_backlog is the incoming buffer.
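Both values can be read without root from the standard sysfs/procfs locations on Linux. A minimal sketch (the interface name is just an example; the helpers return None when the files are absent, e.g. on non-Linux systems):

```python
# Read the outgoing per-interface TX queue length and the incoming
# netdev_max_backlog from their standard Linux sysfs/procfs paths.
from pathlib import Path

def read_txqueuelen(iface="eth0"):
    """Outgoing TX queue length for one interface, or None if unavailable."""
    p = Path(f"/sys/class/net/{iface}/tx_queue_len")
    return int(p.read_text()) if p.exists() else None

def read_netdev_max_backlog():
    """Incoming per-CPU backlog limit, or None if unavailable."""
    p = Path("/proc/sys/net/core/netdev_max_backlog")
    return int(p.read_text()) if p.exists() else None

print(read_txqueuelen(), read_netdev_max_backlog())
```

Changing them still requires root, e.g. `ifconfig eth0 txqueuelen 20000` as above (or `sysctl -w net.core.netdev_max_backlog=...` for the incoming side).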
According to this reference: http://www.hep.ucl.ac.uk/~ytl/tcpip/linux/txqueuelen/datatag-tcp/
a txqueuelen of at least 10000 is required to get near-gigabit performance out of the network card (see the first image),
while a txqueuelen above 5000 appears to increase the average RTT by a few milliseconds, which we want to avoid (see the last image).
We need to set up and run similar performance tests!