Linux server, packet TX queuelen, gigabit ethernet optimization

I noticed that my Linux servers' network cards have a default TX queuelen of 1000.
You can check your system's TX queuelen by running:
ifconfig
This value of 1000 looks a bit low to push the performance limits of modern gigabit ethernet. *
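
For reference, the value is also visible in the iproute2 output (assuming your interface is called eth0; adjust the name as needed):

ip link show eth0

The qlen field at the end of the first output line is the txqueuelen.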

jitsi-meet uses about 125 UDP packets/s for each user

Has anyone experimented to see whether the server can handle more users by increasing the TX queue length to, say, 20000?

ifconfig eth0 txqueuelen 20000
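
The iproute2 equivalent, in case ifconfig is not installed (again assuming eth0 as the interface name):

ip link set dev eth0 txqueuelen 20000

Note that neither command survives a reboot; the value is also exposed as /sys/class/net/eth0/tx_queue_len, so it can be reapplied from whatever boot or network configuration mechanism your distribution uses.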

I would be happy if someone who is experiencing performance issues with many users could test and report whether their server can handle more users with a larger txqueuelen. A side effect of a larger txqueuelen may be increased latency.

Reference: * https://wiki.geant.org/display/public/EK/InterfaceQueueLength


Thank you. TX queuelen 20000 can of course be combined with net.core.netdev_max_backlog=100000 and net.core.rmem_max=10485760.
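
In case it saves someone a lookup, those two sysctls (values taken from the post above) can be applied like this, and made persistent via /etc/sysctl.conf or a file under /etc/sysctl.d/ if the test turns out to be worthwhile:

sysctl -w net.core.netdev_max_backlog=100000
sysctl -w net.core.rmem_max=10485760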

Interesting question. I’d also be curious to see test results. While I don’t fully understand how queueing works in Linux, I’m sceptical it will help, because on our deployments the send throughput is not the bottleneck. Increased latency is problematic for an application like jitsi-videobridge, especially if it is not constant (since that can mess with bandwidth estimation).

Regards,
Boris


It appears that txqueuelen is the outgoing kernel buffer,
while netdev_max_backlog is the incoming buffer.
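
A rough way to check which of the two buffers (if either) is actually overflowing on a loaded bridge, again assuming eth0:

ip -s link show eth0        # the "dropped" counter in the TX block covers the outgoing side
cat /proc/net/softnet_stat  # if I read the kernel docs correctly, the second column counts packets dropped because netdev_max_backlog was full

If both drop counters stay at zero under load, raising the limits probably will not change anything.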

According to this reference: http://www.hep.ucl.ac.uk/~ytl/tcpip/linux/txqueuelen/datatag-tcp/
a txqueuelen of at least 10000 is required to get near gigabit performance out of the network card (the first image),
while a txqueuelen above 5000 appears to increase the average RTT by a few ms, which we want to avoid (the last image).

We need to set up and run similar performance tests!
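
A rough sketch of what could be watched while such a test runs, assuming eth0 and some nearby host to ping (both placeholders):

watch -n 1 'tc -s qdisc show dev eth0; ip -s link show eth0'    # queue statistics and drop counters
ping -i 0.2 <nearby-host>                                       # RTT should stay flat as users join

If the RTT climbs while the drop counters stay at zero, the larger queue is only adding latency without helping.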

@Boris_Grozev, @xranby Hi, I’ve set txqueuelen to 20000 on my two JVBs. Which parameters do I need to watch during this test? Thank you!

Kind regards,

Milan