Non-linear CPU usage

Hi all,

I tested my Jitsi infrastructure with 4 CPUs and noticed a non-linear development of CPU usage.
I used Jitsi Meet Torture with eight users per conference, each sending video and audio, and scaled up to 15 conferences.


Does anyone have an idea how to explain the diagram?
I would be very grateful for help.

Could be a load reducer kicking in if the bridge was over its configured stress threshold, or it could just be the server hitting capacity at around 75–80% CPU. (Are you graphing JVB’s CPU usage or the system CPU usage?)

@jbg Thanks for your reply! The graph shows the system CPU usage. My monitoring shows packet rates hitting 100,000 packets per second. Where can I find the configured value in a JVB Docker container? There is no such value configured in jvb.conf.

I have already repeated the tests with 8 CPUs. They show that the problem is not the server hitting its capacity.


There is no such value configured in jvb.conf.

Then it uses the default, which I linked to. Note that the load reducer is not enabled by default.
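For reference, the load-management block in JVB’s reference.conf looks roughly like the sketch below. The exact paths and default thresholds are quoted from memory and may differ between versions, so check the reference.conf shipped with your build:

```hocon
videobridge {
  load-management {
    # Disabled by default: the bridge measures load but takes no action.
    reducer-enabled = false

    load-measurements {
      packet-rate {
        # Approximate packet-per-second rates at which the bridge is
        # considered stressed / recovered (version-dependent defaults).
        load-threshold = 50000
        recovery-threshold = 40000
      }
    }
  }
}
```

Anything not set in your own jvb.conf falls back to these bundled defaults.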

I have already repeated the tests with 8 CPUs. They show that the problem is not the server hitting its capacity.

It may be some serialization, either from locking or explicit serialization in the JVB code, or from GC, causing contention at higher packet rates. You could try running two JVBs on the same system and see whether you get better utilisation figures. We’ve generally found that on anything larger than 8C/16T we need to run multiple JVBs (in VMs or containers) to make efficient use of the system resources.
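To make the contention point concrete, here is a hypothetical, self-contained micro-benchmark (not JVB code; all class and method names are invented for illustration). It contrasts every thread hammering one shared counter with one counter per thread, loosely analogous to one bridge process versus several independent ones:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ContentionSketch {

    // All threads CAS on a single AtomicLong: the cache line ping-pongs
    // between cores, like a single shared lock/queue in a packet path.
    static long contended(int threads, int opsPerThread) throws InterruptedException {
        AtomicLong shared = new AtomicLong();
        runAll(threads, () -> {
            for (int i = 0; i < opsPerThread; i++) shared.incrementAndGet();
        });
        return shared.get();
    }

    // Each thread owns its own counter (like one JVB instance per core
    // group); the totals are only combined at the end.
    static long sharded(int threads, int opsPerThread) throws InterruptedException {
        AtomicLong[] counters = new AtomicLong[threads];
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            AtomicLong own = counters[t] = new AtomicLong();
            workers[t] = new Thread(() -> {
                for (int i = 0; i < opsPerThread; i++) own.incrementAndGet();
            });
        }
        for (Thread w : workers) w.start();
        for (Thread w : workers) w.join();
        long total = 0;
        for (AtomicLong c : counters) total += c.get();
        return total;
    }

    private static void runAll(int threads, Runnable task) throws InterruptedException {
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) workers[t] = new Thread(task);
        for (Thread w : workers) w.start();
        for (Thread w : workers) w.join();
    }

    public static void main(String[] args) throws InterruptedException {
        // Both totals are exactly 800000; the interesting difference in a
        // real run is wall-clock time, which grows for the contended case
        // as you add threads while the sharded case scales almost linearly.
        System.out.println("contended total: " + contended(8, 100_000));
        System.out.println("sharded total:   " + sharded(8, 100_000));
    }
}
```

Timing the two methods on your own hardware would give you a feel for how much a single contended structure can cost, independent of anything JVB-specific.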

I remember reading somewhere (I did not bookmark it, so I can’t provide a link) that recent versions of Java are more efficient. Did you try a very recent Java version? I don’t know whether those versions work with Jitsi at scale, but on my test system JDK 16 can run a basic conference.

JVB works fine on Java 15 & Java 16 under load (we use 15 in production and 16 in test), but a single instance still struggles to make use of more than 8C/16T.

Using G1GC or Shenandoah helps, but I think JVB itself must have enough synchronisation in the packet path that you still get diminishing returns when adding cores. Running multiple instances solves that.
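The diminishing returns from a serialized fraction of the packet path can be sketched with Amdahl’s law. The 10% serial fraction below is purely an assumption for illustration, not a measured JVB figure:

```java
public class AmdahlSketch {

    // Ideal speedup on n cores when a fraction `serial` of the work
    // cannot be parallelized: 1 / (serial + (1 - serial) / n).
    static double speedup(double serial, int cores) {
        return 1.0 / (serial + (1.0 - serial) / cores);
    }

    public static void main(String[] args) {
        double serial = 0.10; // assumed serialized fraction of the packet path
        for (int cores : new int[] {1, 2, 4, 8, 16, 32}) {
            System.out.printf("%2d cores -> %.2fx speedup%n",
                    cores, speedup(serial, cores));
        }
        // With 10% serial work, 8 cores give ~4.7x, 16 cores only ~6.4x and
        // 32 cores ~7.8x: doubling cores past 8 buys little, whereas two
        // independent instances scale almost linearly with each other.
    }
}
```

This also matches the observation above that two JVBs on one machine can out-utilise a single instance even on identical hardware.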

Thanks to all for your help!

@jbg So if the load reducer is enabled, there must be “reducer-enabled = true” in jvb.conf?

Yes, or the equivalent old-style setting (if there is one)

@jbg Do you possibly have a link or some reference showing that there are problems with one JVB and eight cores?

What kind of link? It’s just our observation from running quite a few JVBs :slight_smile:

I wouldn’t call them problems; it’s just something to be aware of, and part of the reason why Octo exists. Most software requires horizontal scaling beyond a certain point; you can’t expect to always be able to throw more CPU at a single instance.

@jbg Thanks for the explanation! The background is that I need this information for my bachelor thesis, and I currently cannot test two JVBs myself. So the idea was to work with a reference :slight_smile:

There are quite a few benchmarks floating around on these forums, which, along with the data you have collected yourself, should show what is possible with a single JVB on various hardware.