Jitsi JVB 2 performance testing 2020

I have tested Jitsi JVB 2 performance today.
The test ran only the jitsi/jvb:latest Docker image on Kubernetes, using this Jitsi Kubernetes deployment (background load ~3% CPU).

Here are the results:

1. Test case 1:

2. Test case 2: Using 2 JVB instances (2 dedicated CPUs, 4 GB RAM each, on DigitalOcean). All users send video and audio to the same room.

  • Option: channelLastN: 10.
  • OCTO configured with the SplitBridgeSelectionStrategy in the Jicofo settings, to distribute load across both JVBs.
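
For reference, channelLastN is set in the jitsi-meet config.js (the path and surrounding options depend on your deployment; this is only a sketch of the relevant line, not the full file):

```javascript
// jitsi-meet web config.js (fragment, sketch only)
// Each client receives video from at most the 10 most recently active speakers;
// video from other participants is dropped, capping per-user downstream load.
channelLastN: 10,
```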


  • JVB usage depends linearly on the number of CPUs.
  • The load balancer splits load between the JVBs well; adding more JVBs reduces load and allows bigger rooms!
  • The conference went well, without any user disconnecting.
  • As a rough estimate, 1 CPU can handle 15 users, 2 can handle 30, 4 can handle 60-70, etc…
  • The main Web, Jicofo, and Prosody components don't carry much load (without chat).
    With region support and JVB load balancing, Jitsi could host a 1000-user room with 16 JVB instances (4 CPU, 8 GB RAM each) if everyone turns on camera and audio, or with much less if only the presenter uses a camera, I think!
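That estimate can be sanity-checked with simple arithmetic, using the ~15 users per CPU figure observed in this test (an observation from these measurements, not an official Jitsi number):

```shell
# Rough capacity estimate: users ~= users_per_cpu * instances * cpus_per_instance
USERS_PER_CPU=15     # observed in this test, not an official figure
JVB_INSTANCES=16
CPUS_PER_JVB=4
echo $(( USERS_PER_CPU * JVB_INSTANCES * CPUS_PER_JVB ))   # prints 960, roughly 1000
```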
    Thanks for these great tools!

Thanks for this. This seems reasonable: at some point we stopped worrying as much about maximizing throughput on a single machine, since we scale over Octo instead. Usually when things slow down, we take the opportunity to do some profiling and find/fix inefficiencies that have crept in.


Hi @congthang, can you please write the exact steps to reproduce your measurements? I'll try to run that on our setup (13 JVBs) to see how far it can really go.

Thank you,



  1. Use this repo, Jitsi k8s deploy, to deploy Jitsi Meet with scalable JVBs.

  2. Replace this setting


by this


This spreads the load across all JVBs instead of sending it to a single one, for load-balancer testing.
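
The before/after snippets above didn't survive, but presumably the change is the bridge selection strategy in Jicofo's configuration. With the classic sip-communicator.properties it would look roughly like this (a sketch; key names vary between Jicofo versions, so check your deployment):

```properties
# /etc/jitsi/jicofo/sip-communicator.properties (sketch)
# The default strategy keeps a conference on a single bridge;
# SplitBridgeSelectionStrategy deliberately spreads each conference's
# participants across all bridges, which is useful for load-balancer testing.
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy
```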

On the test machine:

  1. Install Maven and clone this testing repo: Malleus torture test

  2. Go to the malleus test folder and run this:

I can only get 2 users per command; the org.jitsi.malleus.participants setting doesn't seem to work here. So you need to run multiple commands to get more participants into the same room, testroom. The URL ending /testroom# makes the multiple commands join the same room, for the big-room test.
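
Since the exact command above was lost, here is a sketch of the kind of invocation the torture-test repo documents (property names and values are from memory and may differ in your checkout; meet.example.com is a placeholder for your deployment). The command is only printed, so you can review it before running it:

```shell
#!/bin/sh
# Sketch of one malleus run; launch several of these in parallel, all pointing
# at the same /testroom# URL, to grow one big room.
ROOM="testroom"                    # same room in every command => one big room
URL="https://meet.example.com"     # placeholder: your Jitsi deployment

CMD="mvn test \
  -Djitsi-meet.tests.toRun=MalleusJitsificus \
  -Dorg.jitsi.malleus.conferences=1 \
  -Dorg.jitsi.malleus.participants=2 \
  -Dorg.jitsi.malleus.senders=2 \
  -Dorg.jitsi.malleus.duration=300 \
  -Djitsi-meet.instance.url=${URL}/${ROOM}#"

echo "$CMD"   # review first; then run it (and more copies) to add participants
```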

If you use Ubuntu Server, don't run it as the root user.

Remember, this testing is very heavy on the test machine. I needed 64 CPUs to get 43 users!

Hi, thank you for the details. We already have a functional and heavily optimized setup with Octo and more than 1000 users online every day. I was only curious what our setup is good for :slight_smile: and what the next limit will be. From our observations, clients' computers are our limit, and there is nothing we can do about that.

Hi @migo, if you run more tests, can you share them here :slight_smile:

Hi, if I get some results I'll post them for sure. :wink: I'm afraid I don't have enough test hosts to produce the needed load on our Jitsi installation. As you already stated, this is very heavy on the test machine(s).


Hi @congthang,

Thank you for the great benchmark =). I have a few questions, if I may:

  • How did you create the performance charts? Which tools did you use for the setup?
  • How did you check the status? Is it based on data you collected from the conference, or were you part of it, observing the quality and stability?

Thank you

Hello @congthang @migo, did you figure out a way to scale shards? We have load tested our K8s cluster and found what we feel are the limits of how many conferences/users Prosody can handle per shard. Now we need to figure out how to use K8s to scale out more shards. Any suggestions?

  • The first charts I showed are my data entered into Google Sheets; the others are charts from DigitalOcean, where I placed the testing server.
  • I joined the test room with my own camera and microphone and watched the room directly.

In my test, since I don't have chat or some other modules, my Prosody server had almost no load. See the chart above. So Prosody can be scaled with sticky sessions on K8s, I think.
If you have any Prosody benchmarks, please share them here so I can work with them, as I will need this later :slight_smile:

Well, our Prosody is sometimes heavily loaded when a lot of users (1000+) join their rooms at once. I have seen Prosody consume 90-100% of one CPU (it is not multi-threaded). So for us, Prosody can be the next limiting factor in some scenarios…

Hi, I haven't deployed any JVB scaling, but there are scripts on this forum to do so… try searching for them.

Hi, did you separate Prosody from the Web module and the Jicofo module and check which one is heavy? As I understand it, Prosody handles only the XMPP messaging. People connect to the Web component and are routed to the modules, so Web is not under heavy load either, I think.
Even the WebSocket is routed to the JVB.
The load is mostly on Jicofo, as it controls the room: people joining and leaving, placing new attendees on a JVB, adding or removing JVBs…

Actually, on my Kubernetes setup I have multiple shards already lol; I just need a proxy in front to route people in the same room to the same shard. Each shard has its own Jicofo and JVBs.

But this won't work for a big room, where all participants need to be on the same Jicofo. In that case, the only way is to scale Jicofo vertically onto a bigger machine.
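
A routing proxy like that could be sketched in nginx, hashing the room name (the URI path) so every participant of a room lands on the same shard. This is a hypothetical sketch, not from the original posts; upstream names and TLS details are placeholders:

```nginx
# Hypothetical: pin each room to one shard by hashing the request URI,
# which contains the room name, so /myroom always maps to the same shard.
upstream jitsi_shards {
    hash $request_uri consistent;
    server shard-0.example.internal:443;
    server shard-1.example.internal:443;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://jitsi_shards;
        proxy_http_version 1.1;                    # needed for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```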

Hi, yes, the Prosody process consumes those CPU cycles. We have a secure domain with LDAP auth, so it's not the usual setup here. I have only one shard, which is enough for our needs. nginx serves 50-70 req/s at the start of an educational block, which is about 100 Mbit/s of traffic to clients, so no real problem. Here are some of today's stats to give a picture:
(Daily charts: CPU, Prosody users, network interface traffic, nginx status, nginx requests.)


@migo hey, did you figure out how to auto-scale Prosody?

Hi, I did not. :frowning: I've only disabled info logging and left warning and higher levels enabled. Did you find something?

Not sure why the Jitsi team isn't moving to ejabberd, which offers HA. If they moved to ejabberd, then the only remaining concern would be Jicofo.

Is your setup working fine with WebSocket? I'm trying to port this project to use WebSocket, but no success so far.