Hi,
I have tested Jitsi JVB 2 performance today.
The test runs only the jitsi/jvb:latest Docker image on Kubernetes, using this Jitsi Kubernetes deployment (background load around 3% CPU).
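As a rough illustration of that setup, here is a minimal sketch of a JVB-only Deployment sized like the machine in Test case 1 below. The names, labels and resource numbers are illustrative, and the XMPP/auth environment variables the jitsi/jvb image needs are omitted; take those from your own docker-jitsi-meet configuration.

# Sketch only: a JVB-only Deployment sized like the test machine (2 CPU / 4 GB).
# The XMPP/auth env vars required by the jitsi/jvb image are intentionally left out.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:latest
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
            limits:
              cpu: "2"
              memory: 4Gi
EOF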
Here are the results:
1. Test case 1:
Specs: 1 JVB, 2 CPU / 4 GB RAM on a CPU-optimized DigitalOcean droplet.
Option: channelLastN: 10,
All users send video and audio to the same room and join one by one, using the LongLivedTest test from jitsi-meet-torture.
JVB usage depends linearly on the number of CPUs.
The load balancer splits the load between the JVBs well; adding more JVBs helps reduce load and allows bigger rooms!
The conference ran well without any user getting disconnected.
As a rough estimate, 1 CPU can handle 15 users, 2 can handle 30, 4 can handle 60-70, etc…
The main Web, Jicofo, and Prosody components don't have much load (without chat).
With region support and load balancing across JVBs, I think Jitsi could host a 1000-user room with 16 JVB instances (4 CPU / 8 GB RAM each): 16 × 4 CPUs × ~15 users per CPU ≈ 960 users if everyone has camera and audio on, or far fewer instances if only the presenter uses a camera.
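For anyone who wants to try the region-aware bridge selection mentioned above, here is a rough sketch of the kind of configuration involved. The property names are taken from the Octo setup guides for JVB 2 of this era and should be treated as assumptions; newer releases moved JVB settings into jvb.conf, and the region name is just an example.

# Sketch only: property names may differ in your version, verify against the Octo docs.
# Tag each bridge with its region (example region name):
echo "org.jitsi.videobridge.REGION=eu-central-1" \
  >> /etc/jitsi/videobridge/sip-communicator.properties

# Tell Jicofo to prefer bridges in the participant's region:
echo "org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy" \
  >> /etc/jitsi/jicofo/sip-communicator.properties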
Thanks for these great tools!
Thanks for this. This seems reasonable: at some point we stopped worrying as much about maximizing throughput for a single machine since we scale over Octo instead. Usually when things slow down we take opportunities to do some profiling and find/fix some inefficiencies that have crept up.
Hi @congthang, can you please write the exact steps to reproduce your measurements? I'll try to run that on our setup (13 JVBs) to see how far it can really go.
Go to the malleus test folder and run this:
mvn \
  -Dthreadcount=1 \
  -Dorg.jitsi.malleus.conferences=1 \
  -Dorg.jitsi.malleus.participants=2 \
  -Dorg.jitsi.malleus.senders=2 \
  -Dorg.jitsi.malleus.audio_senders=2 \
  -Dorg.jitsi.malleus.duration=100000 \
  -Dorg.jitsi.malleus.room_name_prefix="testroom" \
  -Djitsi-meet.tests.toRun=LongLivedTest \
  -Dwdm.gitHubTokenName=jitsi-jenkins \
  -Dremote.resource.path=/usr/share/jitsi-meet-torture \
  -Djitsi-meet.instance.url=https://meet.yourjitsidomain.com/testroom# \
  -Dchrome.disable.nosanbox=true \
  test
I can get only 2 users per command; the org.jitsi.malleus.participants option here does not seem to work, so you need to run multiple commands to get more participants into the same room (testroom). The /testroom# at the end of the URL makes the separate commands join the same room, for the big-room test.
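Since each invocation only adds two participants, one way to grow the room is to launch the same command several times in parallel. A minimal sketch, assuming the mvn command above is saved as run-torture.sh and the test host has enough CPUs for the extra Chrome instances (the script name and counts are just placeholders):

# Hypothetical helper: start 10 copies of the torture command in the background,
# each adding 2 participants to the same /testroom# room (about 20 participants total).
for i in $(seq 1 10); do
  ./run-torture.sh > "torture-$i.log" 2>&1 &
  sleep 5   # stagger the joins a little
done
wait   # keep the shell open until all test runs finish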
If you use an Ubuntu server, don't run it as the root user.
Remember this test is very heavy on the test machine: I needed 64 CPUs to get 43 users!
Hi, thank you for the details. We already have a functional and heavily optimized setup with Octo and more than 1000 users online every day. I was only curious what our setup is good for and what the next limit will be. From our observations, the clients' computers are our limit, and there is nothing we can do about that.
Hi, if I get some results I'll post them for sure. I'm afraid I don't have enough test hosts to produce the needed load on our Jitsi installation. As you already stated, this will be very heavy on the test machine(s).
Hello @congthang @migo, did you figure out a way to scale shards? We have load tested our K8s cluster and found what we feel are the limits of how many conferences/users Prosody can handle per shard. Now we need to figure out how to use K8s to scale to more shards. Any suggestions?
In my test, since I don't have chat or some other modules, my Prosody server had almost no load; see the chart I showed above. So I think Prosody can be scaled with sticky sessions on K8s.
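A minimal sketch of what sticky sessions could look like with plain Kubernetes Service session affinity; the name, selector and port are placeholders, and note that this only pins a given client IP to one backend, so a real multi-shard setup would more likely use cookie- or room-aware routing at the ingress:

# Sketch only: pin each client IP to one Prosody/shard backend via Service session affinity.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: prosody-sticky
spec:
  selector:
    app: prosody          # placeholder selector for your shard pods
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 5280          # Prosody BOSH/WebSocket port
      targetPort: 5280
EOF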
If you have any Prosody benchmarks you can share here, that would help, as I will need this later.
Well, our Prosody is heavily loaded sometimes when a lot of users (1000+) join their rooms at once. I've seen Prosody consume 90-100% of one CPU (it is not multi-threaded), so for us Prosody can be the next limiting factor in some scenarios…
Hi, did you separate Prosody from the Web module and the Jicofo module and check which one is heavy? As I understand it, Prosody only does XMPP for server-to-server messaging. People connect to the Web component and are then routed to the other modules, so I don't think the Web component carries the heavy load either.
Even the WebSocket is routed to the JVB.
The load is mostly on Jicofo, as it controls the room: people joining and leaving, placing new attendees onto a JVB, adding or removing JVBs…
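If the components run as separate pods, a quick way to check which one is actually heavy is to compare per-pod usage while users are joining. This assumes metrics-server is installed and the pods carry per-component labels; the label values here are illustrative:

# Sketch: per-component CPU/memory during a join burst (label names are assumptions).
kubectl top pod -l app=prosody
kubectl top pod -l app=jicofo
kubectl top pod -l app=web
kubectl top pod -l app=jvb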
Actually, on my Kubernetes setup I have multiple shards already, lol; I just need a proxy in front of them to route people in the same room to the same shard, and each shard has its own Jicofo and JVBs.
But this will not work for a big room, where all the people need to be on the same Jicofo. In that case the only way is to scale Jicofo vertically onto a bigger machine.
Hi, yes, the Prosody process consumes those CPU cycles. We have a secure domain with LDAP auth, so it's not the usual setup here. I have only one shard, which is enough for our needs. nginx serves 50-70 req/s at the start of an educational block, which is about 100 Mbit/s of traffic to clients, so no real problem. Here are some of today's stats to give a picture: