Jitsi JVB 2 performance testing 2020

Hi,
I tested Jitsi JVB 2 performance today.
The test ran only the jitsi/jvb:latest Docker image on Kubernetes, using this Jitsi Kubernetes deployment (background load around 3% CPU).

Here are the results:

1. Test case 1:

2. Test case 2: Using 2 JVB instances (2 dedicated CPUs, 4 GB RAM each, on DigitalOcean). All users send video and audio to the same room.

  • Option: channelLastN: 10 (see the sketch after this list).
  • Octo set up with the SplitBridgeSelectionStrategy strategy in the Jicofo settings to distribute load to both JVBs.
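
channelLastN is the standard jitsi-meet config.js option that caps how many video streams are forwarded to each participant. A minimal sketch of setting it to 10, assuming the /config/config.js path used by the docker-jitsi-meet web container and that the file still carries the default commented-out line (both are assumptions):

    # Hypothetical: enable channelLastN=10 in the web container's config.js
    sed -i 's|// channelLastN: -1,|channelLastN: 10,|' /config/config.js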

Conclusion:

  • JVB capacity scales roughly linearly with the number of CPUs.
  • The load balancing splits the load between the JVBs well; adding more JVBs reduces the load and allows bigger rooms!
  • The conference ran well without any user being disconnected.
  • A rough estimate: 1 CPU can handle 15 users, 2 CPUs 30 users, 4 CPUs 60-70 users, etc.
  • The main Web, Jicofo and Prosody services don't carry much load (without chat).
    With region support and load balancing across JVBs, Jitsi could host a 1000-user room on 16 JVB instances (4 CPUs, 8 GB RAM each) if everyone has camera and audio on, or far fewer instances if only the presenter uses a camera, I think!
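
A quick back-of-envelope check of that 1000-user figure, using the ~15 users per CPU estimate from above (the numbers are only this test's rough estimate):

    # 16 JVB instances x 4 CPUs x ~15 users per CPU
    echo $((16 * 4 * 15))    # ~960 users, close to the 1000-user estimate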
    Thanks for this great tool!

Thanks for this. This seems reasonable: at some point we stopped worrying as much about maximizing throughput for a single machine since we scale over Octo instead. Usually when things slow down we take opportunities to do some profiling and find/fix some inefficiencies that have crept up.


Hi @congthang, can you please write the exact steps to reproduce your measurements? I'll try to run them on our setup (13 JVBs) to see how far it can really go.

Thank you,

Milan

Hi,

  1. Use this repo Jitsi k8s deploy to deploy Jitsi Meet with scalable JVBs.
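
For scaling the JVBs afterwards, a minimal sketch, assuming that repo runs the JVBs as a Kubernetes Deployment named jvb with an app=jvb label (both names are assumptions):

    # Hypothetical: scale the JVB pool to two instances for the test
    kubectl scale deployment jvb --replicas=2
    kubectl get pods -l app=jvb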

  2. Replace this setting
    org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy

in
/base/web-base/jicofo-configmap.yaml

with this:

org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy

This spreads the load across all JVBs instead of sending everything to one JVB, for load-balancer testing.
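
One possible way to apply that change from the repo root (a sketch; the configmap path is from the step above, while the Jicofo deployment name and the restart step are assumptions):

    # Hypothetical: switch the bridge selection strategy and roll out the new Jicofo config
    sed -i 's/RegionBasedBridgeSelectionStrategy/SplitBridgeSelectionStrategy/' base/web-base/jicofo-configmap.yaml
    kubectl apply -f base/web-base/jicofo-configmap.yaml
    kubectl rollout restart deployment jicofo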

On the test machine:

  1. Install Maven and download this testing repo: Malleus torture test
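
A minimal sketch of that step, assuming a Debian/Ubuntu test machine and the jitsi-meet-torture repository that hosts the Malleus tests (the tests also need Chrome on the machine):

    # Install prerequisites and fetch the torture-test suite
    sudo apt-get update && sudo apt-get install -y maven git
    git clone https://github.com/jitsi/jitsi-meet-torture.git
    cd jitsi-meet-torture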

  2. Go to the Malleus test folder and run this:
    mvn \
    -Dthreadcount=1 \
    -Dorg.jitsi.malleus.conferences=1 \
    -Dorg.jitsi.malls.participants=2 \
    -Dorg.jitsi.malleus.senders=2 \
    -Dorg.jitsi.malleus.audio_senders=2 \
    -Dorg.jitsi.malleus.duration=100000 \
    -Dorg.jitsi.malleus.room_name_prefix="testroom" \
    -Djitsi-meet.tests.toRun=LongLivedTest \
    -Dwdm.gitHubTokenName=jitsi-jenkins \
    -Dremote.resource.path=/usr/share/jitsi-meet-torture \
    -Djitsi-meet.instance.url=https://meet.yourjitsidomain.com/testroom# \
    -Dchrome.disable.nosanbox=true \
    test

I could only get 2 users per command; the org.jitsi.malls.participants setting here seems not to work. So you need to run multiple commands to get more participants into the same testroom room. The trailing /testroom# on the URL makes the separate commands join the same room, for big-room testing.
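
A minimal sketch of that workaround, assuming each run contributes about 2 participants; run_malleus.sh is a hypothetical wrapper holding the full mvn command from step 2:

    # Hypothetical: launch 5 parallel runs, all joining the same testroom room (roughly 10 participants total)
    for i in 1 2 3 4 5; do
      ./run_malleus.sh &    # run_malleus.sh is assumed to contain the full mvn command from step 2
    done
    wait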

If you use an Ubuntu server, don't run it as the root user.

Remember, this testing is very heavy on the test machine. I needed 64 CPUs to get 43 users!

Hi, thank you for the details. We already have a functional and heavily optimized setup with Octo and more than 1000 users online every day. I was just curious what our setup is capable of :slight_smile: and what the next limit will be. From our observations, the clients' computers are our limit and there is nothing we can do about that.

Hi @migo, if you run more tests, can you share them here? :slight_smile:

Hi, if I have some results I'll post them for sure. :wink: I'm afraid I don't have enough test hosts to produce the needed load on our Jitsi installation. As you already stated, this will be very heavy on the test machine(s).
