Jitsi Large Conference without Scalable Setup

Hello again!
Sorry if I am asking too much; I searched for a similar article but couldn’t find one.

Can Jitsi handle a large conference with at least 200 users in the same room, without a scalable setup?
If it can, what should I configure?


Yes, it can. You’d just need to watch out for Prosody, because it’s single-threaded. You can search the forum for suggestions on how to improve Prosody’s performance.
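For reference, one of the Prosody tweaks most often suggested on the forum is switching to the epoll network backend, which handles many concurrent connections better than the default select-based one. A minimal sketch, assuming Prosody 0.11+ (check the setting name against your version’s docs):

```lua
-- /etc/prosody/prosody.cfg.lua (global section)
-- Use the epoll-based network backend; it scales better than the
-- default select() backend when many clients are connected.
network_backend = "epoll"
```

Restart Prosody after changing this and confirm in the logs that the epoll backend was actually loaded.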

So the bottleneck is in Prosody.
Thanks for the answer! I’ll try to work on that!

The maximum handled at meet.jit.si is said to be 300, and personally I doubt that it involves 300 full-HD video streams at the same time. However, it involves optimizations by the best Jitsi engineers available, so you may have to work a bit to get there.

Small correction here: we support up to 500 participants in a call, with 300 sending video in HD. This is what we test with, and we are still improving some aspects of this scenario.


Right. That means with pagination? That is, video is enabled, but the 300 streams are not actually sent to all 500 users?

Exactly: the bridge sends only the streams that are visible in tile view or the filmstrip. You receive just what is visible in the UI.
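On the client side there is also an explicit cap on how many video streams each participant receives. A hedged sketch of the relevant jitsi-meet `config.js` option (the value 25 here is just an illustration):

```javascript
// /etc/jitsi/meet/<your-domain>-config.js
// channelLastN caps how many video streams the bridge forwards to each
// client; streams beyond that (i.e. not visible in tile view or the
// filmstrip) are not sent. -1 means unlimited.
channelLastN: 25,
```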

Just to ask for clarification: does that mean a single server, with Jitsi Meet, Prosody, Jicofo, Jibri, etc. all on the same machine?
Can anyone suggest what specs are needed for a server running on a public cloud, say Azure or AWS?

@damencho How beefy of a server do you need for the 500-participants / 300-sending scenario?

We have tested with 100 test participants, all sending simulcast VP8 video and all receiving either 1 × 720p, 9 × 360p, or 24 × 180p. For us, a bare-metal 6-CPU (12-thread) 3+ GHz machine runs fine, but if we try the same test on a virtual server (even one with 20 CPUs / 40 threads), we see periodic audio problems. (We’re using 64 kbps Opus for audio… could that be a factor?)

Anything special we should tune on the server to help with these large conferences? We would especially like to move off bare metal but right now we can’t because of the audio problems on virtual servers.


Something like an AWS m5.xlarge for the signalling node and 5 c5.xlarge instances should be fine for a 500-participant call, with the JVBs enabled for Octo, of course.


Are they dedicated vCores? For a provider (Hetznet), you can see the price difference; it’s there for a reason. For real-time workloads you need dedicated cores.

We tried virtual machines on Hetznet, Oracle, and OVH clouds. All claim dedicated CPUs. We saw the same results.

I’m wondering if something is wrong with our implementation, because we’re not scaling nearly as well as @damencho says he sees, and I trust @damencho 100%. : )

How many bridges were in the meeting when you were testing?

Just 1. From your question, I assume we’re supposed to be using Octo to scale to 100 full participants?

We are scaling to 100 just fine on 1 bridge if we use bare metal.

Is there a writeup somewhere on how to implement 100-person meetings? If there is, I haven’t been able to find it.

I think you are stressing the bridge too much, which is what caused your issues. If you monitor the bridge, you will see the stress metric it pushes, and especially the RTP delay. Those metrics are useful for tuning the system based on the machines you use.
To be able to use several bridges in a single conference, you need to configure Octo.
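To make that last step concrete, here is a minimal sketch of Octo-related settings in the HOCON-style config files. The keys below match the jvb.conf/jicofo.conf generation of the config; exact key names differ between JVB and Jicofo versions (newer bridges replaced Octo with "relays"), and the address is a placeholder, so check the documentation for your versions:

```hocon
# /etc/jitsi/videobridge/jvb.conf (on each bridge)
videobridge {
  octo {
    enabled = true
    # Placeholder: the private address the bridges use to reach each other.
    bind-address = "10.0.0.11"
    bind-port = 4096
  }
}

# /etc/jitsi/jicofo/jicofo.conf (on the signalling node)
jicofo {
  octo {
    enabled = true
  }
  bridge {
    # Spread a single conference across several bridges; region-based
    # selection is the other common strategy.
    selection-strategy = SplitBridgeSelectionStrategy
  }
}
```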


@damencho Thank you. I will point my developers towards the stress metric and rtp delay to guide us.
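For anyone else doing this, the bridge stats can be polled (for example from the JVB’s colibri stats endpoint, if enabled) and checked against a stress threshold. A small sketch; the payload below is hypothetical and field names such as `stress_level` vary by JVB version, so verify them against what your bridge actually reports:

```python
import json

# Hypothetical sample of a bridge stats payload; real field names and
# values depend on your JVB version and configuration.
SAMPLE_STATS = json.loads("""
{
  "stress_level": 0.85,
  "rtp_loss": 0.02,
  "participants": 100
}
""")

def bridge_health(stats, stress_threshold=0.8):
    """Return a coarse health label from one bridge stats snapshot."""
    stress = stats.get("stress_level", 0.0)
    if stress >= stress_threshold:
        return "overloaded"
    return "ok"

print(bridge_health(SAMPLE_STATS))  # -> overloaded (0.85 >= 0.8)
```

In practice you would fetch the JSON from the bridge periodically and alert (or stop assigning conferences to that bridge) when it reports as overloaded.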

Yes, the virtualization could be the issue. When everything is taken into account, a virtual machine is just a process in the hypervisor, and it must be very difficult (maybe even impossible) to separate, inside a VM, the processes that need to run in real time from those that don’t. And in a commercial hypervisor you can’t just decide that Omatic’s whole VM should be real-time while the other customers’ VMs wait. That’s the difference between a VM and bare metal, I guess.
The only way is to be smarter about buffering, up to a limit, because you can’t delay data in a meeting indefinitely. Did you try RED, BTW? IIRC you once said you would report back on your experience. I think I have seen recently that there were changes in Chrome to support RED better.

We have not had a window to try RED yet, but that’s a really good point! It could help mitigate the audio problems.
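For anyone following along: RED (redundant audio encoding, which lets the receiver reconstruct lost audio packets from redundant copies) can be toggled from the jitsi-meet config. A hedged sketch, assuming a `config.js` recent enough to have this flag (verify the flag name against your jitsi-meet version):

```javascript
// /etc/jitsi/meet/<your-domain>-config.js
// Send redundant Opus audio (RED): trades extra audio bandwidth for
// robustness against packet loss.
enableOpusRed: true,
```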

My current plan, based on Damencho’s advice, is to run larger conferences on Octo-enabled bridges, opening up the potential for VMs to be used for peak needs.