Hosting a large meeting on Jitsi

I am evaluating Jitsi for hosting an event with about 400-450 participants (a meeting with cameras) and a YouTube stream of the event to about 3000 people.
My concern is: has Jitsi ever been used to host a meeting of this size? If so, what were the experiences, and what precautions and system design should be used? Should I use a load balancer, a specific AWS instance type, etc.?


bump

Jitsi has been used with up to 30-40 people with cameras enabled.

Jitsi has been used with up to 120 users in the same room when only a handful have cameras enabled.

Jitsi has been used with 3000+ viewers on YouTube.

Thanks @xranby for the answer. I want to know whether Jitsi can handle such an event. If I have to set up a test call, what should the config be (number of meet servers / number of videobridges), etc.?

Out of the box you will not be able to host a meeting with 400+ participants with cameras enabled. The bottleneck is not on the server side; it is on the client web browser side: each client cannot handle the network bandwidth nor the processing of 400 incoming video streams.

A typical client web browser can handle up to 10 simultaneous video streams; a high-end client works with up to 30. This is why Jitsi works well up to around those numbers but not above them.

A Jitsi server with additional videobridges can handle several hundred simultaneous participants, typically 100 participants per videobridge, but the participants then have to be grouped into separate rooms of 10-30 each.
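As a rough back-of-the-envelope sketch of that grouping (the ~100 participants per videobridge and the 10-30 person rooms are the numbers from this thread; the exact room size chosen here is just an example):

```ts
// Rough capacity estimate for splitting a large event into smaller rooms.
// Figures per this thread: ~100 participants per videobridge, rooms of 10-30.
const totalParticipants = 450;
const roomSize = 25;                 // must stay within the 10-30 a browser can render
const participantsPerBridge = 100;   // rule of thumb quoted above

const rooms = Math.ceil(totalParticipants / roomSize);
const bridges = Math.ceil(totalParticipants / participantsPerBridge);

console.log(`${rooms} rooms of ~${roomSize} people, spread over ~${bridges} videobridges`);
```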

Why is Jitsi sending every participant's feed back to everyone? WebRTC supports SFUs and simulcast, which can be used to selectively forward feeds to a client browser. We could have one main HD stream plus around 20 low-quality thumbnail streams, and the user could switch the low-quality streams in batches of 20…

20 low-quality thumbnail streams still consume about 4 Mbit/s of bandwidth, which may be too much for some clients' internet connections.
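That 4 Mbit figure is consistent with roughly 200 kbps per thumbnail stream; a quick sanity check, where the per-stream bitrate is my assumption rather than a documented Jitsi number:

```ts
// Sanity check of the ~4 Mbit/s figure quoted above.
// Assumption: each low-quality simulcast thumbnail is roughly 200 kbps.
const thumbnails = 20;
const kbpsPerThumbnail = 200;
const totalMbps = (thumbnails * kbpsPerThumbnail) / 1000;
console.log(`${thumbnails} thumbnails ≈ ${totalMbps} Mbit/s downstream`); // ≈ 4 Mbit/s
```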

Fullscreen mode would be ideal for Jitsi: as an SFU, in optimal conditions it only needs to send the active speaker's video. You can also improve Jitsi by configuring it to use lastN and by working to enable off-stage layer suppression.
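For what it's worth, lastN is exposed as channelLastN in jitsi-meet's config.js; a minimal sketch, assuming a jitsi-meet version where these option names exist (the values are only examples):

```ts
// Excerpt of a jitsi-meet config.js-style object (sketch, not a full config).
// channelLastN caps how many remote video streams the bridge forwards to each
// client, prioritising the most recent active speakers.
const config = {
  channelLastN: 5,           // forward at most 5 remote videos per client (-1 = unlimited)
  startWithVideoMuted: true, // example: reduce initial load by joining with video off
};
```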

Only extending Jitsi with an MCU mixing server would make it easy to host larger meetings.

I did some calculations on the numbers here:

I have read in this forum that Jitsi plans to support very large meetings of hundreds of participants with camera and microphone enabled (I think). Is there a roadmap or plan for how to get there, an approximate timeline, or at least the steps? It is embarrassing that there is no FOSS alternative to Zoom right now.

How does Zoom support large meetings? Does it combine the media streams into one and stream that to one web page, or is that only in their non-web Zoom application?

Just a ping: how does Zoom handle this amount of load on the client side? As far as I know, Zoom is also not an MCU but more of an SFU with its own solution. What is the main thing that enabled Zoom to accomplish this that we can't do in Jitsi?
Thanks in advance :heart:


4 Mbit/s is too high for some clients in 2021?!? Seriously???

Zoom is an MCU, not an SFU. They create a composite from all the video streams and send you a single 1.5 Mbps downstream. It's 2009-era technology and has quality and flexibility issues.

Jitsi uses a WebRTC SFU, which does not process the video streams but just acts as a router and delivers them to the endpoints.
Less processing means better quality, since the result is closer to the original capture from your cam/mic.
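To make the trade-off concrete, here is a rough per-client downstream comparison of the two models; the 1.5 Mbps composite is the figure quoted above, while the SFU per-stream bitrates are my assumptions:

```ts
// Per-client downstream: MCU composite vs SFU forwarding.
// Assumptions: MCU composite ≈ 1.5 Mbit/s (figure quoted above); the SFU
// forwards one high-quality active speaker (~1.5 Mbit/s) plus low-quality
// thumbnails (~0.2 Mbit/s each) for the rest.
function mcuDownstreamMbps(): number {
  return 1.5; // one mixed stream, independent of participant count
}

function sfuDownstreamMbps(forwardedStreams: number): number {
  const activeSpeakerMbps = 1.5;
  const thumbnailMbps = 0.2;
  return activeSpeakerMbps + (forwardedStreams - 1) * thumbnailMbps;
}

for (const n of [5, 10, 20, 30]) {
  console.log(
    `${n} forwarded streams: MCU ${mcuDownstreamMbps()} Mbit/s, SFU ${sfuDownstreamMbps(n).toFixed(1)} Mbit/s`
  );
}
```

The only point of the sketch is that SFU downstream grows with the number of forwarded streams, which is exactly what lastN and simulcast are there to bound.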

I was looking at this thread because I'm trying to find a solution for 100+ participant conferences on mobile devices.

It seems like phones/tablets can't handle that load. Does anyone have experience with this issue?

The main thing is that with a traditional MCU structure, the upload and download speeds required of the client are really low (especially the download speed), as only one stream goes up to the server and one mixed stream comes back down to the client for display, and the download speed depends on exactly that amount of data.

I also thought that Zoom used a traditional MCU, but the CPU cost of a traditional MCU is far too high to handle hundreds of users simultaneously. So they said: "Instead of a traditional MCU we use Multimedia Routing", which enables them to host 15x the participants with the same configuration. This is where I am stuck, as I am not familiar with this structure. I understand a traditional MCU or an SFU, but what is this multimedia routing that enables them to host 15x more participants with such low CPU cost on the server?

You can check out this link: does Zoom use MCU architecture?
Please share your thoughts about this architecture; I couldn't find much about the high-level structure, only details about fragments of the architecture used to improve quality, like Equinix or the protocols they are using.

Correct, Zoom is not a Multipoint Control Unit. It uses a kind of cascading multimedia routing protocol that they developed themselves. https://zoom.us/docs/doc/Zoom%20Connection%20Process%20Whitepaper.pdf

rn1984 is right that Zoom creates a single composite for each endpoint, but that composite is not the same for everyone in the meeting. They have load balancing internal to their cloud.

Does anyone know what Jitsi is planning to do, and when, to support 100+ users? Do they have a roadmap for how to get there?