Support for large (100+ user) conferences: Timeline, and contribution

Hi All,

Thank you in advance for your great work on Jitsi!

I am working with a K-12 cyber charter organization, a heavy user of video classes, to tailor and launch a self-hosted Jitsi system for online classes. Currently we’re using conferencing from another vendor.

Most classes are quite small and are a great fit for Jitsi in its current state; however, there are a few courses with over 100 participants. Initially we were planning to modify the jitsi-meet client to display a CDN-distributed livestream for these large classes while still allowing participation and questions from students, with LastN=1 or 2 of course. More recently, we have been hoping to (mis?)use Octo to distribute a single large conference over multiple JVBs even though the users are all geographically nearby. This has only gotten to the small proof-of-concept stage.
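
For anyone following along: the LastN cap mentioned above is normally set via channelLastN in jitsi-meet’s config.js. A minimal sketch, assuming a standard Debian-style install (the exact config path varies by deployment):

// /etc/jitsi/meet/<your-domain>-config.js
// Limit how many remote video streams each client receives at once.
channelLastN: 2,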

In this thread Damencho mentions:

Currently you cannot create a meeting with 200 people. We have a hard limit of 75 participants, but even more than 35, the experience will suffer. But we are working on adding big meetings with more participants (more than 100).

Could anyone provide an update on the timeline for this feature, and also point us toward a way to try even a rough beta version of large rooms if possible? We are also happy to participate in the development and testing of this feature in any way we can.

Thanks!

Blake

–edit–
I see in [jitsi-dev] JVB scalability from Boris that there isn’t a specific timeline, but we’re still happy to assist with the development of this feature in any way possible.

Did you get anything out of it? We are trying to enable a large room (around 250 participants) and followed the same path as you: we tried a very low LastN and tried disabling every webcam and mic but one, but it wasn’t enough. So we are now thinking of trying to (mis? :grinning:) use Octo just to be able to distribute a single room across many JVBs. What were the first results of your proof of concept?

Hi Arzar,
I never got to the point of testing real load, but I did successfully get multiple users onto two JVBs, running in Docker on a single host, by enabling Octo with

org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy

so that it tried to push each new user to a different bridge, without having to assign users to different regions.
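
For anyone finding this later: that line goes into Jicofo’s sip-communicator.properties on older releases. Newer Jicofo versions read the HOCON jicofo.conf instead; as far as I can tell the equivalent setting looks roughly like the sketch below (verify the key name against your version’s reference.conf). Note that Octo itself still has to be enabled on each bridge separately.

# /etc/jitsi/jicofo/jicofo.conf (sketch for newer releases)
jicofo {
  bridge {
    # Spread participants across bridges instead of filling one bridge first.
    selection-strategy = SplitBridgeSelectionStrategy
  }
}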

Hi @blivingston, is the dockerized JVB taken from here: https://github.com/jitsi/docker-jitsi-meet ? Also, importantly, can the Docker setup be scaled up to more than two JVBs on a single host (the assumption being that it could then handle more rooms and participants as well, since there would be more JVBs)?

Thanks for the enlightenment :slight_smile:

Hi Janto, I was testing with https://github.com/jitsi/docker-jitsi-meet . To add another JVB I had to go through a few steps: I copied the videobridge service declaration in docker-compose.yml, made a second one, and assigned it new ports and a new config directory. Then, after a first run, I edited the generated config for the new container to use the new ports.
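
Roughly, the duplicated service ended up looking like the sketch below. The variable and volume names follow docker-jitsi-meet’s .env conventions as I remember them, so treat them as placeholders and diff against the stock jvb service in your own checkout:

# docker-compose.yml (sketch): second bridge alongside the stock "jvb" service
jvb2:
    image: jitsi/jvb
    restart: unless-stopped
    ports:
        - '10001:10001/udp'         # different media port than jvb's 10000/udp
    volumes:
        - ${CONFIG}/jvb2:/config    # separate config dir so the bridges don't clash
    environment:
        - JVB_PORT=10001
        - XMPP_SERVER=${XMPP_SERVER}
        - DOCKER_HOST_ADDRESS=${DOCKER_HOST_ADDRESS}
    networks:
        meet.jitsi: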

I don’t think this would actually provide better performance, though: JVB2 at least seems to use as many cores as you have (I could be wrong about that). I was mostly using it to test out the Octo configuration without the trouble of multiple VMs.

Has anybody been able to figure this out? SplitBridgeSelectionStrategy is a good start for testing purposes, but I’m looking for a production-ready solution. Jicofo does an excellent job load balancing different conferences onto different bridges, but not load balancing multiple participants within the same meeting.

We do prefer to put participants in the same meeting on the same bridge, but we won’t go past the thresholds, so if a JVB is overloaded we’ll start putting participants on a new bridge.

This is great information that I didn’t know. What is the threshold? I would like to confirm that my configuration behaves this way before committing to my leadership that I can host a meeting with 300 people.

You can start here to see how it’s determined if a bridge is ‘overloaded’
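
For reference, the packet-rate based limits on the Jicofo side are configurable. A sketch of the knobs as I understand them, with roughly their default values; the exact key names differ between the old sip-communicator.properties and the newer jicofo.conf, so verify against reference.conf for your release:

# jicofo.conf (sketch; key names and defaults are my reading of reference.conf)
jicofo {
  bridge {
    # Estimated packet rate each participant adds to a bridge.
    average-participant-packet-rate-pps = 500
    # Above this total packet rate a bridge is treated as overloaded and
    # new participants start landing on other bridges.
    max-bridge-packet-rate = 50000
  }
}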

To handle large rooms, we run our videobridges on bare metal with realtime kernels, not containerized or virtualized.

Hi @rasos, that’s very interesting! If you have time to share, I’d love to hear more about the observations and performance measurements that led to using bare metal and an RT kernel. For instance, is high-volume UDP traffic much happier with an RT kernel or outside of virtualization?

In the last few days, I’ve load tested using Jitsi Malleus and a Selenium Grid scaled out on AWS Elastic Container Service, adding up to 100 observers and 1 real presenter to a two-bridge setup (8-core m4.2xlarge each) using Octo and SplitBridgeSelectionStrategy. While the CPUs had plenty of headroom on both bridges, the presenter’s video and audio were visibly a bit choppy.
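
In case it helps anyone reproduce this: the observers were generated with jitsi-meet-torture’s Malleus script pointed at the Selenium Grid hub, while the presenter joined manually from a browser. The invocation was roughly as follows; the flag names are from scripts/malleus.sh as I remember them and may have changed, so check the script in your checkout:

./scripts/malleus.sh \
    --conferences=1 \
    --participants=100 \
    --senders=0 \
    --duration=300 \
    --room-name-prefix=loadtest \
    --hub-url=http://<grid-hub>:4444/wd/hub \
    --instance-url=https://<your-jitsi-domain>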

Our implementation was able to reach 250+ users in a conference room by running Octo with SplitBridgeSelectionStrategy and channelLastN = 6; our bridges in AWS are c5.metal instances. It seems the issue we’re facing in getting past 300 is client-side CPU and bandwidth limits.

Hello @Anthony_Garcia ,

This is cool. I’m curious to know more about how you achieved this, and about the multi-JVB setup too. And where is this “SplitBridgeSelectionStrategy” configured?

Regards,