Who is the original author? https://jitsi.org/jitsi-videobridge-performance-evaluation/
The official Jitsi team, I guess.
Yes, this is an interesting study. I am glad you cited it. But I would like to keep this topic focused on summaries of people’s experience with the number of simultaneous sessions on a variety of hosting provider / # CPUs / RAM combinations, as requested in the original post. Thank you.
I am successfully hosting ~23 simultaneous users in one room on Hetzner using a server with 4 physical cores with HT and 64 GB of RAM. RAM usage never came close to the maximum as far as I am aware. Participants did not use video during that test. I’ve also run 10 simultaneous users with video in one room on similar hardware. Bandwidth was never a problem on the server. Maximum CPU usage never went above 190% (on Linux, where the maximum would be 800%).
The CPU was an older but highly clocked Xeon/i5 (judging by its core clock compared to older Xeons with 8+ cores).
Jitsi Meet room size seems to be more limited by the client performance, as the many WebRTC streams can be quite taxing to decode.
As we do not track our users, I do not know if there were any other rooms running while the big rooms were active.
I am trialing a cloud SSD server with a 20 GB disk, 1000 GB of bandwidth, 1 CPU (core count unspecified) and 2 GB of RAM. Yesterday it worked acceptably with 4 people. When we got up to 5, the resolution dropped, and with 6 the person on a Chromebook dropped off. We were using Firefox and Chromium on Linux, Windows and Android devices. Safari on a Mac could not join, and that user switched to Windows 10 and Firefox.
After the event, I discovered that CPU usage peaked at 99%.
I am now investigating a dedicated server as below and will report back.
Intel Atom C2750, 8 cores
Single CPU, entry-level budget server
8 x 2.4 GHz CPU cores
500 GB (HDD) hard drive
We are running our own instance of jitsi on a dedicated server at OVH
Intel Xeon E3-1270, 8 cores / 16 threads, running at stable speeds over 4 GHz (performance governor enabled in Linux)
32 GB of RAM
SSD boot disk + 2 TB storage
1000 Mbps down / 500 Mbps up Internet link
The setup is configured to always aim at the 1080p resolution by default.
P2P is disabled by default for reasons I won’t detail here (so with 2 participants, everything still runs through the server).
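(For anyone who wants to replicate that part: both behaviours are normally controlled from /etc/jitsi/meet/meet.yourdomain.com-config.js. A minimal sketch, assuming a recent-ish jitsi-meet package; this is not the poster’s exact config, and option names can shift between versions, so check your own config.js.)

```javascript
// /etc/jitsi/meet/meet.yourdomain.com-config.js  (sketch, not the exact config used here)
var config = {
    // Prefer 1080p by default
    resolution: 1080,
    constraints: {
        video: {
            height: { ideal: 1080, max: 1080, min: 240 }
        }
    },
    // Keep all traffic on the videobridge, even with only 2 participants
    p2p: {
        enabled: false
    }
};
```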
Tested so far:
We had a few meetings with around 100 participants and the server was barely working… we had a 20% load on one core while all the others stayed below 2%, with occasional peaks at 8% on a single core.
We are going to have a really crazy test next week, as we anticipate 2500+ participants at an event… I am crossing my fingers that the server will hold (and am trying to devise a realistic test we can run beforehand to make sure!)
500 Mbps of upload only covers about 100 participants receiving HD streams (figure roughly 5 Mbps per 1080p stream), very roughly…
Hmm… It’s time for the F word then!
How did that go?
Summary so far: From the reports above, here’s a rough summary of the number of simultaneous clients a Jitsi server can support:
Garden-variety VPS servers or Docker containers: three to maybe a half-dozen simultaneous clients.
Big servers (say, 4 cores, 64 GB RAM): 20-30 simultaneous clients.
Additional Videobridges: Lots and lots of clients… (see the next item)
Bandwidth is not usually a problem.
Clients may matter. Some people report that Firefox slows things down. This was true in April; is it true today?
Request: Please use the format below if you’re going to report your experience. To do this, simply select the text below and click “Quote”
I am successfully hosting X simultaneous users on hosting provider using a server with Y cores and Z Gbytes of RAM. Add other qualifying information, like estimates of bandwidth consumption, the point where it gets overloaded, or whether different clients matter…
Anyone else want to chime in? Thanks.
We ended up firing up 6 extra videobridges (4 cores, 16 GB RAM, 5 Gbps Internet) on temporary VPS servers for the event.
When we “opened the gates” at 12:00, 1,000 users connected in the first five minutes, putting a huge load on the main Jitsi server, but it worked flawlessly… Overall, we had 1,700 users communicating and jumping from room to room at all times; the event lasted 5 hours.
In the end, a couple dozen users experienced issues with the platform, and we had live chat support set up to help them… Most of the issues were related to hardware, software or settings problems on the user’s side. The rest were due to issues we had never encountered before and could not identify (but given that around 1,600 users connected without issue, they were most certainly user-side as well).
Awesome! I can only imagine the nervous tension…
Did you have time to look at the bandwidth vs. video quality effectively delivered to clients?
We did not have time to set up a complete logging solution, but as far as I can tell, everyone had the optimal experience (our server is set to default to 1080p), and bandwidth was well under the maximum available on all 7 servers.
Here is a snapshot of the monitoring and control rig!
Fantastic! Nice test and very valuable information, Normand.
So… 8 cores at 4 GHz on the main server plus 4 cores on each of 6 extra VPSs = 32 cores at work for 1,700 people at 1080p resolution, or whatever their connections could handle, yes? A few questions:
Did you have last_n enabled, and for how many?
Did you use Octo?
Did you happen to notice the memory usage on the main server and VPS’s?
Any other configuration settings/optimizations you’d care to mention?
The main machine is a dedicated server; this way we have more control over resources and Hyper-Threading is available.
As for the options you are referring to, I would like to know what they are!
last_n? What does it do? Same for Octo?
The only things I did to help were enabling epoll and making sure the file handles were set to 65000 or more, following instructions from an obscure source that I can’t find anymore…
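(For anyone hunting for that advice: the commonly recommended way to raise those limits on a systemd-based install is roughly the following. This is a sketch of the usual guidance, not necessarily the exact settings used here.)

```ini
# /etc/systemd/system.conf - limits commonly recommended for jitsi-videobridge under load
DefaultLimitNOFILE=65000
DefaultLimitNPROC=65000
DefaultTasksMax=65000

# Prosody's epoll backend is typically enabled in /etc/prosody/prosody.cfg.lua with:
#   network_backend = "epoll"
# Apply with a reboot (or a systemd re-exec) and a restart of the Jitsi services.
```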
As for resources, the main machine had low RAM usage but one CPU core was hit hard… Probably Jicofo… I don’t think it is much optimized for parallel processing. I/O was also hit hard… The JVB machines were basically cruising all the way, getting roughly equal loads and barely working except for network activity. Users reported a “smooth ride” in the early feedback we received.
I have to pull all the Jicofo and atop logs to parse them into charts and make sense of it all next week… I will report on the results. (At least I had that running!)
Not sure about that. What is certain is that Prosody can’t use more than one core. There was a post about a setup limited by exactly that, where Prosody was maxing out one core.
last_n sets the number of video streams the videobridge sends to each client (endpoint). If there are more participants than that, the thumbnail or tile for participant n + 1 shows their profile picture or letter instead. It cuts down on outgoing bandwidth for the server, and on incoming bandwidth and CPU load for the client. It’s set in the /etc/jitsi/meet/meet.yourdomain.com-config.js file:
// Default value for the channel “last N” attribute. -1 for unlimited.
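The actual option under that comment is channelLastN; a sketch (5 is just an example cap, not a recommendation):

```javascript
channelLastN: 5,   // send each client at most 5 video streams; -1 for unlimited
```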
My understanding of Octo (someone correct me if I’m wrong) is that it’s usually used to direct video between the client and the geographically nearest videobridge, but it also distributes the load of each room among the videobridges. Normally, if you have a room with 20 people and a room with 5, each room sits on a single videobridge. With Octo, the 25 will be spread across the videobridges evenly within a geographic area.
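If it helps, enabling Octo in the packages of that era was done (as far as I recall; treat these property names as assumptions and verify against the current handbook) via each bridge’s sip-communicator.properties plus a region-based bridge-selection strategy in Jicofo:

```properties
# /etc/jitsi/videobridge/sip-communicator.properties  (on each bridge)
org.jitsi.videobridge.octo.BIND_ADDRESS=<this bridge's private IP>
org.jitsi.videobridge.octo.PUBLIC_ADDRESS=<this bridge's public IP>
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=<region name>

# /etc/jitsi/jicofo/sip-communicator.properties
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy
```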
This is wonderful work.
Is there documentation available to replicate such an implementation?
Thanks and looking forward.
@Normand_Nadon The earlier posters are all correct - this is great work.
But the specs and configuration for a high-performance server are a bit off-topic for this thread (which was to collect summaries of people’s experience.)
Would you consider starting a new topic that talks about your configuration? Subject might be: Configuring a high performance server. You are of course free to summarize your experience and link to that new topic here. (In fact, I would welcome it.)
I bet the moderators on the forum could then transfer the messages to keep this one focused on people’s reports of their experience. In fact, let me do that now: @moderators - would it be possible to transfer all the messages on @Normand_Nadon’s configuration to a new topic? Many thanks.