We are running our own instance of Jitsi on a dedicated server at OVH. Specs:
- Intel Xeon E3-1270, 8 cores / 16 threads, running at stable speeds over 4 GHz (performance governor enabled in Linux)
- 32 GB of RAM
- SSD boot disk + 2 TB storage
- 1000 down / 500 up Internet link
The setup is configured to aim for 1080p resolution by default.
P2P is disabled by default for reasons I won’t detail here (so even with only 2 participants, the call still runs through the server).
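For reference, a sketch of the relevant settings in the Jitsi Meet config file (key names come from the stock config.js; exact defaults and the domain in the path vary by deployment, so treat this as an assumption to verify):

```javascript
// /etc/jitsi/meet/meet.yourdomain.com-config.js (excerpt, hypothetical domain)
var config = {
    // Prefer 1080p capture/send resolution.
    resolution: 1080,
    constraints: {
        video: {
            height: { ideal: 1080, max: 1080, min: 240 }
        }
    },
    // Keep every call on the videobridge, even 1:1 calls.
    p2p: {
        enabled: false
    }
};
```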
Tested so far:
We have had a few meetings with around 100 participants and the server was barely breaking a sweat… around 20% load on one core while all others stayed below 2%, with occasional peaks of 8% on a single core.
We are going to have a real crazy test next week, as we anticipate 2500+ participants at an event… I am crossing my fingers that the server will hold (and am trying to devise a realistic load test we could run beforehand to make sure!)
Clients may matter. Some people report that Firefox slows things down. This was true in April; is it still true today?
Request: Please use the format below if you’re going to report your experience. To do this, simply select the text below and click “Quote”
I am successfully hosting X simultaneous users on hosting provider using a server with Y cores and Z Gbytes of RAM. Add other qualifying information, like estimates of bandwidth consumption, the point where it gets overloaded, or whether different clients matter…
We ended up firing up 6 extra videobridges (4 cores, 16 GB RAM, 5 Gbps Internet) on temporary VPS servers for the event.
When we “opened the gates” at 12:00, 1,000 users connected in the first five minutes, putting a huge load on the main Jitsi server, but it worked flawlessly… Overall, we had 1,700 users communicating and jumping from room to room at all times; the event lasted 5 hours.
In the end, a couple dozen users experienced issues with the platform, and we had live chat support set up to help them… Most of the issues were related to hardware, software or settings on the user’s side. The rest were due to issues we had never encountered before and could not identify (but given that around 1,600 users connected without issue, these were most likely user-side as well).
We did not have time to set up a complete logging solution, but as far as I can tell, everyone had the optimal experience (our server defaults to 1080p), and bandwidth was well under the maximum available on all 7 servers.
Here is a snapshot of the monitoring and control rig!
The main machine is a dedicated server; this way we have more control over resources, and hyperthreading is available.
For the options you are referring to, I would like to know what they are!
last_n? What does it do? Same for Octo?
The only things I did to help were enabling epoll and making sure that file handles were set to 65000 or more, following instructions from an obscure source that I can’t find anymore…
As for resources, the main machine had low RAM usage but one CPU core was hit hard… probably Jicofo… I don’t think it is much optimized for parallel processing. I/O was also hit hard… The JVB machines were basically cruising all the way: getting equal loads for the most part and barely working except for network activity. Users reported a “smooth ride” in the early feedback we received.
I have to pull all the Jicofo and atop logs to parse them into charts and make sense of it all next week… I will report on the results. (At least I had that running!)
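For anyone looking to reproduce the file-handle change, a minimal sketch of how it is commonly done on systemd-based installs (the 65000 figure matches the post above; verify the exact keys against the current Jitsi scaling docs before relying on them):

```shell
# /etc/systemd/system.conf (then run `systemctl daemon-reexec`
# and restart jitsi-videobridge2 for the limits to take effect)
DefaultLimitNOFILE=65000
DefaultLimitNPROC=65000
DefaultTasksMax=65000

# Check the limit actually applied to the running JVB Java process:
#   grep "open files" /proc/$(pidof -s java)/limits
```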
last-n sets the number of video streams sent from the videobridge to each client (endpoint). Beyond that number, the thumbnail or tile for participant n + 1 shows their profile picture or initial instead. It cuts down on outgoing bandwidth for the server, and on incoming bandwidth and CPU load for the client. It’s set in the /etc/jitsi/meet/meet.yourdomain.com-config.js file:

// Default value for the channel “last N” attribute. -1 for unlimited.
channelLastN: -1,

https://github.com/jitsi/jitsi-videobridge/blob/master/doc/last-n.md
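To get a feel for why last-n matters at scale, here is a rough back-of-the-envelope sketch. The 2.5 Mbps per-stream figure is an assumption, and this deliberately ignores simulcast (which lets the bridge forward lower-bitrate layers for thumbnails), so treat it as an upper bound, not a measurement:

```python
def bridge_egress_mbps(participants, last_n, stream_mbps=2.5):
    """Rough upper bound on videobridge outgoing bandwidth for one room.

    Each participant receives at most `last_n` video streams
    (or everyone else's, whichever is smaller); -1 means unlimited.
    """
    others = participants - 1
    streams_per_client = others if last_n < 0 else min(last_n, others)
    return participants * streams_per_client * stream_mbps

# A 20-person room: unlimited vs. channelLastN = 4
unlimited = bridge_egress_mbps(20, -1)  # 20 * 19 * 2.5 = 950 Mbps
capped = bridge_egress_mbps(20, 4)      # 20 * 4 * 2.5 = 200 Mbps
```

Even with generous rounding, capping the visible streams cuts the bridge’s egress by almost a factor of five in this scenario.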
My understanding of Octo (someone correct me if I’m wrong) is that it’s usually used to direct video between the client and the geographically nearest videobridge, but it also distributes the load of each room among the videobridges. Normally, if you have a room with 20 people and a room with 5, each room will be on a single videobridge. With Octo, the 25 will be spread evenly across the videobridges within a geographic area.
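A minimal sketch of what enabling Octo looked like in the classic properties files (key names taken from the Octo documentation of that era; newer releases have moved this into jvb.conf, and the addresses and region name below are made-up placeholders, so check the current docs):

```ini
# /etc/jitsi/videobridge/sip-communicator.properties (on each bridge)
org.jitsi.videobridge.octo.BIND_ADDRESS=10.0.0.1   # this bridge's private IP (placeholder)
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=eu-west               # placeholder region name

# /etc/jitsi/jicofo/sip-communicator.properties
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy
```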
@Normand_Nadon The earlier posters are all correct - this is great work.
But the specs and configuration for a high-performance server are a bit off-topic for this thread (which was meant to collect summaries of people’s experience).
Would you consider starting a new topic that talks about your configuration? Subject might be: Configuring a high performance server. You are of course free to summarize your experience and link to that new topic here. (In fact, I would welcome it.)
I bet the moderators on the forum could then transfer the messages to keep this one focused on people’s reports of their experience. In fact, let me do that now: @moderators - would it be possible to transfer all the messages on @Normand_Nadon’s configuration to a new topic? Many thanks.
Sorry, I thought it was on topic…
In that sense, the main instance (without additional JVBs) handles 100+ users at once without issue.
We did switch from OVH to AWS for our main server… We lost some machine performance and access to CPU governors, but the Internet bandwidth and ease of configuration are a lot better… Also, the load balancing done by AWS sometimes causes temporary glitches on the feeds (when the server scales up or down as a threshold is met)… This is what you get for using a juggernaut like Amazon as your provider: you lose part of the control!
We might give Linode a try in the near future too… (they seem to be in between AWS and OVH as a provider)
We hosted 3 simultaneous users on DigitalOcean using a server with 1 core and 2 Gbytes of RAM, but the resolution was low (180p), similar to this post. When the 4th, 5th and 6th users joined, we experienced a strange ghosting issue, black video from up to 2 participants, and video freezing. The moderator was using Firefox on Windows, and they ended up losing connection, on the same Wi-Fi from which I was connecting fine (Chromium 83 on Ubuntu).
The server’s RAM utilization never exceeded 700 MB, and its CPU utilization never exceeded 30%. The “public bandwidth” as recorded by DigitalOcean was 7.5–12 Mbps.
Suggestions on improving the video quality and stability would be welcome (as long as they keep the thread on-topic, otherwise PMs?).
Are you certain that your webcam is capable of more resolution?
Also, 1 core and 2 GB is too low. The recommended base setup is 4 cores and 8 GB for the JVB to work at its best. Low CPU use might have been a result of the JVB lowering the quality to keep headroom for processing the video… From our installation I can tell that one core gets hammered hard when “negotiating” user sessions, logins, etc. (Prosody, Jicofo, Jigasi), while the other cores share the load of the video routing (JVB).
Do you have extensive logs to check CPU load, CPU stall time (the time a thread waits before it can run on the CPU), etc.?
Does this server have hyperthreading? If not, that might absolutely explain why CPU usage looked low: without a second hardware thread per core to keep feeding it data while it works on another task, the core sits idle more often. It tends to give the illusion of low load, when what you are actually seeing is low efficiency!
A vCPU is a unit of processing power corresponding to a single hyperthread on a processor core. A modern, multicore processor has several vCPUs.
[…] you can choose between shared CPU and dedicated CPU plans for dedicated vCPU.
Dedicated CPU Droplets have guaranteed access to the full hyperthread at all times. With shared CPU Droplets, the hyperthread allocated to the Droplet may be shared between multiple other Droplets. […]
However, the amount of CPU cycles available for the hypervisor to allocate depends on the workload of the other Droplets sharing that host. If these neighboring Droplets have high load, a Droplet could receive fractions of hyperthreads instead of dedicated access to the underlying physical processors.
We used a shared CPU at the time. A dedicated CPU wouldn’t make sense, because we host meetings rather rarely, so an elastic CPU provider would be far more economical (recommendations welcome).