Recommended Server Specs for 2020?

I am trialing a cloud SSD server with a 20 GB disk, 1000 GB of bandwidth, 1 CPU (core count unspecified) and 2 GB of RAM. Yesterday it worked acceptably with 4 people. When we got up to 5 the resolution dropped, and at 6 the person on a Chromebook dropped off. We were using Firefox and Chromium on Linux, Windows and Android devices. Safari on a Mac could not join, so that user switched to Windows 10 and Firefox.

After the event, I discovered that CPU usage peaked at 99%.

I am now investigating a dedicated server as below and will report back.
Intel Atom C2750, 8 cores
Single CPU, entry-level budget server
From £20/mo
8 × 2.4 GHz CPU cores
8 GB memory
500 GB (HDD) hard drive
10 TB bandwidth

We are running our own instance of Jitsi on a dedicated server at OVH.
Specs:
Intel Xeon E3-1270, 8 cores / 16 threads, running at stable speeds over 4 GHz (performance governor enabled in Linux)
32 GB of RAM
SSD boot disk + 2 TB storage
1000 Mbps down / 500 Mbps up Internet link

The setup is configured to always aim for 1080p resolution by default.
P2P is disabled by default, for reasons I won’t detail here (so even with 2 participants, it still runs through the server).
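For reference, defaulting to 1080p and disabling P2P is typically expressed along these lines in the jitsi-meet config; this is a sketch, not necessarily the exact config used here, and key names vary by jitsi-meet version:

```javascript
// /etc/jitsi/meet/meet.yourdomain.com-config.js (fragment)
resolution: 1080,
constraints: {
    video: {
        height: { ideal: 1080, max: 1080, min: 240 }
    }
},
p2p: {
    enabled: false   // always route through the JVB, even with only 2 participants
},
```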

Tested so far:
We had a few meetings with around 100 participants and the server was barely working… we had a 20% load on one core while all the others stayed below 2%, with occasional peaks at 8% on a single core.

We are going to have a really crazy test next week, as we anticipate 2,500+ participants at an event… I am crossing my fingers that the server will hold (and am trying to devise a realistic test we could run beforehand to make sure!)


500 Mbps of upload is only enough for roughly 100 participants receiving HD streams, very roughly…
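As a back-of-envelope check, assuming roughly 5 Mbps of server uplink per participant receiving an HD stream (an assumed figure; real usage depends on codec, simulcast layers and last-n):

```shell
# Crude capacity estimate: uplink divided by per-viewer HD bitrate.
uplink_mbps=500
per_stream_mbps=5   # assumed average per HD viewer
echo "$((uplink_mbps / per_stream_mbps)) concurrent HD viewers"   # → 100 concurrent HD viewers
```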

Hmm… It’s time for the F word then! :frowning:

How did that go?


Summary so far: From the reports above, here’s a rough summary of the number of simultaneous clients a Jitsi server can support:

  • Garden-variety VPS servers or Docker containers: three to maybe half a dozen simultaneous clients.

  • Big servers (say, 4 cores, 64 GB RAM): 20-30 simultaneous clients.

  • Additional Videobridges: Lots and lots of clients… (see the next item)

  • Bandwidth is not usually a problem.

  • Clients may matter. Some people report that Firefox slows things down. This was true in April; is it still true today?

Request: Please use the format below if you’re going to report your experience. To do this, simply select the text below and click “Quote”

I am successfully hosting X simultaneous users on hosting provider using a server with Y cores and Z GB of RAM. Add other qualifying information, like estimates of bandwidth consumption, the point where it gets overloaded, or whether different clients matter…

Anyone else want to chime in? Thanks.


We ended up firing up 6 extra videobridges (4 cores, 16 GB RAM, 5 Gbps Internet) on temporary VPS servers for the event.
When we “opened the gates” at 12:00, 1,000 users connected in the first five minutes, putting a huge load on the main Jitsi server, but it worked flawlessly… Overall we had 1,700 users communicating and jumping from room to room at all times; the event lasted 5 hours.
In the end, a couple dozen users experienced issues with the platform, and we had live chat support set up to help them… Most of the issues were related to hardware, software or settings on the user’s side. The rest were issues we had never encountered before and could not identify (but given that around 1,600 users connected without issue, they were most certainly user-side too).

Awesome! I can only imagine the nervous tension…
Did you have time to look at the bandwidth versus the video quality effectively delivered to clients?

We did not have time to set up a complete logging solution, but as far as I can tell, everyone had the optimal experience (our server is set to default to 1080p), and bandwidth was well under the maximum available on all 7 servers.
Here is a snapshot of the monitoring and control rig!


Fantastic! Nice test and very valuable information, Normand.

So… 8 cores at 4 GHz on the main server, plus 4 cores on each of 6 extra VPSes = 32 cores at work for 1,700 people at 1080p resolution, or whatever their connection could handle, yes? A few questions:

Did you have last_n enabled, and for how many?

Did you use Octo?

Did you happen to notice the memory usage on the main server and VPS’s?

Thanks,
Sam

Any other configuration settings/optimizations you’d care to mention?

The main machine is a dedicated server; this way we have more control over resources, and Hyper-Threading is available.

As for the options you are referring to, I would like to know what they are!

last_n? What does it do? Same question for Octo.

The only things I did to help were enabling epoll and making sure that the file-handle limit was set to 65000 or more, following instructions from an obscure source that I can’t find anymore…
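The commonly circulated Jitsi scaling advice amounts to raising the systemd defaults so the bridge can hold enough file descriptors; a sketch (the values below are the usually quoted ones, adjust to taste):

```ini
# /etc/systemd/system.conf
# (then run `systemctl daemon-reload` and restart the Jitsi services)
DefaultLimitNOFILE=65000
DefaultLimitNPROC=65000
DefaultTasksMax=65000
```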

As for resources, the main machine had low RAM usage but one CPU core was hit hard… probably Jicofo… I don’t think it is much optimised for parallel processing. I/O was also hit hard. The JVB machines were basically cruising all the way, getting equal loads for the most part and barely working except for network activity. Users reported a “smooth ride” in the early feedback we received.

I have to pull all the Jicofo and atop logs and parse them into charts to make sense of it all next week… I will report the results. (At least I had that running!)

Not sure about that. What is certain is that Prosody can’t use more than one core. There was a post about a setup limited by exactly that, with Prosody maxing out a single core.

last-n sets the number of video streams sent from the videobridge to each client (endpoint). If a room has more participants than that, the thumbnail or tile for participant n + 1 onward shows their profile picture or initial instead of video. It cuts down the outgoing bandwidth for the server, and the incoming bandwidth and CPU load for the client. It’s set in the /etc/jitsi/meet/meet.yourdomain.com-config.js file:
// Default value for the channel “last N” attribute. -1 for unlimited.
channelLastN: -1,
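For example, to cap it at the four most recently active speakers (4 is an arbitrary illustrative value; tune it to your rooms):

```javascript
// /etc/jitsi/meet/meet.yourdomain.com-config.js
// The bridge forwards at most 4 video streams to each client;
// everyone else shows as an avatar tile.
channelLastN: 4,
```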

My understanding of Octo (someone correct me if I’m wrong) is that it’s usually used to direct video between the client and the geographically nearest videobridge, but it also distributes the load of each room among the videobridges. Normally, if you have a room with 20 people and a room with 5, each room lives on a single videobridge. With Octo, those 25 participants are spread across the videobridges evenly within a geographic area.
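In the 2020-era packages, enabling Octo was roughly a matter of picking a region-aware bridge selection strategy in Jicofo and giving each bridge a region. A sketch from memory (property names have changed across versions, so check the handbook for your release; the addresses and region name below are just example values):

```ini
# Jicofo: /etc/jitsi/jicofo/sip-communicator.properties
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy

# Each JVB: /etc/jitsi/videobridge/sip-communicator.properties
# (BIND_ADDRESS is the private address of this bridge; example value)
org.jitsi.videobridge.octo.BIND_ADDRESS=10.0.0.2
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=eu-west-1
```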



Hello @Normand_Nadon

This is wonderful work.
Is there documentation available to replicate such an implementation?
Thanks and looking forward.

Sudhir Gandotra
+91-93124-65666

@Normand_Nadon The earlier posters are all correct - this is great work.

But the specs and configuration for a high-performance server are a bit off-topic for this thread (which was to collect summaries of people’s experience.)

Would you consider starting a new topic that talks about your configuration? The subject might be: Configuring a high-performance server. You are of course free to summarize your experience and link to that new topic here. (In fact, I would welcome it.)

I bet the moderators on the forum could then transfer the messages to keep this one focused on people’s reports of their experience. In fact, let me do that now: @moderators - would it be possible to transfer all the messages on @Normand_Nadon’s configuration to a new topic? Many thanks.

Sorry, I thought it was on topic…
In that sense: the main instance (without additional JVBs) handles 100+ users at once without issue.

We did switch from OVH to AWS for our main server… We lost some raw machine performance and access to CPU governors, but the Internet bandwidth and ease of configuration are a lot better. Also, the load balancing done by AWS sometimes causes temporary glitches in the feeds (when the server scales up or down as a threshold is met)… That’s what you get for using a juggernaut like Amazon as your provider: you lose part of the control!

We might have a go with Linode in the near future too… (it seems to sit in between AWS and OVH as a provider).


We hosted 3 simultaneous users on DigitalOcean using a server with 1 core and 2 GB of RAM, but the resolution was low (180p), similar to this post. When the 4th, 5th and 6th users joined, we experienced a strange ghosting issue, black video from up to 2 participants, and video freezing. The moderator was using Firefox on Windows and ended up losing their connection, on the same Wi-Fi from which I was connecting fine (Chromium 83 on Ubuntu).

At all times the server’s RAM utilization never exceeded 700 MB and the CPU utilization never exceeded 30%. The “public bandwidth” as recorded by DigitalOcean was 7.5-12 Mbps.

Suggestions on improving the video quality and stability would be welcome (as long as they keep the thread on-topic, otherwise PMs?).

Are you certain that your webcam is capable of a higher resolution?

Also, 1 core and 2 GB is too low. The recommended base setup is 4 cores and 8 GB for the JVB to work at its best. The low CPU use might have been a result of the JVB lowering the quality to keep headroom for processing the video… From our installation I can tell that one core gets hammered hard when “negotiating” user sessions, logins, etc. (Prosody, Jicofo, Jigasi), while the other cores share the video forwarding load (JVB).

Do you have extensive logs to check CPU load, CPU stall time (the time a thread waits before it can get onto the CPU), etc.?

Does this server have Hyper-Threading? If not, that might well explain why CPU usage looked low: there is no second hardware thread to keep the core fed while it works on another task. It tends to give the illusion of low load when what you are really seeing is low efficiency!
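A quick way to check, assuming a Linux box with util-linux installed:

```shell
# "Thread(s) per core: 2" means SMT/Hyper-Threading is active;
# "Thread(s) per core: 1" means each core runs a single hardware thread.
lscpu | grep -E '^(Thread|Core)\(s\)'
```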

what’s the output of

sudo lshw -C network -sanitize | grep configuration | grep -v -E "veth|tun|bridge"

(if necessary run sudo apt install lshw)

and (while you are hosting a session with 3 users)

sudo sar -u 2 20

(if necessary sudo apt install sysstat)