Recommended Server Specs for 2020?

We did not have the time to set up a complete logging solution, but as far as I can tell, everyone had an optimal experience (our server is set to default to 1080p), and bandwidth was well under the maximum available on all 7 servers.
Here is a snapshot of the monitoring and control rig!


Fantastic! Nice test and very valuable information, Normand.

So… 8 cores at 4 GHz on the main server, plus 4 cores on each of 6 extra VPSs = 32 cores at work for 1700 people at 1080p resolution, or whatever their connection could handle, yes? A few questions:

Did you have last_n enabled, and for how many?

Did you use Octo?

Did you happen to notice the memory usage on the main server and VPS’s?


Any other configuration settings/optimizations you’d care to mention?

The main machine is a dedicated server; this way we have more control over resources, and Hyperthreading is available.

For the options you are referring to, I would like to know what they are!

last_n? What does it do… Same for Octo?

The only things I did to help were enabling epoll and making sure that file handles were set to 65000 or more, following instructions from an obscure source that I can’t find anymore…
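For anyone wanting to replicate those two tweaks, here is a sketch of what they usually look like on a Debian-style install. The paths and the 65000 figure match what I have seen in the Jitsi scaling notes, but treat them as assumptions, not gospel:

```
-- /etc/prosody/prosody.cfg.lua (Prosody 0.11+): use the epoll backend
network_backend = "epoll"

# /etc/systemd/system.conf: raise file handle / task limits system-wide
DefaultLimitNOFILE=65000
DefaultLimitNPROC=65000
DefaultTasksMax=65000
```

After editing, a `systemctl daemon-reexec` (or a reboot) plus a Prosody restart are needed for the new limits to actually apply.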

As for resources, the main machine had low RAM usage but one CPU core was hit hard… Probably Jicofo… I don’t think it is much optimised for parallel processing. I/O was also hit hard… The JVB machines were basically cruising all the way, getting equal loads for the most part and barely working except for network activity. Users reported a “smooth ride” in the early feedback we received.

I have to pull all the Jicofo and atop logs next week to parse them into charts and make sense of it all… I will report on the results. (At least I had that running!)

Not sure about that. What is certain is that Prosody can’t use more than one core. There was a post about a config limited by that, where Prosody was maxing out one core.

last-n sets the number of video streams sent from the videobridge to the client (endpoint). If there are more participants than that, the thumbnail or tile for participant n + 1 will show their profile picture or letter instead of video. It cuts down on the outgoing bandwidth for the server, and on the incoming bandwidth and CPU load for the client. It’s set in the config file under /etc/jitsi/meet/:
// Default value for the channel “last N” attribute. -1 for unlimited.
channelLastN: -1,
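For example, capping it at 5 (the value 5 is just an illustration; pick whatever your bandwidth budget allows, and note I’m assuming the standard `<your-domain>-config.js` naming for that file):

```
// Send only the 5 most recently active video streams to each endpoint;
// everyone past last-n shows up as an avatar/letter tile instead.
channelLastN: 5,
```

If I remember correctly, a per-meeting override can also be appended to the room URL as `#config.channelLastN=5`, which is handy for testing before committing a server-wide value.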

My understanding of Octo (someone correct me if I’m wrong) is that it’s usually used to direct video between the client and the geographically nearest videobridge, but it also distributes the load of each room among the videobridges. Normally, if you have a room with 20 people and a room with 5, each room will be on its own videobridge. With Octo, the 25 will be spread across the videobridges evenly within a geographic area.


Hello @Normand_Nadon

This is wonderful work.
Is there documentation available to replicate such an implementation?
Thanks and looking forward.

Sudhir Gandotra

@Normand_Nadon The earlier posters are all correct - this is great work.

But the specs and configuration for a high-performance server are a bit off-topic for this thread (which was to collect summaries of people’s experience.)

Would you consider starting a new topic that talks about your configuration? Subject might be: Configuring a high performance server. You are of course free to summarize your experience and link to that new topic here. (In fact, I would welcome it.)

I bet the moderators on the forum could then transfer the messages to keep this one focused on people’s reports of their experience. In fact, let me do that now: @moderators - would it be possible to transfer all the messages on @Normand_Nadon configuration to a new topic? Many thanks.

Sorry, I thought it was on topic…
In that sense, the main instance (without additional JVBs) handles 100+ users at once without issue.

We did switch to AWS instead of OVH for our main server… We lost some machine performance and the access to CPU governors, but the Internet bandwidth and ease of configuration are a lot better… Also, the load balancing done by AWS seems to sometimes cause temporary glitches on the feeds (when the server scales up or down as a threshold is met)… This is what you get for using a juggernaut like Amazon as your provider… You lose part of the control!

We might have a go with Linode in the near future too… (It seems to be an in-between of AWS and OVH as a provider.)


We hosted 3 simultaneous users on DigitalOcean using a server with 1 core and 2 Gbytes of RAM, but the resolution was low (180p), similar to this post. When the 4th, 5th and 6th user joined, we experienced a strange ghosting issue, black video from up to 2 participants, and video freezing. The moderator was using Firefox on Windows, and they ended up losing connection, on the same Wi-Fi from which I was connecting fine (Chromium 83 on Ubuntu).

At all times, the server’s RAM utilization never exceeded 700MB, and the CPU utilization never exceeded 30%. The “public bandwidth” as recorded by DigitalOcean was 7.5 - 12 Mbps.

Suggestions on improving the video quality and stability would be welcome (as long as they keep the thread on-topic, otherwise PMs?).

Are you certain that your webcam is capable of more resolution?

Also, 1 core and 2 GB is too low. The recommended base setup is 4 cores and 8 GB for the JVB to work at its best. Low CPU use might have been a result of the JVB lowering the quality to keep headroom for processing the video… From our installation I can tell that one core gets hammered hard when “negotiating” user sessions, logins, etc. (Prosody, Jicofo, Jigasi), while the other cores share the load for the video routing (JVB).

Do you have extensive logs to check CPU load, CPU stall time (the time it takes before a thread can access the CPU to be processed), etc.?

Does this server have multi-threading (SMT/Hyperthreading)? If not, it might well explain why CPU usage was low: there is no extra hardware thread to pre-fetch and prepare data for the CPU while it is working on another task. That tends to give the illusion of low load, but what you would actually be seeing is low efficiency!

what’s the output of

sudo lshw -C network -sanitize | grep configuration | grep -v -E "veth|tun|bridge"

(if necessary run sudo apt install lshw)

and (while you are hosting a session with 3 users)

sudo sar -u 2 20

(if necessary sudo apt install sysstat)

No, but I could get them if instructed how to.

I tried to find that out from the DigitalOcean documentation, but it’s not very clear. Or did you mean hyper-threading vs. the software concept of multi-threading? The only relevant result for “threading” in the DO docs is here:

  • A vCPU is a unit of processing power corresponding to a single hyperthread on a processor core. A modern, multicore processor has several vCPUs.

[…] you can choose between shared CPU and dedicated CPU plans for dedicated vCPU.

Dedicated CPU Droplets have guaranteed access to the full hyperthread at all times. With shared CPU Droplets, the hyperthread allocated to the Droplet may be shared between multiple other Droplets. […]
However, the amount of CPU cycles available for the hypervisor to allocate depends on the workload of the other Droplets sharing that host. If these neighboring Droplets have high load, a Droplet could receive fractions of hyperthreads instead of dedicated access to the underlying physical processors.

We used a shared CPU at the time. A Dedicated CPU wouldn’t make sense, because we host meetings rather rarely, so an elastic CPU provider would be far more economical (recommendations welcome).

configuration: driver=virtio-pci latency=0
   configuration: autonegotiation=off broadcast=yes driver=virtio_net driverversion=1.0.0 ip=[REMOVED] link=yes multicast=yes
configuration: driver=virtio-pci latency=0
   configuration: autonegotiation=off broadcast=yes driver=virtio_net driverversion=1.0.0 ip=[REMOVED] link=yes multicast=yes

Will do during our next discussion.

!!! Well, this explains a lot. It means that your CPU could at times be 0.9 of a CPU, or even 0.7 or 0.5. So at times when your server has to send a packet, it simply can’t.

For a web server it’s not vital, it means that the page refresh takes longer.

A video server is not a web server; the resource requirements are higher, a lot higher, because it’s a real-time server. If packets are not received or sent in a timely manner, the sound is garbled and the image distorted.

I was asking about the network because even with 2 dedicated CPUs, an advanced network controller is vital for good performance. If you don’t even have ONE dedicated CPU, there is no need to search further: you simply cannot know what resources you will actually get.

You can resize the droplet since you host meetings rarely: upgrade to a more powerful droplet (4 dedicated CPUs / 8 GB RAM) before a meeting and downgrade after.

I don’t know the cost of your instance, but we tried several solutions in the course of the project and here are the results:

Amazon Web Services, 4 vCPUs, 8 GB RAM:
The server was not responsive enough; we switched because of quality issues and jitter. (A couple of $ per month, I don’t know exactly.)

OVH Canada, dedicated server, 8 cores with Hyperthreading (so 16 threads total), 32 GB RAM:
Ran like a champ; the only issue was that this instance did not have enough public bandwidth and could not be upgraded ($76 Canadian per month).

We wanted to switch to another service within OVH, and time was running out before the big event. There was a bug and they could not provision the new server for us. It was the middle of the night, and the truth is, they answer calls at night but don’t have the staff to really help until the morning… We had to act fast because the event was only a few hours away.

AWS, cloud computing with guaranteed resources (can’t remember the exact name of the service):
4 cores with Hyperthreading (virtualized?), 8 GB RAM.
Runs well; the scaling is not as smooth as with the dedicated instance (meaning that when cores reach a threshold, AWS scales them to fit our load, which causes issues with the video for a few seconds). The pricing was around $56 Canadian per month.
Of course, the major selling point for AWS is the easy integration of multiple instances on a vRack with containers and everything… But still, we are looking around for more local solutions because… Amazon, you know! :stuck_out_tongue:

Edit: I can’t tell for sure, but I do believe that SMT/Hyperthreading benefits this kind of use case a lot, as it optimises the data processing.

That’s one hacky workaround for elastic computing. In that vein, we could destroy the droplet and restore it from a snapshot a few minutes before the meeting; even cheaper, though the IP probably won’t be preserved, so the DNS records would be off (which could be worked around with a static IP, etc.). The problem with resizes is that the desired size may not be available in the droplet’s region. For example, c-2 seems to be available only in sfo2, not in sfo1.
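For the resize route, the whole upgrade/downgrade cycle can be scripted with DigitalOcean’s `doctl` CLI. A sketch (the droplet ID and size slugs here are placeholders, and note that only a CPU/RAM resize, i.e. one without `--resize-disk`, is reversible):

```
# Placeholders: 12345678 is the droplet ID, c-4 is an example size slug.
doctl compute droplet-action power-off 12345678 --wait
# CPU/RAM-only resize (omit --resize-disk so it stays reversible)
doctl compute droplet-action resize 12345678 --size c-4 --wait
doctl compute droplet-action power-on 12345678 --wait
# ...host the meeting, then repeat with the smaller slug to downgrade...
```

An in-place resize keeps the droplet’s IP, which sidesteps the DNS problem of the snapshot-and-restore approach.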

You’re right - the first thing I did when I started evaluating Jitsi meet was to try and figure out the system requirements. None were listed in the setup guide.

The first thing I did when I joined the forum was to ask about the system requirements and mention that glaring omission from the setup guide:

No clear answer either. CPU requirements? Crickets.

I’d say that the Xeon quad core with 1 Gbit/s bandwidth in this example qualifies as ‘moderate’, yet it looks like an absolute monster of a system compared to your config. It was cited at $100 while your config costs about 10 times less, I guess. Possibly your quality requirements were higher than the paltry 500 kbit/s of bandwidth per user that was set in that test.

Our quality requirements are nothing out of the ordinary - 720p would be fine. We get only 180p, and the bandwidth seems irrelevant - please see this post, which has the same problem over gigabit connections and monster configs:

720p means 2.5 Mbit/s - five times more.
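The arithmetic behind that comparison, plus its server-side implication, sketched in Python (the per-stream rates are the figures quoted in this thread, and the egress formula is a rough model of last-n behaviour, not an official Jitsi number):

```python
# Per-user video rates quoted in this thread (kbit/s)
RATE_180P = 500
RATE_720P = 2500

# 720p at ~2.5 Mbit/s is five times the 500 kbit/s test budget
assert RATE_720P / RATE_180P == 5.0

# Rough server egress model with last-n: each of the n participants
# receives up to last_n other streams, so egress scales ~ n * last_n.
def egress_mbps(n_users: int, last_n: int, kbps_per_stream: int) -> float:
    streams_each = min(last_n, n_users - 1)
    return n_users * streams_each * kbps_per_stream / 1000

# 6 users, no effective cap (everyone sees the other 5), all at 720p:
print(egress_mbps(6, 5, RATE_720P))  # -> 75.0 (Mbit/s)
```

This is why bumping the default resolution multiplies the bandwidth bill: the per-stream rate is paid once per viewer per visible tile.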

It does not seem to be the same problem at all. Your problem: dropping connections, freezes, low quality (overload, I think). Their problem: low quality (a config problem, I think).