Recommended Server Specs for 2020?

Hello @Normand_Nadon

This is wonderful work.
Is there documentation available to replicate such an implementation?
Thanks, and looking forward to it.

Sudhir Gandotra
+91-93124-65666

@Normand_Nadon The earlier posters are all correct - this is great work.

But the specs and configuration for a high-performance server are a bit off-topic for this thread (which was meant to collect summaries of people’s experience).

Would you consider starting a new topic about your configuration? The subject might be something like “Configuring a high-performance server”. You are of course free to summarize your experience here and link to that new topic. (In fact, I would welcome it.)

I bet the moderators on the forum could then transfer the messages to keep this one focused on people’s reports of their experience. In fact, let me do that now: @moderators - would it be possible to transfer all the messages about @Normand_Nadon’s configuration to a new topic? Many thanks.

Sorry, I thought it was on topic…
In that sense, the main instance (without additional JVBs) handles 100+ users at once without issue.

We did switch from OVH to AWS for our main server… We lost some machine performance and access to CPU governors, but the Internet bandwidth and ease of configuration are a lot better… Also, the load balancing done by AWS sometimes causes temporary glitches on the feeds (when the server scales up or down as a threshold is met)… This is what you get for using a juggernaut like Amazon as your provider… You lose part of the control!

We might have a go with Linode in the near future too… (it seems to sit in between AWS and OVH as a provider)


We hosted 3 simultaneous users on DigitalOcean using a server with 1 core and 2 GB of RAM, but the resolution was low (180p), similar to this post. When the 4th, 5th and 6th users joined, we experienced a strange ghosting issue, black video from up to 2 participants, and video freezes. The moderator was using Firefox on Windows, and they ended up losing the connection on the same Wi-Fi from which I was connecting fine (Chromium 83 on Ubuntu).

The server’s RAM utilization never exceeded 700 MB, and its CPU utilization never exceeded 30%. The “public bandwidth” as recorded by DigitalOcean was 7.5-12 Mbps.

Suggestions on improving the video quality and stability would be welcome (as long as they keep the thread on-topic, otherwise PMs?).

Are you certain that your webcam is capable of a higher resolution?

Also, 1 core and 2 GB is too low. The recommended base setup is 4 cores and 8 GB for the JVB to work at its best. The low CPU use might be a result of the JVB lowering the quality to keep headroom for processing the video… From our installation I can tell that one core gets hammered hard when “negotiating” user sessions, logins, etc. (Prosody, Jicofo, Jigasi), while the other cores share the load of the video compositing (JVB).

Do you have detailed logs to check CPU load, CPU stall time (the time a thread waits before it can get onto the CPU to be processed), etc.?
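Something like pidstat would do; it ships with the sysstat package (the same one that provides the sar command further down). A sketch, assuming the usual Debian process names:

sudo pidstat -u -p "$(pgrep -d, -f 'prosody|jicofo|videobridge')" 2 10

That samples per-process CPU usage every 2 seconds, 10 times, so you can see which component is hammering which core.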

Does this server have multi-threading? If not, that might absolutely explain why CPU usage was low, as there is no extra pre-fetcher to chew the data for the CPU while it is working on another task. It tends to give the illusion of low load, but what you would actually be seeing is low efficiency!
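You can check from the shell (lscpu comes with util-linux, so it should already be installed):

lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'

“Thread(s) per core: 2” means SMT/hyper-threading is exposed to the machine; 1 means it is not.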

What’s the output of

sudo lshw -C network -sanitize | grep configuration | grep -v -E "veth|tun|bridge"

(if necessary run sudo apt install lshw)

and (while you are hosting a session with 3 users)

sudo sar -u 2 20

(if necessary sudo apt install sysstat)

No, but I could get them if instructed how to.

I tried to find that out from the DigitalOcean documentation, but it’s not very clear. Or did you mean hyper-threading, as opposed to the software concept of multi-threading? The only relevant result for “threading” in the DO docs is this:

  • A vCPU is a unit of processing power corresponding to a single hyperthread on a processor core. A modern, multicore processor has several vCPUs.

[…] you can choose between shared CPU and dedicated CPU plans for dedicated vCPU.

Dedicated CPU Droplets have guaranteed access to the full hyperthread at all times. With shared CPU Droplets, the hyperthread allocated to the Droplet may be shared between multiple other Droplets. […]
However, the amount of CPU cycles available for the hypervisor to allocate depends on the workload of the other Droplets sharing that host. If these neighboring Droplets have high load, a Droplet could receive fractions of hyperthreads instead of dedicated access to the underlying physical processors.

We used a shared CPU at the time. A Dedicated CPU wouldn’t make sense, because we host meetings rather rarely, so an elastic CPU provider would be far more economical (recommendations welcome).

configuration: driver=virtio-pci latency=0
   configuration: autonegotiation=off broadcast=yes driver=virtio_net driverversion=1.0.0 ip=[REMOVED] link=yes multicast=yes
configuration: driver=virtio-pci latency=0
   configuration: autonegotiation=off broadcast=yes driver=virtio_net driverversion=1.0.0 ip=[REMOVED] link=yes multicast=yes

Will do during our next discussion.

!!! Well, this explains a lot. It means that your CPU could at times be 0.9 CPU, or even 0.7 or 0.5 CPU. So at times when your server has to send a packet, it simply can’t.
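You can even watch it happening: the hypervisor’s cut shows up as “steal time”. A quick check (vmstat is part of procps, installed by default on Ubuntu):

vmstat 2 10

A sustained non-zero value in the “st” column means the neighbours are taking cycles away from your vCPU.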

For a web server it’s not vital; it just means that a page refresh takes longer.

A video server is not a web server: the resource requirements are much higher, because it’s a real-time server. If packets are not received or sent in time, the sound is garbled and the image distorted.

I was asking about the network because even with 2 dedicated CPUs, an advanced network controller is vital for good performance. If you don’t even have ONE dedicated CPU, there is no need to search further. You have absolutely no idea of the necessary resources.

You can resize the droplet, since you host meetings only rarely: upgrade to a more powerful droplet (4 dedicated CPUs / 8 GB RAM) before the meeting and downgrade after the meeting.
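With the DigitalOcean CLI this can even be scripted. A sketch, where 123456 and the size slugs are placeholders (the droplet has to be powered off first, and a CPU/RAM-only resize keeps the disk, so it stays reversible):

doctl compute droplet-action power-off 123456 --wait
doctl compute droplet-action resize 123456 --size c-4 --wait
doctl compute droplet-action power-on 123456 --wait

Then the same dance back down to the small slug after the meeting.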

I don’t know the cost of your instance, but we tried several solutions in the course of the project and here are the results:

Amazon Web Services, 4 vCores, 8 GB RAM:
The server was not responsive enough; we switched because of quality issues and jitter. (A couple $ per month, I don’t know exactly.)

OVH Canada, dedicated server, 8 cores with hyper-threading (so 16 threads total), 32 GB RAM:
Ran like a champ; the only issue was that this instance did not have enough public bandwidth and could not be upgraded ($76 CAD per month).

We wanted to switch to another service within OVH, and time was running out before the big event. There was a bug and they could not provision the new server for us. It was the middle of the night, and the truth is that they answer calls at night, but they don’t have the staff to really help until morning… We had to act fast because the event was only a few hours away.

AWS, cloud computing with guaranteed resources (can’t remember the exact name of the service),
4 cores with hyper-threading (virtualized?), 8 GB RAM:
Runs well; the scaling is not as smooth as with the dedicated instance (meaning that when cores reach a threshold, AWS scales them to fit our load, which causes issues with the video for a few seconds). The pricing was around $56 CAD per month.
Of course, the major selling point for AWS is the easy integration of multiple instances on a Vrack with containers and everything… But still, we are looking around for more local solutions because… Amazon, you know! :stuck_out_tongue:

Edit: I can’t tell for sure, but I do believe that SMT/hyper-threading helps a lot with this kind of use case, as it optimizes the data processing.

That’s one hacky workaround for elastic computing. In that vein, we could destroy the droplet and restore it from a snapshot a few minutes before the meeting - even cheaper, though the IP probably won’t be preserved, so the DNS records would be off, which could be worked around with a static IP etc. The problem with resizes is that the desired size may not be available in the region of the droplet. For example, c-2 seems to be available only in sfo2, not in sfo1.
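Sketched with doctl (every ID, name and address here is a placeholder; keeping a floating IP on the account would sidestep the DNS problem):

doctl compute droplet-action snapshot 123456 --snapshot-name meet-cold --wait
doctl compute droplet delete 123456 --force

and then, shortly before the next meeting:

doctl compute snapshot list
doctl compute droplet create meet --image 987654321 --size c-2 --region sfo2 --wait
doctl compute floating-ip-action assign 203.0.113.10 111222333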

You’re right - the first thing I did when I started evaluating Jitsi Meet was to try to figure out the system requirements. None were listed in the setup guide.

The first thing I did when I joined the forum was to ask about the system requirements and mention that glaring omission from the setup guide:

No clear answer either. CPU requirements? Crickets.

I’d say that the Xeon quad-core with 1 Gbit/s of bandwidth in this example is qualified as ‘moderate’, yet it looks like an absolute monster of a system compared to your config. It was cited at $100 per month, while your config costs 10 times less, I guess. Possibly your quality requirements were higher than the paltry 500 kbit/s of bandwidth per user that was set in this test.

Our quality requirements are nothing out of the ordinary - 720p would be fine. We get only 180p, and the bandwidth seems irrelevant - please see this post, which reports the same problem over gigabit connections and monster configs:

720p means 2.5 Mbit/s - five times more.

It does not seem like the same problem at all. Your problem: dropped connections, freezes, low quality (overload, I think). Their problem: low quality (a config problem, I think).

I agree… That other user had config issues.
You have performance issues.

I have not found specific requirements anywhere for a given number of streams, so I will ask.

I want to host:
160 rooms with 6 users each (~1000 users)
a bitrate that can go down to 600 kbps
communications through TCP

Maybe a configuration similar to @Normand_Nadon’s?
I am thinking of:

SCENARIO 1
1 big server -> 16 cores, 32 GB RAM, 600/600 Mbps Internet, and 1 Gbit local connection
6 videobridges -> 4 cores, 16 GB RAM, 1 Gbit local connections, and 600/600 Mbps Internet

SCENARIO 2 (HA)
2 big servers -> Jitsi Meet and videobridge on both, 16 cores, 32 GB RAM, 600/600 Mbps Internet, 1 Gbit local
2 small servers -> Jitsi Videobridge, 8 cores, 16 GB RAM, 1 Gbit local

Maybe the 1 Gbit of local bandwidth is a problem?

Local bandwidth is not critical, as it is only used for communication between the “modules” of Jitsi.
Public bandwidth is what will be needed for this many users.
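As a rough worst-case envelope, assuming ~600 kbps per video stream and no savings from lastN or simulcast: in a 6-user room each user uploads 1 stream and downloads 5, so the bridges relay 6 * 5 streams per room.

echo "$(( 160 * 6 * 5 * 600 / 1000 )) Mbps egress"   # 2880 Mbps
echo "$(( 160 * 6 * 600 / 1000 )) Mbps ingress"      # 576 Mbps

Nearly 3 Gbps of worst-case egress is far more than a couple of 600 Mbps uplinks, which is why the limits I mention further down (channelLastN) matter.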

I understand, thank you very much.

Probably in the end we will choose option 2:
SCENARIO 2 (HA)
2 big servers -> Jitsi Meet and videobridge on both, 16 cores, 32 GB RAM, 600/600 Mbps Internet, 1 Gbit local
2 small servers -> Jitsi Videobridge, 8 cores, 16 GB RAM, 1 Gbit local

Do you think there may be problems with 160 rooms of 6 users each (~1000 users)?
Maybe performance issues?
maybe performance issues?

You have a lot of experience with this.

regards

The public bandwidth is going to be the limiting factor here… You might want to activate the channelLastN option to limit the number of people whose camera is being forwarded.
There is also the enableLayerSuspension option, which might help, but I haven’t tested it yet.
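Both live in the web client’s config file. A sketch, assuming a default Debian install (the domain in the path is a placeholder and 4 is just an example value):

sudo editor /etc/jitsi/meet/meet.example.com-config.js
# then, inside the file, set:
#   channelLastN: 4,              (forward video of only the last 4 active speakers)
#   enableLayerSuspension: true,

Clients pick the change up on the next page reload; no service restart is needed.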

Mm, I see… Thanks a lot, Normand!