Hardware recommendation for local jitsi-meet instance (NUC, for example)?

Hello everyone!

Can someone give me a recommendation for a local server (such as a NUC) that hosts jitsi-meet for about 10-20 participants, 1080p?
I’m looking for a setup which has enough power to spare, so that if any issues arise I can be sure that it’s not due to hardware limitations but a configuration / bandwidth / client issue.

I’m only finding recommendations regarding hosting providers such as AWS and their offerings, but maybe I keep using the wrong search terms; if so, I’d be really grateful if someone could point me to the right threads!

4CPU/8GB RAM will handle that load like a champ. Your biggest concern would be bandwidth.
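As a rough back-of-envelope on the bandwidth point (the per-participant bitrate below is just an assumption on my part, adjust it for your actual codec and simulcast settings):

```js
// Rough SFU bandwidth estimate - the bitrate figure is an assumption, not a measurement.
const participants = 20;
const sendBitrateMbps = 2.5; // assumed upload per participant for a 1080p simulcast stream

// Each participant's stream is forwarded to every other participant (worst case).
const bridgeIngressMbps = participants * sendBitrateMbps;
const bridgeEgressMbps = participants * (participants - 1) * sendBitrateMbps;

console.log(`Ingress: ~${bridgeIngressMbps} Mbps, worst-case egress: ~${bridgeEgressMbps} Mbps`);
// In practice simulcast and lastN keep egress well below this worst case,
// since most receivers only get low-resolution thumbnails.
```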

Thank you for your answer!

Is there anything I should bear in mind regarding the CPU generation? (other than that it shouldn’t be ARM :smile:)
In other words, would even a Coffee Lake i3 (which I think is the first i3 generation with 4 cores) suffice with headroom, possibly even the low-power variants with a 3.1 GHz base clock?

Jibri will also be used for meeting recording, which iirc feeds the stream into ffmpeg, but I don’t know how taxing the encoding is in this case.

Are you planning on running Jitsi and Jibri on the same server? If so, you’ll need a more powerful server for the default configuration. If you’re just running Jitsi on that one server, you should be fine with the load you’ve stated, CPU generation notwithstanding.

Oh, I was under the misconception that Jibri runs as a service of Jitsi - I just had a look at the repository and saw that it actually works more like a hidden client and is intended to run on a separate machine. Currently the recording is simply done manually.
My original train of thought was that (in case of non-P2P meetings) the streams pass through the Jitsi instance anyhow, so they’d simply be captured there.
As it stands now, however, it seems like a different machine or a VM on one of our hosts would be more sensible for that.

I’ll see what I get in terms of a NUC and give a short report back once it has seen some load, in case anyone else stumbling over this post is interested in the specs and how it’s doing.

Thanks again for your support, Freddie!

You can enable it, but I wouldn’t recommend it. Your users’ computers will melt.
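For reference, the place to change it is the video constraints in config.js; the snippet below is just a sketch, so double-check the exact keys against the commented config.js that ships with your version:

```js
// In /etc/jitsi/meet/<your-domain>-config.js (sketch - verify against the stock config.js comments)
var config = {
    // ...
    resolution: 1080,
    constraints: {
        video: {
            height: {
                ideal: 1080,
                max: 1080,
                min: 240
            }
        }
    },
    // ...
};
```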

The 1080p was more about making sure the server could handle it without problems - I wasn’t sure whether a higher resolution would also place significantly higher demands on the hardware given the SFU nature. I guess it would mostly affect the required RAM.

The target in our setup currently is 720p, but I also ran some tests with 1080p which seemed to run fine as well; the only thing I noticed is that the mobile devices ran significantly hotter, but I’m not even sure what resolution they negotiated.

What I’d still like to explore is lastN, since I wager that receiving and displaying 1080p is more demanding for the clients than sending the 1080p stream, but I’ll take a deeper look into that and adaptivity in general before settling on a setup or asking questions I can’t answer myself.
In any case, thanks for the heads-up; from what I’d seen myself before, I wouldn’t have given this that much attention right away!

It’s more bandwidth to use, but since we don’t decode the payload it’s not significantly more expensive.

Hmm, that’s somewhat odd, since they request 360p. You can long-press on a tile and tap Connection Info; after a few seconds you should see what resolution and fps you are receiving for that user.

lastN is adaptive based on available bandwidth, so in principle you shouldn’t need to do anything…
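If you ever do want to cap it manually, the knob is channelLastN in config.js (again just a sketch; -1 means no limit):

```js
// In config.js - cap how many video streams each client receives (sketch).
var config = {
    // ...
    channelLastN: 4, // forward video of at most the 4 most relevant participants; -1 = unlimited
    // ...
};
```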

Good to have that spelled out! I figured that this would probably be the case but wanted to make sure in case we want to take that route later on.

I could have sworn I had at least 480p within the app once. At some point during setup I just varied the settings and checked whether the meetings were still functional, so at 720p and 1080p I didn’t look at the connection information anymore.

I was thinking more in terms of client computation cost, so that (if we try stepping up to 1080p again) forcing only one visible stream would reduce the CPU burden for the clients, in case a client theoretically has enough bandwidth that it would otherwise try to display multiple or all streams.

I guess, though, that the stream resolution is also chosen with the monitor resolution in mind, so that if e.g. four 1080p streams are available but the monitor is only 1080p, the client will not request 1080p for the streams and scale them down, but will instead request a sensible resolution from the start?
That’s something I wanted to look into before, but I didn’t have the time to do so yet.

That’s correct.

Great to hear, thanks for the information!