Jitsi pegging out 1 CPU core

Hello All,

Is there a configuration inside Jitsi that's preventing it from using all 16 cores on my server?

TIA

Nope, sorry.

TYVM Sir.

Is the Jitsi server's room limit a “hard” number set at 75 room connections, no matter how much horsepower the server has?
Also, on terminology: does a “room” mean the same thing as a visit session?

No. There used to be a limit of 75 participants per meeting on the public instance at meet.jit.si, but that has been raised to 100 officially, and in practice you can now run a meeting with several hundred participants. There is no hard limit if you host your own server; the limitations you encounter will be tied to the resources you make available. Note that available bandwidth is the most crucial resource.

‘Rooms’ = Conferences = Meetings
Every unique meeting is a conference hosted in a room.

On your deployment you can configure mod_muc_max_occupants.lua (from the jitsi/jitsi-meet repo on GitHub), but this limits the number of participants in a room.
There is no such module for the number of rooms, though.
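For what it's worth, here is a rough sketch of how that module is usually wired into the Prosody config. The domain, the focus JID and the limit of 50 below are made-up placeholders, and the option names are taken from the module source, so double-check them against the version you deploy:

-- in /etc/prosody/conf.avail/meet.example.com.cfg.lua (placeholder domain),
-- under the conference MUC component:
Component "conference.meet.example.com" "muc"
    modules_enabled = {
        "muc_max_occupants";
        -- ...plus whatever modules you already have enabled here
    }
    -- cap each room at 50 occupants (placeholder value)
    muc_max_occupants = 50
    -- exempt service accounts such as Jicofo so they are not counted or blocked
    muc_access_whitelist = { "focus@auth.meet.example.com" }

Restart Prosody afterwards for the change to take effect.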

And with regards to the number of meetings or rooms, is there a hard cap on that?

Thank you all for the replies!!

No. It all depends on what your server can support.
Note that Prosody is single-threaded and usually the first component to buckle under heavy load.
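If you want to confirm that on your own box, a quick generic check (nothing Jitsi-specific, and process names vary by install):

# List the top CPU consumers; on a typical Debian-style install Prosody shows
# up as a lua process and JVB/Jicofo as java processes. One of them sitting
# near 100% CPU while the other cores idle is the single-threaded bottleneck
# described above.
ps -eo pcpu,comm,args --sort=-pcpu | head -n 10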

Freddie, TY for the help.

With regards to RAM, is there any way to “tell” Jitsi the server has 24GB of RAM? It seems like the JVB module maxes out at 3GB. It’s like it’s not taking advantage of the full amount of RAM the server has.

It doesn’t need much RAM. And yes, JVB and Jicofo are each set to use 3GB of RAM by default. You can change that, but you don’t need to; Jitsi simply doesn’t need much RAM.
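For reference, in case you ever do want to raise it: on a Debian-package install the heap sizes are normally set through environment files. The file paths and variable names below are from memory, so verify them on your own server before editing:

# /etc/jitsi/videobridge/config
VIDEOBRIDGE_MAX_MEMORY=3072m    # JVB heap; raise to e.g. 8192m if you insist

# /etc/jitsi/jicofo/config
JICOFO_MAX_MEMORY=3072m         # Jicofo heap

Then restart the jitsi-videobridge2 and jicofo services to apply it.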

Freddie,

  Once my server hits anywhere from 88 to 95 rooms, the monitor computers get a message stating "unfortunately something went wrong. We're trying to fix this. Reconnecting in XX seconds....". When we check the application logs, we see tons of these messages:

"No response received within reply timeout. Timeout was 15000ms (~15s). Waited for response using: AndFilter: (StanzaTypeFilter: Presence, OrFilter: (AndFilter: (FromMatchesFilter (ignoreResourcepart): 02294321@conference.meet.tdcj.texas.gov, MUCUserStatusCodeFilter: status=110), AndFilter: (FromMatchesFilter (full): 02294321@MY MEETINGURL@Mine.COM

Is this a network or a configuration issue, please?

Seems like Prosody crashed.

This has been happening for months though. The server gets rebooted weekly.

That has no bearing; if it’s overloaded, it crashes. As mentioned earlier, Prosody is a single-core process. Search the forum for some tips on how to tweak it to support more load.

You may try the following:

# templates from the jitsi-contrib installers repo
JITSI_TMPL=https://raw.githubusercontent.com/jitsi-contrib/installers/main/templates/jitsi

# drop in a systemd override for the Prosody service
mkdir -p /etc/systemd/system/prosody.service.d
wget -O /etc/systemd/system/prosody.service.d/override.conf \
    $JITSI_TMPL/etc/systemd/system/prosody.service.d/override.conf

# fetch the template's Prosody network tuning settings and enable them
wget -O /etc/prosody/conf.avail/network.cfg.lua \
    $JITSI_TMPL/etc/prosody/conf.avail/network.cfg.lua
ln -s ../conf.avail/network.cfg.lua /etc/prosody/conf.d/

# bump the per-connection rate limit in prosody.cfg.lua to 1024kb/s
sed -i "/rate *=.*kb.s/  s/[0-9]*kb/1024kb/" /etc/prosody/prosody.cfg.lua

# reload systemd and restart Prosody to apply the changes
systemctl daemon-reload
systemctl restart prosody.service

No, but it should be monitored if you have a lot; see here.

If a host session gets hung up/locked on the server, how long does it take for the server to clean it up? Is that configurable, and where, please?

1 minute.

Yes, in Prosody.

**** beware ****
I have NOT tested whether changing this parameter has any unfortunate side effects.
**** end of warning ****

-- delay in seconds
bosh_max_inactivity = 120;

Is this the BOSH timeout? According to the BOSH protocol spec the timeout is 60 seconds. Even though you can change it on the server, that timeout is hardcoded in the library the client uses and cannot be easily changed, which can indeed lead to some weird scenarios.

You got your answer then, if not directly: it’s hardcoded in lib-jitsi-meet and you can’t change it unless you fork the code.

Many thanks to everyone