OVERVIEW
With the introduction of pagination and hence large meetings (currently up to 500 participants), there is now a stronger focus on Prosody and its attendant limitation: Prosody, for those who don’t already know, is single-threaded, meaning that no matter how many cores are available on your signaling node, Prosody can only ever use one of them. This leads to Prosody being overworked when:
- There are a lot of large meetings running concurrently on the shard
- A lot of users try to join the meeting at the same time
The consequences of an overwhelmed Prosody include additional participants being unable to join meetings on the server, Prosody reporting 100% usage of the single core it occupies, and ongoing meetings lagging, with some participants being kicked out.
This tutorial highlights the steps involved in creating a second Prosody instance on the same server so that it can offload some of the work and provide a more stable and reliable environment.
ASSUMPTION
The tutorial is high-level, as there are many sub-steps involved in each outlined step and deployments vary in architecture. It assumes you are a developer who is comfortable tinkering with configurations. Knowledge of Prosody and XMPP is beneficial, and the ability to read logs and debug errors is crucial to success. Do not try this on an active production server until you have tested it and confirmed success: it is possible to ‘break’ your working installation while doing this, so care must be taken. That said, it is relatively easy to revert the changes by simply reversing the steps.
GOAL
The ultimate goal is to split Prosody’s work into two:
- Original instance connected to Jicofo dedicated to client authentication and activities
- Second instance connected to Jicofo, listening in the brewery rooms to select a JVB, Jigasi or Jibri when needed (only JVB is possible for now)
ARCHITECTURE
[Architecture diagram: client connections on the original prosody instance; Jicofo and JVB brewery connections on prosody-jvb]
STEPS
1. Clone the main Prosody directory (/etc/prosody/) and rename the clone (e.g. prosody-jvb)
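For example, assuming a standard Debian-style layout:
# copy the existing Prosody config tree into a new directory
sudo cp -r /etc/prosody /etc/prosody-jvb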
2. In the new prosody-jvb, create a new Virtual Host with authentication for users ‘jvb’ and ‘focus’ (see the sketch after step 3)
3. Configure the necessary component that users ‘jvb’ and ‘focus’ will connect to
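A minimal sketch of steps 2 and 3 in a prosody-jvb config file; the ‘newdomain’ names and the file name are placeholders chosen to match the jicofo.conf example further down:
-- /etc/prosody-jvb/conf.avail/jvb.cfg.lua (illustrative)
VirtualHost "auth.newdomain"
    authentication = "internal_hashed"

-- MUC component hosting the brewery room that Jicofo and the JVBs join
Component "muc.newdomain" "muc"
    muc_room_locking = false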
4. Add a symlink for the new configuration (conf.avail -> conf.d)
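For example (config file name assumed from the sketch above):
cd /etc/prosody-jvb/conf.d
sudo ln -s ../conf.avail/jvb.cfg.lua jvb.cfg.lua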
5. Change the connection port for this prosody-jvb instance to another port number, e.g. 15222/tcp (the original prosody instance connects through 5222/tcp)
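A sketch of the relevant lines; if the original instance also uses the default component and HTTP ports, move those as well so the two instances don’t clash (the alternative port numbers here are assumptions):
-- /etc/prosody-jvb/prosody.cfg.lua (illustrative)
c2s_ports = { 15222 }
component_ports = { 15347 }
http_ports = { 15280 }
https_ports = { 15281 }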
6. Change the directory for the log files and the pidfile
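For example, matching the RuntimeDirectory and LogsDirectory set in the systemd unit shown later:
-- /etc/prosody-jvb/prosody.cfg.lua (illustrative)
pidfile = "/run/prosody-jvb/prosody.pid"
log = {
    info = "/var/log/prosody-jvb/prosody.log";
    error = "/var/log/prosody-jvb/prosody.err";
}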
7. Add user jvb to the new domain (host) you just created
8. Add user focus to the new domain (host) you just created
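A sketch of steps 7 and 8 with prosodyctl, pointed at the new config; the passwords you choose here are the ones referenced later in jicofo.conf and jvb.conf:
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua register jvb auth.newdomain JVB_PASSWORD
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua register focus auth.newdomain FOCUS_PASSWORD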
9. Generate the necessary certificates for the new host(s) created under prosody-jvb
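For example:
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua cert generate auth.newdomain
# repeat for any other new host, e.g. muc.newdomain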
10. Add the certificate for the authenticated domain to the trusted certificates on the server
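On Debian/Ubuntu, something like the following (the certificate path is an assumption; prosodyctl prints where it stored the generated files):
sudo cp /var/lib/prosody/auth.newdomain.crt /usr/local/share/ca-certificates/auth.newdomain.crt
sudo update-ca-certificates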
11. In jicofo.conf, add a second xmpp block to handle the connection to JVB and specify the brewery-jid through which Jicofo will establish control of the bridge:
xmpp {
  service {
    enabled = true
    hostname = "127.0.0.1"
    port = 15222
    domain = "auth.newdomain"
    username = "focus"
    password = "password_registered_for_focus_user_in_step8"
  }
}
bridge {
  brewery-jid = "JvbBrewery@muc.newdomain"
}
12. In jvb.conf, add an API block through which the jvb user can be controlled:
apis {
  xmpp-client {
    configs {
      custom-shard {
        hostname = "shard-ip"
        port = 15222
        domain = "auth.newdomain"
        username = "jvb"
        password = "password_registered_for_user_jvb_in_step7"
        muc_jids = "JvbBrewery@muc.newdomain"
        muc_nickname = "unique-jvb-id"
        iq_handler_mode = sync
        disable_certificate_verification = true
      }
    }
  }
}
13. Make a copy of the original prosody’s systemd unit file, pass the new config to it, and adapt it for the new prosody-jvb process:
[Unit]
Description=Prosody JVB XMPP Server
Documentation=https://prosody.im/doc
Requires=network-online.target
After=network-online.target network.target mariadb.service mysql.service postgresql.service
Before=biboumi.service
[Service]
# With this configuration, systemd takes care of daemonization
# so Prosody should be configured with daemonize = false
Type=simple
# Start by executing the main executable
# Note: -F option requires Prosody 0.11.5 or later
ExecStart=/usr/bin/prosody --config /etc/prosody-jvb/prosody.cfg.lua -F
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-abnormal
User=prosody
Group=prosody
UMask=0027
RuntimeDirectory=prosody-jvb
ConfigurationDirectory=prosody-jvb
StateDirectory=prosody-jvb
StateDirectoryMode=0750
LogsDirectory=prosody-jvb
WorkingDirectory=~
# Set stdin to /dev/null since Prosody does not need it
StandardInput=null
# Direct stdout/-err to journald for use with log = "*stdout"
StandardOutput=journal
StandardError=inherit
# Allow binding low ports
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
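A sketch of installing the new unit, assuming the stock unit location of the Debian package:
sudo cp /lib/systemd/system/prosody.service /etc/systemd/system/prosody-jvb.service
# edit /etc/systemd/system/prosody-jvb.service as shown above, then:
sudo systemctl daemon-reload
sudo systemctl enable prosody-jvb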
14. Restart all services
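For example (service names assumed from a standard Debian-based Jitsi install):
sudo systemctl restart prosody prosody-jvb jicofo jitsi-videobridge2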
You should see two prosody processes running on the server: the original prosody process and the new prosody-jvb process you just created.
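A quick way to verify:
ps aux | grep [p]rosody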
TEST
A simple test run on a minimalistic dev server hosting Jitsi, JVB, Jicofo, Jibri, Jigasi, Prosody and Prosody-JVB shows the following:
Server Specs:
- Baremetal
- CPU: 2 cores
- RAM: 4 GB
- OS: Ubuntu 20.04 LTS
Scenario
Hosted a 10-party conference with all participants joining at the same time, and set up metrics to monitor the two prosody processes.
Observations
Captured metrics confirm two different prosody processes running independently of each other.
[Graph: CPU usage of process prosody]
[Graph: CPU usage of process prosody-jvb]
The graph patterns show the impact of client and server activities on each prosody process at different points during the conference lifecycle.
Analysis
All 10 clients joined at the same time, resulting in a spike in the process prosody handling client activities. Process prosody-jvb also registered a spike during client join activities, but to a much lesser degree. Once the meeting was in progress, however, process prosody-jvb appeared to take on more work, reporting higher CPU usage than its counterpart.
In the captures above, at 21:40, when the meeting was already in session, process prosody was registering just half the CPU usage of process prosody-jvb.
Conclusion
The findings show that we can effectively split the load on Prosody in two, so that it functions more like a multithreaded service. The potential impact of this is HUGE, particularly for large conferences. With two prosody instances running and handling all xmpp connections concurrently, deployments can tolerate a larger number of participants without overload and are a bit more resilient to the spikes associated with client activities.
ACKNOWLEDGMENT
Thanks to the core Jitsi Dev Team for sharing this solution.