TUTORIAL - How to 'Multithread' Prosody


With the introduction of pagination and hence large meetings (currently up to 500 participants), there is now a stronger focus on Prosody and its attendant limitations. Prosody, for those who don't already know, is single-threaded; no matter how many cores are available on your signaling node, Prosody can only use one at a time. This leads to Prosody being overworked when:

  1. There are a lot of large meetings running concurrently on the shard
  2. A lot of users try to join the meeting at the same time

The consequences of an overwhelmed Prosody include additional participants being unable to join meetings on the server, Prosody reporting 100% usage of the single core it occupies, and ongoing meetings experiencing lag, with some participants being kicked out.

This tutorial highlights the steps involved in creating a second Prosody instance on the same server so it can offload some of the work and provide a more stable, more reliable environment.


The tutorial is high-level, as there are many sub-steps involved in each outlined step and deployments vary in architecture. It assumes you are a developer who is comfortable tinkering with configurations. Knowledge of Prosody and XMPP is beneficial, and the ability to read logs and debug errors is crucial to success. Do not try this on an active production server until you have tested it and confirmed success. It is possible to break your working installation while doing this, so care must be taken. That said, it is relatively easy to revert whatever changes are made by simply reversing the steps.


The ultimate goal is to split Prosody’s work into two:

  1. Original instance connected to Jicofo dedicated to client authentication and activities
  2. Second instance connected to Jicofo, listening in the Brewery rooms to select a JVB, Jigasi or Jibri when needed (only JVB is possible for now)



  1. Clone the main Prosody directory (/etc/prosody/) and rename the clone (e.g. prosody-jvb)
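For example, on a standard Debian/Ubuntu install (the prosody-jvb name is just a convention; any name works as long as it is used consistently throughout):

```shell
# Clone the existing Prosody configuration tree for the second instance
sudo cp -r /etc/prosody /etc/prosody-jvb
```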

  2. In the new prosody-jvb, create a new Virtual Host with authentication for users ‘jvb’ and ‘focus’

  3. Configure the necessary component that users ‘jvb’ and ‘focus’ will connect to
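A minimal sketch of what steps 2 and 3 might look like in the new instance's config (the domain names "auth.newdomain" and "muc.newdomain" are placeholders, and must match what jicofo.conf and jvb.conf use later):

```lua
-- /etc/prosody-jvb/conf.avail/jvb.cfg.lua (filename is an assumption)

-- Virtual host that the jvb and focus users authenticate against
VirtualHost "auth.newdomain"
    authentication = "internal_hashed"

-- MUC component hosting the JvbBrewery room that jvb and focus join
Component "muc.newdomain" "muc"
```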

  4. Add a symlink for the new configuration (conf.avail —> conf.d)
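For example (the filename assumes the config created in steps 2-3):

```shell
# Activate the new configuration by symlinking conf.avail into conf.d
sudo ln -s /etc/prosody-jvb/conf.avail/jvb.cfg.lua /etc/prosody-jvb/conf.d/jvb.cfg.lua
```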

  5. Change the connection port for this prosody-jvb instance to another port number e.g. port 15222/tcp (original prosody instance connects through 5222/tcp)
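In /etc/prosody-jvb/prosody.cfg.lua this could look like the following (15222 matching the port used later in jicofo.conf and jvb.conf):

```lua
-- Move client connections off 5222 so the two instances do not collide
c2s_ports = { 15222 }
```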

  6. Change the directory for the log files and the pidfile
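A sketch, again in /etc/prosody-jvb/prosody.cfg.lua (paths are assumptions; the directories must exist and be writable by the prosody user):

```lua
-- Give the second instance its own runtime files so the two
-- instances do not overwrite each other's pidfile and logs
pidfile = "/run/prosody-jvb/prosody.pid"
log = {
    info = "/var/log/prosody-jvb/prosody.log";
    error = "/var/log/prosody-jvb/prosody.err";
}
```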

  7. Add user jvb to the new domain (host) you just created

  8. Add user focus to the new domain (host) you just created
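Steps 7 and 8 can be done with prosodyctl. Note the --config flag: prosodyctl must act on the prosody-jvb instance, not the original one. The passwords here are placeholders:

```shell
# Register the jvb and focus accounts on the new instance
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua register jvb auth.newdomain "jvb-password"
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua register focus auth.newdomain "focus-password"
```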

  9. Generate the necessary certificates for the new host(s) created under prosody-jvb
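For example, using prosodyctl's built-in certificate generation:

```shell
# Generate a self-signed certificate for the new authentication host
sudo prosodyctl --config /etc/prosody-jvb/prosody.cfg.lua cert generate auth.newdomain
```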

  10. Add the certificate for the authenticated domain to the trusted certificates on the server
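On Debian/Ubuntu, one way to do this is via the system CA store (the source path assumes the certificate generated in step 9; adapt it to wherever prosody-jvb keeps its certs):

```shell
# Copy the self-signed cert into the system trust store and refresh it
sudo cp /etc/prosody-jvb/certs/auth.newdomain.crt /usr/local/share/ca-certificates/auth.newdomain.crt
sudo update-ca-certificates
```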

  11. In jicofo.conf, add a second xmpp block to handle the connection to JVB and specify the brewery-jid through which Jicofo will establish control of the bridge:

xmpp {
	service {
		enabled = true
		hostname = ""
		port = 15222
		domain = "auth.newdomain"
		username = "focus"
		password = "password_registered_for_focus_user_in_step8"
	}
}

bridge {
	brewery-jid = "JvbBrewery@muc.newdomain"
}
  12. In jvb.conf, add an xmpp-client API block through which the jvb user connects to the new instance:
apis {
	xmpp-client {
		configs {
			custom-shard {
				hostname = "shard-ip"
				port = "15222"
				domain = "auth.newdomain"
				username = "jvb"
				password = "password_registered_for_user_jvb_in_step7"
				muc_jids = "JvbBrewery@muc.newdomain"
				muc_nickname = "unique-jvb-id"
				iq_handler_mode = sync
				disable_certificate_verification = true
			}
		}
	}
}
  13. Make a copy of the original prosody's systemd unit file, point it at the new config, and adapt it for the new prosody-jvb process:
[Unit]
Description=Prosody JVB XMPP Server

After=network-online.target network.target mariadb.service mysql.service postgresql.service

[Service]
# With this configuration, systemd takes care of daemonization
# so Prosody should be configured with daemonize = false
Type=simple

# Start by executing the main executable
# Note: -F option requires Prosody 0.11.5 or later
ExecStart=/usr/bin/prosody --config /etc/prosody-jvb/prosody.cfg.lua -F
ExecReload=/bin/kill -HUP $MAINPID

# Set stdin to /dev/null since Prosody does not need it
StandardInput=null

# Direct stdout/-err to journald for use with log = "*stdout"
StandardOutput=journal
StandardError=inherit

# Allow binding low ports
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
  14. Restart all services
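For example (service names assume a standard Debian/Ubuntu Jitsi install; prosody-jvb is the unit created in step 13):

```shell
sudo systemctl daemon-reload
sudo systemctl restart prosody prosody-jvb jicofo jitsi-videobridge2

# Both instances should now be visible as separate lua processes
pgrep -a lua5.2
```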

You should see two prosody processes running on the server - the original prosody process and the new prosody-jvb process you just created.


A simple test run on a minimalistic dev server hosting Jitsi, JVB, Jicofo, Jibri, Jigasi, Prosody and Prosody-JVB shows the following:

Server Specs:
CPU - 2 cores
OS - Ubuntu 20.04 LTS

Hosted a 10-party conference with all participants joining at the same time. Set metrics to monitor the two prosody processes.

Captured metrics confirm two different prosody processes running independently of each other.

Process prosody

Process prosody-jvb

The graph patterns show the impact of client and server activities on each prosody process at different points during the conference lifecycle.

All 10 clients joined at the same time, resulting in a spike in the process prosody handling client activities. Process prosody-jvb also registered a spike during client join activities, but to a much lesser degree. Once the meeting was in progress however, process prosody-jvb appeared to take on more work reporting a higher CPU usage than its counterpart.

In the captures above, at 21:40 when the meeting was already in session, process prosody was registering just half of the CPU usage compared to process prosody-jvb.

The findings show that we have effectively split Prosody's load in two, such that it functions like a multithreaded service. The potential impact of this is HUGE, particularly for large conferences. With two prosody instances running and handling all xmpp connections concurrently, deployments can tolerate a larger number of participants without overload and can also be a bit more resilient in handling spikes associated with client activities.


Thanks to the core Jitsi Dev Team for sharing this solution.


Thanks @Freddie, you are always the first to share knowledge.
I was also following the other thread and I was pretty sure you would come out with a documented solution.
Thank you again for your time; I know it takes a lot of time and effort to document this clearly.


Perfect guide
Thank you @Freddie :+1:

I have a question:
What makes them use different cores? When CPU is added as an htop column, are they always on different cores?


Thanks @emrah!

I haven’t tested long enough to see if they always use different cores, but from my tests so far, htop shows they do.

prosody process

    935   0 lua5.2

prosody-jvb process:

   1914   1 lua5.2

That said, they can even end up on the same core; it all depends on how the OS schedules them based on its priority determination, and their core assignment can change with load, as long as a process is not explicitly pinned to a core. Because they are separate single-threaded processes, each can potentially max out a core on its own. So if, for instance, one gets worked to the point that it consumes 100% of a core, the other can still be running on another core. This is in contrast to a lone Prosody instance, where once that one thread maxes out a core, that's the ceiling of its operation.


It seems possible to select a specific core using CPUAffinity in the systemd unit file.
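For example, adding the following to each unit file would pin the instances to separate cores (the core numbers are arbitrary; whether pinning helps depends on your workload):

```ini
# In prosody.service:
[Service]
CPUAffinity=0

# In prosody-jvb.service:
[Service]
CPUAffinity=1
```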


Yes, you can bind a process to a specific core if you prefer. But I think the OS does a good job of selecting cores, and it dynamically changes the core based on activity. So far, all my tests have shown them to be on different cores.
