Dynamically add video bridges to a load-balanced config without a restart

We have a load-balanced Jitsi configuration using a “master” server with all Jitsi components and a “slave” server with just a video bridge. Obviously, we can manually add additional video bridges with the same configuration - just add another component entry to Prosody, etc.

How can we truly scale this by dynamically adding video bridges if we have to manually configure (and restart) Prosody every time?

I have a fully-automated configuration for spinning up a new video bridge instance, but if I restart Prosody, even if I automate the configuration file and restart of the service, I’m likely to drop existing calls.

It doesn’t seem like it’s truly scalable.


There is a better way than using component connections for the purposes of scalability. We call it using MUCs, or “breweries”.

Basically the JVBs open client connections towards the XMPP server and join a MUC. Jicofo then selects the right JVB from the pool.

This way no configuration needs to be changed.
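For anyone looking for a concrete starting point: the brewery is just an internal MUC component on the XMPP server that the bridges join. A minimal Prosody sketch, where `internal.auth.example.com` and the account names are placeholders (not from this thread), roughly matching what the standard jitsi-meet Prosody template does:

```lua
-- Internal MUC component acting as the JVB "brewery".
-- Domain and account names below are placeholders; adjust to your deployment.
Component "internal.auth.example.com" "muc"
    storage = "memory"
    modules_enabled = { "ping"; }
    -- Accounts that may administer the brewery room (jvb bridges, jicofo focus)
    admins = { "jvb@auth.example.com", "focus@auth.example.com" }
    muc_room_locking = false
    muc_room_default_public_jids = true
```

Each JVB then logs in as the `jvb` client account and joins a room on that component (e.g. `JvbBrewery@internal.auth.example.com`), and Jicofo watches the room's occupants to discover available bridges.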

I’m afraid we don’t have much documentation on this, but our Docker setup uses it, here is the commit that changed it from component connections to a MUC:

This is nifty, but I don’t really understand how I would apply this to my configuration. I assume I would need to modify the settings on the master (full jitsi stack) server and then each slave (JVB only) I spin up would need to just be configured to point at the master?

I basically understand what your config is doing here, but it’s unclear how to modify the current config to support this.

Can I ask why there’s little to no documentation on load balancing and scaling and this commit is from 2018?

The closest thing I’ve found as a solution is in this video, which we already thought of but which is hardly ideal: it involves having pre-configured instances with JVB already set up, the master already configured for those servers, and spinning them up on demand.

That would work, but it doesn’t really “scale” as you would need to have already created the instances.

Correct. Once your Prosody and Jicofo are configured to use MUCs you can spin up as many JVBs as you want (assuming they are configured to use MUC connections) and no configuration change is necessary.
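To make this concrete, a hedged sketch of the Jicofo side: with the classic sip-communicator.properties scheme, Jicofo only needs to be told which brewery room to watch. The JID below is a placeholder and must match whatever MUC the bridges join:

```properties
# /etc/jitsi/jicofo/sip-communicator.properties
# Placeholder JID; must match the MUC room the bridges join.
org.jitsi.jicofo.BRIDGE_MUC=JvbBrewery@internal.auth.example.com
```

After that, bridges come and go simply by joining or leaving the room; the Prosody and Jicofo configs never change.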

You can, but I don’t have an answer, sorry.

It does scale. That’s how we do it, for example. You can have an auto-scale group of N JVBs and scale them up (by creating new instances of the same image) on a bandwidth trigger.

Okay, we can trigger them to turn on, but that still poses the issue that they have to already exist and the config has to already be in the Prosody config.

I’m not clear how else the main Jitsi server would be aware of the new video bridges then?

To be clear, the exact example I’m referring to, and I assume you are as well, is that you have preconfigured servers which are simply not running until needed. This does not scale beyond the preconfigured servers and isn’t dynamic: we can’t spin up a new server as needed; it has to already exist.

When you say, “creating new instances of the same image”, are you referring to using the MUC config, or that you literally create new instances that have the same name/pass as is in the Prosody config? It doesn’t seem like that would work.

The MUC example seems like it would solve this issue, but as you said, it’s not documented how I might configure that.

Is there no desire to support this kind of scaling for the community?

By checking participants in the brewery room.

Yes, it works, we use it in production.

Why would we not support it? This is the recommended way of using the JVB; if you install the latest unstable packages, this is how everything will be configured.
You can install the latest unstable package and see how it is configured, or use the example configuration that Saul already posted above.
Any contributions are welcome to add whatever is missing.

Yes, but it’s one single thing that needs configuring for all JVBs; you don’t need to reconfigure as you add more.

Because, as I said earlier, they make client connections to it. Yes, you do need to configure the XMPP server and brewery, but Prosody is running anyway, isn’t it? In terms of configuration this is no different than a component connection, except that it’s one thing for all JVBs.

Once Prosody and Jicofo are set up all JVBs look the same; all you need to do is spin up more “clones”.
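As a sketch of what such a “clone” looks like with the legacy sip-communicator.properties scheme (host names and password below are placeholders): every instance ships an identical file except for the nickname, which must be unique per bridge:

```properties
# /etc/jitsi/videobridge/sip-communicator.properties
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=xmpp.example.com
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.example.com
# The only per-instance value; e.g. derive it from the machine's hostname.
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb-instance-1
```

Since only MUC_NICKNAME varies, a machine image can generate it at boot (say, from the instance ID), which is what makes auto-scaling groups of identical images workable.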

I just shared a working config with you, please ask specific questions.

I do not appreciate the tone here, we are trying to help you. Insisting that we don’t know / want to scale up properly when it’s clearly the opposite won’t get you very far.

Yes, it works, we use it in production.

This answer makes no sense - I asked which method you’re using. I think you’re completely missing that there are two methods being discussed here, and it’s unclear which one he was talking about.

There’s a method of manually updating and restarting Prosody with each new server added and/or having the config predefined and spinning up a predefined set of video bridge servers. Separately, there’s the MUC method.

Which one are you using?

And this isn’t documented anywhere I can find. Nothing about his response indicated the unstable version contains this configuration.

We aren’t using Docker, so the example provided, as I’ve stated, is confusing to apply to a non-docker install of Jitsi. If you can point me to where your unstable version configurations are that I can mirror to my own Jitsi server, that would be helpful.

You’re not supporting it because your response is, “this is the recommended way” but it doesn’t seem to be documented anywhere. How/why would I contribute to your documentation if you yourself won’t explain how to use it?

Can you point me to any documentation on configuring the MUC settings? I’ve already linked above to your own video which suggests the manual method and doesn’t mention MUC.

Once Prosody and Jicofo are set up all JVBs look the same; all you need to do is spin up more “clones”.

Are you saying my slave servers (video bridge only) can all declare the same name? For example, I have a JVB named “”. Can I just spin up a second one with the exact same name?

My tone reflects the level of support that I see throughout this community. Answers are often left without resolution, or with responses like, “search the forums” where the other answers on the forums are also unanswered.

IIRC yes.

Why do you think you are entitled to anything here? We work really hard on many fronts and can’t cope with all questions, oh well such is life. Try being grateful for once, it goes a long way.


We are discussing MUC method, of course.

Have you looked at the file at all? These are simple configuration entries you need to put in your config.

Here is how this is configured on install:
These are the same settings that the Docker setup uses.

This video is outdated, as it was made before the MUC config became the default.

I just want to point out that I make thousands of replies a week trying to help people. If you don’t like my responses, sorry, but only in your topics do I find this tone, and in general I try not to jump in … I really tried to help; sorry it doesn’t work for you. I will stop here.

I’m sorry, I was not intending to imply I feel as though I am entitled to anything, simply that the lack of documentation is frustrating.

If I figure this out and feel that my understanding is sufficient to communicate it to others I will certainly put in a pull request to add documentation for this process.

I understand you guys are doing this for free, but often the responses feel like we’re being blown off. We aren’t all Java developers (or Lua, for that matter), so Jitsi is often, as in my case, only a small part of a much bigger system (our app is a JavaScript/Node-based app that only uses part of Jitsi). That means looking through the code or config files isn’t as obvious as it might seem to you.

I appreciate the assistance and I also would think maybe you can see it from a user’s perspective.

Much better, thank you.

I agree that our documentation is bad, there is no denying that. I hope we can make improvements on that front sooner rather than later.

I think this might be a side effect of us knowing things as insiders, so we may fail to provide an adequate answer for the level of understanding of the system the person asking the question has. Apologies for that.

Back to your original question. Even if you don’t use Docker, the link I shared includes the config necessary for each component (Prosody / Jicofo / JVB), so you can start by taking a look at that and ignoring the Docker specifics (which are not that many, really).

In addition, if you use our Debian unstable repos you’ll get a MUCs based setup right out of the box. Perhaps you could do that on a test machine and see how to modify your setup to match. There is not much to it, but if you get stuck feel free to come back here to ask more questions.

Docs for configuring the MUC mode have been missing for a long time. I’ve put together a short doc here, feedback would be very welcome:



That’s a huge help, both of you. Thank you, and again sorry for the negative tone.

I think I almost have this working, but the call disconnects users from each other (each user stays connected, just not to the others) and the call refreshes repeatedly.

I’m not at a computer so I can’t paste in errors, but I know it was giving a Strophe error in the JS console.

I’m not clear on this part; where is this config supposed to go? Normally it seems we’d configure this in /etc/jitsi/videobridge/config, but this looks like a JavaScript object, maybe?

It’s helpful to have additional documentation, but it’s confusing if it’s incomplete.

stats {
  # Enable broadcasting stats/presence in a MUC
  enabled = true
  transports = [
    { type = "muc" }
  ]
}
apis {
  xmpp-client {
    configs {
      # Connect to the first XMPP server
      xmpp-server-1 {
        domain = ""
        username = "jvb"
        password = "$PASSWORD"
        muc_jids = ""
        # The muc_nickname must be unique across all jitsi-videobridge instances
        muc_nickname = "unique-instance-id"
        # disable_certificate_verification = true
      }
      # Connect to a second XMPP server
      xmpp-server-2 {
        domain = ""
        username = "jvb"
        password = "$PASSWORD"
        muc_jids = ""
        # The muc_nickname must be unique across all jitsi-videobridge instances
        muc_nickname = "unique-instance-id2"
        # disable_certificate_verification = true
      }
    }
  }
}

This is for the new configuration scheme described here (though details on setting it up are not there yet):

You can use the instructions in the next section (legacy configuration). I’ll add a note to clarify.


Okay, I got you.

Also for this section:

# XMPP server configuration
The XMPP server needs to be provisioned with a single user account shared by all
jitsi-videobridge instances that connect to it. For Prosody this can be done using:

prosodyctl register jvb $DOMAIN $PASSWORD
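On the persistence question below: `prosodyctl register` creates the account in Prosody's on-disk data store, so it survives service restarts and reboots; it does not need to be re-run. A sketch with placeholder values:

```shell
# Create the shared JVB account once; Prosody stores it on disk
# (with the default internal storage, under /var/lib/prosody/).
prosodyctl register jvb auth.example.com changeme
```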

EDIT: I’m curious if this persists after a reboot? Originally, I had a configuration in /etc/prosody/conf.d/<domain>.cfg.lua which contained MUC-specific configuration I pulled from the link from @saghul.

There’s a suggestion in your pull request docs to use “…a separate XMPP domain, not accessible by anonymous users” - can you give an example? Does this mean a different public DNS-accessible domain, or a different subdomain of your domain, and does this need to exist in DNS? The example linked to is this: which really doesn’t provide any useful information apart from a bash script if-check.