Jitsi Meet (Jicofo + JVB + Prosody) High Availability and Load Balancing

Yes, this is possible; this is how meet.jit.si is running right now in 6 or 7 regions around the world.
Take a look at https://jitsi.org/tutorials/

Thanks very much, I’ve seen them. So, in the end I would like to achieve something like this, which is both load balancing and HA:


Two complete Jitsi servers, each with every component, with the possibility for each Prosody to balance across its own JVB and the other server’s.
My question is: how do I configure Prosody and Jicofo for clustering? I did not find any docs on how to do that, only for the JVB.
And how do I configure the Apache reverse proxy in front of the Jitsi servers for load balancing and/or HA failover?
Thanks in advance

There is no such thing as clustering. A conference must happen on a single shard (a shard is nginx (jitsi-meet) + Prosody + Jicofo and the available JVBs). Every HTTP request carries a parameter room=roomName, which is the key used for load balancing, so every HTTP request for the same room lands on the same shard. This is how you need to configure HAProxy.
You also need a monitor that checks whether each shard is healthy; once a shard is detected as unhealthy, it is removed from the pool of healthy shards in HAProxy.
The user experience is that the participants see a reload screen and in a few seconds land on the new shard with the same room name.
This is how it works today.
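
To illustrate the idea, here is a minimal HAProxy sketch of room-based routing with health checks. This is not the meet.jit.si configuration; the shard addresses, ports and the /about/health endpoint are placeholders/assumptions for illustration only.

    frontend meet_https
        mode http
        bind *:443 ssl crt /etc/haproxy/certs/meet.pem
        default_backend shards

    backend shards
        mode http
        # Hash on the ?room= URL parameter so all requests for the same
        # room name land on the same shard.
        balance url_param room
        hash-type consistent
        # Take a shard out of the pool when its (assumed) health endpoint
        # stops answering.
        option httpchk GET /about/health
        server shard1 192.0.2.10:443 ssl verify none check inter 5s fall 3 rise 2
        server shard2 192.0.2.11:443 ssl verify none check inter 5s fall 3 rise 2

With a consistent hash, adding or removing a shard only remaps a fraction of the rooms instead of reshuffling all of them.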

Thanks for your reply. So each “shard” is aware of the others? How can I configure each of them?

What do you mean by this? Only HAProxy is aware of all the shards, and if you are using Octo, JVB instances from other shards can be connected to this one.

Thank you @damencho. Having multiple shards is really good.
Now, related to HA for Prosody and Jicofo, has anything changed since 2017?
Is there still no way to cluster Jicofo or Prosody to avoid having them as SPOFs inside the shard?

Nope, nothing has changed.

Hi,
For failover between shards, it is not clear to me whether I must use Octo or not.
If it is not necessary, how can I use HAProxy?
My goal is to set up an active-passive topology.

If shard1 is down and shard2 is up, all services must be accessible on shard2, or vice versa.

What should I use and how can I configure it?

@damencho, we want to work on enabling the ability to run multiple Prosodies or Jicofos so we can load balance among them.

We’ve figured out so far that Prosody can use an external database, and that will probably give us the chance to run multiple Prosodies. However, are there any other plugins you have developed that would need to change to support this kind of behavior? In the case of Jicofo, what should we look into to add this functionality?

If you could give us an idea of where to begin, or which pieces currently prevent us from doing this, we could see whether it is something we can tackle ourselves.

Thank you so much for all your help!

As for Jicofo, there is nothing to take care of; it knows only about its own shard and is a standalone component. What you need is HAProxy fronting the multiple shards and sticking sessions based on URL params; this ensures that a conference goes to a single shard. This is explained in one of the videos, and if you search the forum there are some examples of how to configure it.

Thanks for the reply @damencho!

We already have multiple shards, but we want to be able to offer HA inside the shard itself. Our installation is on Kubernetes, which means that some pods (Jicofo or Prosody) could be destroyed at some point, so we are looking at what options we have to work around that.

At the moment we have that configured in such a way that the nodes where those pods are placed are “never” destroyed. But there are a lot of reasons why a node can be destroyed and we are not in control of that, so being able to configure HA for those two pieces is a must for us.

What are the things that make Jicofo standalone? Would it be too hard to change? We want to help the community and see if we can make these changes ourselves, but we are trying to gauge how difficult it would be.

Thanks again for your amazing work, it has been such a nice trip working with Jitsi!

If you destroy Jicofo during a meeting, clients will reload, and by that time HAProxy should have detected that shard as unhealthy, so the clients will land on a new shard when they reload. This is how meet.jit.si works at the moment.


Hello,

Can I have the HAProxy configuration?
What configuration do you use for health checks with HAProxy?

Can anybody share the maximum number of servers we can add as videobridges for load balancing?

Well, it all depends on the server specs; you need to take care not to overload the signalling node. It also depends on the size of the conferences. With the distribution of participants on meet we have seen that Oracle’s E4 instances can handle more than 200 participants per bridge … and signalling nodes such as c5.2xlarge can handle up to 6k participants. So to cover those, you can run that shard with 30 bridges.
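
As a rough sanity check using those figures: 6000 participants on the signalling node ÷ 200 participants per bridge ≈ 30 bridges per shard.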


We currently do ALB → nginx → haproxy → prosody (haproxy doing consistent-hash-based routing to prosody).

My question is:
Is ALB → nginx → prosody a good idea, since nginx seems to provide consistent-hash-based routing similar to haproxy (HTTP Load Balancing | NGINX Plus)?

I’m also wondering whether nginx → prosody has drawbacks; should we go more direct with ALB → haproxy?
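
For what it’s worth, consistent hashing on the room parameter in nginx would look roughly like the sketch below. The upstream name, addresses, certificate paths and the ?room= query argument are assumptions for illustration, not our actual config.

    upstream prosody_shards {
        # Consistently hash on the ?room= query argument so a given room
        # always maps to the same Prosody backend.
        hash $arg_room consistent;
        server 192.0.2.10:5280;
        server 192.0.2.11:5280;
    }

    server {
        listen 443 ssl;
        server_name meet.example.com;
        ssl_certificate /etc/nginx/certs/meet.pem;
        ssl_certificate_key /etc/nginx/certs/meet.key;

        # BOSH
        location /http-bind {
            proxy_pass http://prosody_shards;
            proxy_set_header Host $host;
        }
        # XMPP over websocket
        location /xmpp-websocket {
            proxy_pass http://prosody_shards;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }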

Nginx is not doing anything useful in that chain (assuming you have split out the static frontend to somewhere else). Even haproxy is not really needed since you have the ALB; you can go direct ALB → Prosody if you put the shard name in the path to the websocket.
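
To make that concrete with a hypothetical example: if shard1’s clients open their websocket at a URL like

    wss://meet.example.com/shard1/xmpp-websocket

(host and path are made up here), an ALB path rule forwarding /shard1/* to shard1’s target group and /shard2/* to shard2’s keeps each conference’s traffic on its own shard without HAProxy in the middle.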

Nginx is also handling the frontend, apart from handling Prosody traffic. By adding the shard name, don’t we lose the HA that we are trying to achieve with haproxy?

On a bigger deployment like this it’s a better architecture to split out the frontend. It’s just static files, you can compile it, put it in an S3 bucket and put CloudFront in front of it (for example, or your favourite static host / CDN instead). You also get much better performance on initial load that way. Nginx is just doing some basic SSI to the frontend, nothing complex.

ALB supports HA as well. Set up a health check and it will stop sending traffic to any target that is unhealthy. If you are already familiar with haproxy, by all means keep using it, but you can eliminate nginx. With all else equal, the fewer components you have, the fewer things can go wrong.

We are using haproxy since it supports consistent hashing, as proposed here: Load balancing Jitsi meeting

Our frontend also has a few dynamic endpoints (along with static pages) for authentication and other things; that’s the reason we are using nginx. But thanks for pointing out alternatives for the static pages, that gives us some options to reduce the load on nginx.