Scale Videobridge inside Kubernetes

I have read a lot about scaling the videobridge, but I think it is not possible to scale it inside Kubernetes, for a few reasons. Please prove me wrong - that would help a lot.

  1. Each instance must run on its own node or have its own node port
  2. Each instance must have a unique --subdomain argument for the node port
  3. Before you shut down an instance, you have to wait until nobody is using it anymore
  4. You can’t use a single Deployment because the Service will load-balance across the pods automatically (relates to 1.)

One idea I had while writing this topic is to have one Deployment for each videobridge, each scaled to 0 by default, and one container that scales them to 1 when needed using the Kubernetes API. So maybe a Kubernetes Operator would be a solution.
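To illustrate that idea, here is a minimal sketch of one such per-bridge Deployment kept at replicas: 0 until something (an operator, a controller, or a manual kubectl scale) bumps it to 1. All names, labels, and the image tag are assumptions, not a tested setup:

```yaml
# Hypothetical per-bridge Deployment: kept at 0 replicas until a controller or
# operator patches spec.replicas to 1 via the Kubernetes API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvb-1
  labels:
    app: jvb
    bridge: "1"
spec:
  replicas: 0                 # scaled to 1 on demand, never higher
  selector:
    matchLabels:
      app: jvb
      bridge: "1"
  template:
    metadata:
      labels:
        app: jvb
        bridge: "1"
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:latest
          env:
            - name: JVB_PORT
              value: "10000"  # unique UDP media port per bridge
```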

Are you talking about the jvb argument? It is not needed: using components for scaling has its complexity, as you have noticed, which is why we use MUC control rooms (the “brewery”) instead, and this is the default configuration for the Docker images.

That’s why there are the graceful shutdown scripts, to handle that.
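For reference, one way to wire such a graceful shutdown into Kubernetes is a preStop lifecycle hook plus a long termination grace period, so a pod being removed drains its conferences before exiting. The script path below is an assumption (it is where the Debian package installs it) and should be checked against your image:

```yaml
# Sketch only: run the graceful shutdown script before the pod is killed.
apiVersion: v1
kind: Pod
metadata:
  name: jvb-graceful-example
spec:
  terminationGracePeriodSeconds: 3600   # give active conferences time to drain
  containers:
    - name: jvb
      image: jitsi/jvb:latest
      lifecycle:
        preStop:
          exec:
            command:
              - /bin/bash
              - -c
              # assumed script location (Debian package default); adjust for your image
              - /usr/share/jitsi-videobridge/graceful_shutdown.sh
```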

This single deployment that you mention is what we call a shard. We run meet.jit.si with several shards for high availability, and I believe this can also be achieved with Kubernetes. The load balancing happens between the bridges inside a shard.

hi sapkra,

I am looking into the same topic.

I will start with 4):
Scaling from 0 will not work, as jicofo will not be able to schedule the conference on a JVB. IMO there is no reason to scale from 0.

1+2)
I have multiple JVBs running and each one just gets its own individual port (I am counting down from 10000). This works fine. I am not touching the --subdomain option.

3)
Semi-correct; this is one reason why I built the jitsi-prom-exporter (see also the forum post). My plan is to pump these metrics back into the Kubernetes metrics pipeline for scaling the JVBs back down again once they are empty.

Semi-correct because jicofo is able to reschedule the conferences from broken/exiting bridges to healthy/running ones, which works out of the box in my case, and in most cases participants don’t even notice it. As damencho said, graceful shutdown will help here (I think the jitsi JVB images do this automatically).

Edit:
I forgot about the k8s operator topic:
I agree that scaling JVBs automatically is too complex for a simple HPA or something like it, so an operator is needed. I will look into building the operator once we get our other issues fixed. I can let you know once I start with this so we can toss some ideas around, if you want :slight_smile:

@damencho @Kroev Thank you for the information. I thought that the component approach was the only way to scale JVB, because that was the only one I could find a tutorial for.

Sorry, but I think we have a misunderstanding. My idea was to have multiple Kubernetes Deployments so that you can define a unique port for each one. So at minimum one JVB Deployment per region is scaled to 1, and none of them should ever be scaled larger than 1.

Ok interesting. How are you telling each videobridge the right port?

Yeah this would be great. Can you estimate when you will start working on it?

No. We are having multiple other issues with our Jitsi deployment, currently mainly with lip sync. But I am facing a Jitsi deployment which will potentially be used by four-figure numbers of users, so I have to find a way to scale the JVBs (and their nodes) out and back in based on the load on the system, to be cost-effective.

Ehh, yeah, I have one Deployment per JVB. Otherwise it is not possible to give each individual bridge its own port; they also need unique JVB_AUTH_USERs, which is why one will not get away with a simple HPA.

The region thing you are mentioning sounds a bit suspicious. If you want your users to use bridges geographically close to them based on their location, you have to shard your bridge pool with OCTO.

I don’t know if this helps any, but you don’t need unique JVB_AUTH_USERs.

Multiple bridges can log in to XMPP with the same account, each getting a unique ‘resource’ automatically.

Boris

Hi, @damencho Damian!

Where can I see those scripts?

Is there documentation about shards and how to configure them?

I installed Jitsi on my Kubernetes cluster (on-premise) behind a NAT.
I have successfully scaled to several JVBs, which all work, but during a conference the number of servers displayed is always equal to 1.
How can I get the correct number of servers to display?

You need to configure OCTO; the number of servers shown is the number used in a conference. With OCTO you can spread the load between bridges.

@adieye can you share more info about your setup?

Thank you for your reply.
I did not understand that this is in fact normal behavior - it shows me 1 server while I have 2, since the conference runs on a single bridge.
As for the OCTO configuration, I don’t think it’s possible in my use case, since I’m on an on-premise Kubernetes.

We need a deployment for each component. For the JVB a simple HPA will not work. You will need a Deployment per JVB and increment the listening port, for example (see the manifest sketch below):
JVB1: JVB_PORT: 10000/UDP
JVB2: JVB_PORT: 10001/UDP
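A hedged sketch of that “one Deployment per JVB” pattern, with a fixed UDP media port and a matching NodePort Service; names, labels, and values are assumptions, and jvb-2 would be an identical pair using 10001. Per Boris’s note above, JVB_AUTH_USER can be shared between bridges, so only the port has to differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvb-1
spec:
  replicas: 1                       # never scaled above 1
  selector:
    matchLabels:
      app: jvb-1
  template:
    metadata:
      labels:
        app: jvb-1
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:latest
          env:
            - name: JVB_PORT
              value: "10000"        # unique media port for this bridge
          ports:
            - containerPort: 10000
              protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: jvb-1
spec:
  type: NodePort
  selector:
    app: jvb-1
  ports:
    - name: media
      protocol: UDP
      port: 10000
      targetPort: 10000
      nodePort: 30000               # NodePorts live in 30000-32767 by default; the
                                    # bridge must also advertise the externally
                                    # reachable address (e.g. DOCKER_HOST_ADDRESS)
```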

For anyone who’s interested, I also recently deployed Jitsi on Kubernetes (with scalable JVBs) using StatefulSets. You can find the relevant scripts on

Hi! I see you have some selenium-hub stuff over there. Did you manage to get jibri to use selenium-hub for recording?

Hello @Dushmantha_Bandarana,

It is an interesting build.
Which Kubernetes provider do you use, and what machine specification do you currently run for the master Docker host? Did you develop your own Docker images from scratch, or do you use Jitsi’s Docker project from https://github.com/jitsi/docker-jitsi-meet ? Is it automatically scalable or preset with up to 6 JVBs?

Thank you

Hello @vkruoso

I use this Selenium grid to load-test the Jitsi Kubernetes deployment using https://github.com/jitsi/jitsi-meet-torture. Sorry, I don’t use Jibri in my Jitsi setup.
Thanks

Hello @Jnito

Which Kubernetes provider do you use, and what machine specification do you currently run for the master Docker host?

I had deployed this in GKE (Google Kubernetes Engine), but not anymore. Now it is in a local Kubernetes cluster with 4 nodes. It is a somewhat large cluster and has deployments other than Jitsi.

Did you develop your own Docker images from scratch?

No, I’m using https://github.com/jitsi/docker-jitsi-meet, but with one change to the jitsi/jvb image. If you have a look at the Dockerfile in https://github.com/DushmanthaBandaranayake/jitsi-kubernetes-scalable-service/blob/master/docker/Dockerfile, I’m replacing the original 10-config with a custom one. That is done in order to dynamically assign JVB_PORT whenever a new JVB is spawned.
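One plausible shape of such dynamic port assignment with a StatefulSet is to derive the port from the pod’s ordinal at startup. The sketch below is an assumption about the approach, not a copy of the linked custom 10-config, and /init is the s6 entrypoint used by the docker-jitsi-meet images (verify for your own image):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jvb
spec:
  serviceName: jvb
  replicas: 2
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:latest     # or a derived image with a custom 10-config
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command: ["/bin/bash", "-c"]
          args:
            # pod names end in an ordinal (jvb-0, jvb-1, ...); map it to a UDP port
            - |
              ORDINAL="${POD_NAME##*-}"
              export JVB_PORT=$((10000 + ORDINAL))
              exec /init   # hand off to the image's normal entrypoint
```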

Is it automatically scalable or preset with up to 6 JVBs?

You can create a Kubernetes Horizontal Pod Autoscaler on the StatefulSet to automatically scale up/down between 0 and 6 JVBs based on CPU utilization. If you need more JVBs you have to update service.yaml, because each JVB needs to expose its UDP port via a Kubernetes NodePort.
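A rough sketch of that autoscaler; the target name and CPU threshold are assumptions. Note that a stock HPA cannot scale to 0 (minReplicas must be at least 1) unless the HPAScaleToZero feature gate is enabled, and newer clusters would use apiVersion autoscaling/v2 instead:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: jvb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: jvb
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```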

Best Regards

Hmm, cool. I did not know that there was a stress test implemented.

Thank you @Dushmantha_Bandarana for the explanation.

I learned that a TURN server is an ongoing TODO item to be added to the original https://github.com/jitsi/docker-jitsi-meet. Do you think the Jitsi Docker setup can also work without a TURN server for users who operate behind a strict corporate firewall that only allows TCP connections, and for lowering the load on the JVB by keeping small calls peer-to-peer (that’s my understanding of the use of a TURN server for Jitsi)? What did you do, or what would you suggest I do, to overcome this, if anything? Thank you.