Oracle Cloud Infrastructure (OCI) with Kubernetes (OKE): Network Load Balancing with a UDP port/listener

Hello!

Does anyone use Oracle Cloud Infrastructure (OCI) with Kubernetes (OKE) to serve Jitsi?

I have already made Jitsi work a few times on different clouds (AWS, OCI, other providers) with different tech stacks (Docker, Swarm), but I have never used a load balancer and Kubernetes together with Jitsi, and I have been struggling with it for the last few days. Every piece of documentation I found uses UDP/10000, but I could not figure out how to create a Network Load Balancer in OCI Kubernetes (OKE). I use an Ingress Controller for the load balancer, but it creates a Classic Load Balancer, which does not support the UDP protocol. (I have also started to look at Traefik.)
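From the OCI docs it looks like an OKE LoadBalancer Service can be switched to a Network Load Balancer (which does support UDP) via an annotation; a minimal, unverified sketch (the jvb-udp name and app: jvb selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: jvb-udp                # placeholder Service name
  annotations:
    # per the OCI docs, this makes OKE provision a Network LB instead of a Classic one
    oci.oraclecloud.com/load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: jvb                   # assumed pod label
  ports:
    - name: media
      protocol: UDP
      port: 10000
      targetPort: 10000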

I used these env entries to make TCP work, but I cannot fully disable UDP; I want clients to always try TCP.

  - name: JVB_PORT
    value: "30300"
  - name: JVB_TCP_PORT
    value: "30301"
  - name: JVB_TCP_MAPPED_PORT
    value: "30301"

I use:

  • OCI
  • Kubernetes
  • Node pool

So my questions are the following (for those who use OCI with Kubernetes (OKE)):

  • Which load balancer type (Classic or Network)?
  • NGINX Ingress Controller or Traefik (or something else)?
  • Should I worry about fully disabling the UDP listener?
  • How do I set up the listener port on the load balancer: via Kubernetes or manually? (I was only able to add port 30301 to the listeners manually via the NGINX ingress.)
  • If I create a Network Load Balancer manually, I cannot attach it to Kubernetes to be managed properly; I have to add/remove backend instances on the NLB by hand.

With this manual configuration I already made it work, but I have two problems:
1. It is a manual solution…
2. When a meeting starts with 2-3 participants it works fine (cameras on), but after about 20 minutes the video sometimes disappears and comes back a minute later. I used to see this kind of behaviour when there were not enough resources. (For testing I now use 1 CPU and 15 GB of memory; Jitsi worked perfectly with these resources in plain Docker and in Swarm mode. Monitoring the node during a meeting shows about 30% CPU and 10% memory, so it does not look like a resource issue.)

Thank you for helping

What I used as a base:

I also checked:

But the problem is that all of them use the UDP port with a load balancer.

You might consider using hostPort for the udp/10000 listener and configuring JVB’s NAT mapping with the private and public IP of the node (you should be able to pass the node’s IP down as an env var using valueFrom: fieldRef: in the pod spec). This way you don’t need an LB just to expose the port publicly. Since there would have to be a 1:1 mapping of LB listeners to JVBs, it’s sort of a waste of an LB anyway.
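For illustration, the pod spec fragment this suggests might look roughly like this (a sketch assuming the docker-jitsi-meet jvb image; status.hostIP is the node’s primary address, so it is only reachable by clients when the node sits on a public subnet):

containers:
  - name: jvb
    image: jitsi/jvb:stable        # assumed image tag
    ports:
      - containerPort: 10000
        hostPort: 10000            # bind udp/10000 directly on the node
        protocol: UDP
    env:
      - name: DOCKER_HOST_ADDRESS      # NAT-mapping address JVB advertises to clients
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP   # the node's IP via the downward API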

Thank you for your quick reply @jbg
I do not really understand what you mean.
I attached an image of the current state. (It is still in progress and changing.)

For the last two days I tried to make it work with the Traefik proxy, but for some reason I was not able to generate a certificate with Traefik, cert-manager and the load balancer, so I gave up on that route and came back to the NGINX ingress controller. Now I am customizing the config files and authentication, and after that I will check JVB scaling with the current config.

Load balancer open ports: TCP 80, 443, 30301
JVB Service NodePort: 30301/TCP and 30300/UDP
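Spelled out, that Service is roughly the following (a reconstruction; the app: jvb selector is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: jvb-tcp-udp
spec:
  type: NodePort
  selector:
    app: jvb              # assumed pod label
  ports:
    - name: tcp
      protocol: TCP
      port: 30301
      targetPort: 30301
      nodePort: 30301
    - name: udp
      protocol: UDP
      port: 30300
      targetPort: 30300
      nodePort: 30300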
Secret:
kubectl create secret generic jitsi-config -n jitsi \
  --from-literal=PUBLIC_URL='https://XXXXXXX' \
  --from-literal=DOCKER_HOST_ADDRESS=${LOAD_BALANCER_IP} \
  --from-literal=JVB_TCP_HARVESTER_DISABLED="false" \
  --from-literal=JICOFO_COMPONENT_SECRET="$(openssl rand -hex 16)" \
  --from-literal=JICOFO_AUTH_PASSWORD="$(openssl rand -hex 16)" \
  --from-literal=JVB_AUTH_PASSWORD="$(openssl rand -hex 16)"

So for DOCKER_HOST_ADDRESS I use the load balancer IP, and the PUBLIC_URL DNS A record points to the LOAD_BALANCER_IP. In my last deployment, LOAD_BALANCER_IP = 132.226.129.236 and XXXXXXX => A: 132.226.129.236.
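The containers then pick these values up from the Secret; a minimal sketch of the relevant fragment (image name assumed):

containers:
  - name: jvb
    image: jitsi/jvb:stable
    envFrom:
      - secretRef:
          name: jitsi-config   # injects PUBLIC_URL, DOCKER_HOST_ADDRESS, etc. as env vars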

An Ingress is suitable for HTTP services (the static files, if you are serving them from inside the cluster, and the XMPP WebSocket or BOSH endpoint), but it doesn’t make much sense to try to funnel JVB traffic through it.

It’s also neither necessary nor sensible to put JVB behind a load balancer, because Jicofo assigns a particular JVB to each participant, and they need to connect to that specific JVB. So if you deploy the JVBs behind an LB, you would have to have one LB listener per JVB anyway, so the LB is not doing anything. In your case you seem to have a single jvb-tcp-udp NodePort Service. This will not work as soon as you have more than one JVB, because the load balancer may direct the traffic to a different JVB than the one Jicofo allocated channels on.

The easiest way to handle JVB traffic within a k8s cluster is to use hostPort for the JVB pods. That way each JVB has a unique IP, so clients can connect to the correct one assigned to them by Jicofo, and it ensures that return packets from the JVB also come from that IP (which can be a problem when exposing JVB via a Service). You can learn about hostPort in the Kubernetes documentation.
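For example, running the JVBs as a DaemonSet guarantees at most one JVB per node, so each hostPort binding is unambiguous; a sketch with assumed names:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jvb
spec:
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb:stable     # assumed image
          ports:
            - containerPort: 10000
              hostPort: 10000         # each JVB is reachable on its own node's IP
              protocol: UDP
          envFrom:
            - secretRef:
                name: jitsi-config    # the jitsi-config Secret from above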


Thank you for your reply @jbg

To be honest, I had to read what you wrote about 20 times :slight_smile:
I was confused and had mixed up hostPort and NodePort.
Now it is starting to make sense to me.
Thank you for the guidance; I will rework the services to use hostPort and get back to you once it’s done at the weekend.

Have a nice day and weekend

Hi @jbg,

It looks like it is finally solved. :slight_smile:
Thank you for the guide.

I had two issues:

1. I used my web server Terraform cluster script, where all the node pools are in a private subnet, so I had to change the subnet to a public one. After that the nodes got public IPs and the JVB was able to connect.
2. Your guidance was also a big help; it pointed me in the right direction and showed me how to configure things.

Now I can start looking at autoscaling the JVBs, and after that Octo.
I hope that after this "nonsense" everything is going to work :slight_smile:
