Azure Kubernetes setup issue

Hi, we have a standalone Jitsi setup on Ubuntu 16 which is working fine. Now we want to set it up on Azure Kubernetes; I followed https://github.com/jitsi/docker-jitsi-meet/tree/master/examples/kubernetes for the setup, but we are facing a configuration issue.

We are getting a “bridge channel send: no opened channel” error. I searched the community and found that this issue is related to DOCKER_HOST_ADDRESS in the deployment file, but in Azure Kubernetes there is no host for the master (it runs as a managed service), so how can I set the Docker host address? I tried adding the public IP but still no luck.
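For reference, this is roughly how I am setting it at the moment; the deployment, namespace and container names below are the ones I believe the docker-jitsi-meet example uses, so adjust them if your copy differs:

```
# Sketch only: names come from the docker-jitsi-meet Kubernetes example
# (namespace "jitsi", deployment "jitsi", container "jvb"); adjust to your cluster.
kubectl set env deployment/jitsi -n jitsi -c jvb DOCKER_HOST_ADDRESS=<public-ip>
kubectl rollout status deployment/jitsi -n jitsi
```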

Also, are there any setup articles available for Azure Kubernetes?

I am running my cluster on Oracle Cloud, so I am not sure it behaves the same way as Azure, but the Kubernetes dashboard shows the cluster IP for the service. I copied the cluster IP from there into deployment.yaml and applied the deployment again.
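In case it helps, you can also look it up with kubectl instead of the dashboard (the namespace is the one from the example; yours may differ):

```
# Shows the cluster IP and ports of the services in the jitsi namespace
kubectl get svc -n jitsi -o wide
```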

Do you see the cluster IP in the Kubernetes dashboard?

I can see the cluster IP in the dashboard and I tried that too, but it is not working. Is it working for you?

This is for AKS (Azure Kubernetes Service). Can you share your deployment file? I will cross-check it in case I missed something.

I am having the same issue, so let me elaborate:
The console shows the “bridge channel send: no opened channel” error, and audio and video are blocked as soon as more than two users are in the same room. The ports have been exposed through the Azure load balancer, and UDP forwarding is configured on nginx-ingress. Any idea what might be missing from the config?
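For context, the UDP forwarding part looks roughly like this on my side. It assumes the stock ingress-nginx controller is started with --udp-services-configmap and that the JVB UDP service is called jvb-udp in the jitsi namespace; those names are from my setup, not from the guide:

```
# Map UDP port 30300 on the ingress controller to the JVB UDP service
# (format is "<port>": "<namespace>/<service>:<port>")
kubectl patch configmap udp-services -n ingress-nginx \
  --type merge -p '{"data":{"30300":"jitsi/jvb-udp:30300"}}'
```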

OK, I found the answer myself after a while. For Azure users, the steps need a bit more elaboration:

  1. For DOCKER_HOST_ADDRESS, use the IP address of your load balancer. Azure sets up a load balancer for the nodes, and its public IP address should be the same one your ingress uses.

  2. In the Azure portal, go to the resource group where your Kubernetes cluster is deployed and find the load balancer. Create a load-balancing rule there for the UDP port you will be exposing (for example, 30300 if you follow the defaults). This lets the load balancer listen for requests targeted at that port and route them to the proper machine (see the CLI sketch after this list).

  3. Finally, in that same resource group, go to the network security group that protects your Virtual Machine Scale Set (VMSS) and create an inbound security rule that allows traffic to the UDP port you selected (again, 30300 if you are following the defaults); this is also shown in the sketch below.

  4. Deploy your service.
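In case anyone prefers the CLI over the portal, here is a rough sketch of steps 2 and 3 with the Azure CLI. The resource group, load balancer name, NSG name and port are the ones from my cluster (AKS keeps these resources in the auto-created MC_* node resource group and names the load balancer "kubernetes"), so double-check yours:

```
# Step 2: load-balancing rule for the JVB UDP node port (30300 by default).
# You may also need --frontend-ip-name / --backend-pool-name if the load
# balancer has more than one of either.
az network lb rule create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --lb-name kubernetes \
  --name jvb-udp \
  --protocol Udp \
  --frontend-port 30300 \
  --backend-port 30300

# Step 3: inbound rule on the network security group in front of the VMSS
# (the NSG name here is a placeholder; look it up in the MC_* resource group).
az network nsg rule create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --nsg-name aks-agentpool-nsg \
  --name allow-jvb-udp \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Udp \
  --destination-port-ranges 30300
```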

I’m having the same issue, and I think I’ve got it configured the way you described here.
What did you do for the load balancer health probe? Did you create another service and expose it over TCP?
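To make the question concrete, I mean something along these lines, since Azure load balancers cannot probe UDP directly (the names and port below are placeholders, not from your setup):

```
# Sketch of a TCP health probe on the AKS-managed load balancer
az network lb probe create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --lb-name kubernetes \
  --name jvb-health \
  --protocol Tcp \
  --port 30301
```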

Also, do you know of any way to test whether UDP requests are getting through, to verify they are being balanced correctly?
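For example, would something like this be a valid check? (The IP and interface are placeholders.)

```
# Fire a test datagram at the load balancer's public IP on the JVB node port...
echo "test" | nc -u -w1 <load-balancer-public-ip> 30300
# ...and watch for it arriving on one of the nodes (or a privileged debug pod)
sudo tcpdump -ni eth0 udp port 30300
```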