Need help with Jitsi and Kubernetes setup

I am currently working on a personal project to better understand Kubernetes and Jitsi, and I could use some help with my setup. I am not a member of any organization, so I am turning to the Jitsi community for assistance.

I have attached my YAML files and would appreciate it if someone could review them and let me know if any configuration elements are missing. I am running this setup on my local machine, using minikube on Ubuntu inside a VMware Workstation VM. I have noticed that multiple participants can join and engage in chat via Firefox and Chrome, but they are unable to see or hear each other.

I am new to both Jitsi and Kubernetes, so please excuse any perceived lack of expertise. Any advice or suggestions would be greatly appreciated. Thank you in advance for your help.

YAML files.txt (10.9 KB)

I have noticed that multiple participants can join and engage in chat via Firefox and Chrome, but they are unable to see or hear each other.

This implies that signalling is working but that connectivity between the participants and JVB is broken.

The NodePort services that you have created for JVB correspond to ports that JVB does not use by default. Unless you have configured JVB to use a multi-port harvester on that range, these services are not doing anything. By default, JVB uses only udp/10000 for both audio and video.

You need to ensure that JVB presents the same IP address in its ICE candidates that participants connect to, and that its outbound traffic arrives at participants with that same source address. In k8s, this generally means you are forced to use hostPort for its udp/10000 port, or to run the pod with hostNetwork. If you choose hostPort, also note that you have configured the port on the JVB deployment as protocol: TCP (the default) when it should be UDP. That isn’t a problem right now, because containerPort is mostly informational unless hostPort is used (all ports on a Pod are reachable on the Pod IP regardless of what is listed in ports), but if you switch to hostPort you will need to correct it.
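
For illustration, here is a minimal sketch of a JVB Deployment using hostPort. The names and the jitsi/jvb image are assumptions on my part, not taken from your YAML; adapt it to what you already have:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb        # assumption; use whatever image your YAML already uses
          ports:
            # JVB's single-port media harvester: protocol must be UDP, not the default TCP,
            # and hostPort exposes it directly on the node's IP.
            - containerPort: 10000
              hostPort: 10000
              protocol: UDP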

If you really want to use Service to proxy the traffic, you must set up a manual mapping to tell JVB what its “external” address is — and the address translation implemented by your network (k8s and whatever is outside k8s) must be symmetric, that is, outbound traffic from the JVB pod must source from the same IP that peers are signalled to send inbound traffic to.
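
If you do go the Service route, the sketch below shows roughly what a UDP NodePort Service for udp/10000 could look like (the names are assumptions). Note that the default NodePort range is 30000-32767, so you would either have to extend that range on the API server (--service-node-port-range) or move JVB to a port inside the range, and the nodePort must match the port JVB advertises in its candidates:

apiVersion: v1
kind: Service
metadata:
  name: jvb-media            # assumed name
spec:
  type: NodePort
  selector:
    app: jvb                 # must match the labels on your JVB pod
  ports:
    - protocol: UDP          # media traffic is UDP, not TCP
      port: 10000
      targetPort: 10000      # JVB's harvester port inside the pod
      nodePort: 10000        # only accepted if the node port range includes 10000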

This can be very difficult or impossible to achieve in some cloud provider environments due to their NAT implementations, forcing the use of hostPort/hostNetwork, but it is probably achievable in bare metal k8s / minikube since you have more control over the networking.


I have implemented several modifications based on your recommendations. For now I am testing only a single instance of JVB; once that is successful, I will configure the StatefulSet accordingly. Specifically, I have made the following adjustments:

I have altered the default port for jvb’s audio and video from 10000 to 32567 by modifying the “org.jitsi.videobridge.TCP_HARVESTER_PORT” setting in the sip.communicator.properties file.

I have also exposed UDP port 32567 on the JVB.

I have created a “my-service” Service of type NodePort for JVB, with port and targetPort set to 32567 and the service exposed on nodePort 32567.

To accommodate ICE, I have included the following settings in the sip.communicator.properties file (assuming my minikube IP is 192.168.49.2 and the my-service cluster IP is 10.105.90.130):

org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=192.168.49.2
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.105.90.130

sip.communicator.properties after configuration changes:
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.localhost
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=@7DYxXJp
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.localhost
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=f410798a-74ae-495d-b255-cd510410aefa
org.jitsi.videobridge.TCP_HARVESTER_PORT=32567
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=192.168.49.2
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.105.90.130

I have restarted both the jvb service and the jicofo service.
Despite these alterations, the configuration continues to fail, as evidenced by the attached console errors. Please advise whether the modifications I made based on your input are appropriate, or whether there are additional steps I should take to resolve the issue. Thank you in advance for your help.

consoleErrors.txt (41.7 KB)

I don’t think this property (TCP_HARVESTER_PORT) exists anymore, and TCP is disabled by default in the bridge and not recommended anyway.

Setting NAT_HARVESTER_PUBLIC_ADDRESS to 192.168.49.2 means all your clients must be on the same 192.168… network; no client from the internet or from other networks will be able to send media to the bridge.

The public address is the IP that is given to clients and to which they try to send media for the bridge. It should be a public address, and there should be port forwarding so that the packets reach the JVB process on udp/10000.

To accommodate ICE, I have included the following settings in the sip.communicator.properties file (assuming my minikube IP is 192.168.49.2 and the my-service cluster IP is 10.105.90.130):

org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=192.168.49.2
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.105.90.130

The local address is what JVB will bind to, so it must be the pod IP, not the service IP. The public address should be the address of the node that JVB is running on.
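
One way to avoid hardcoding those two values is the Kubernetes Downward API, which can hand the pod IP and the node IP to the container as environment variables; your entrypoint or config template could then write them into NAT_HARVESTER_LOCAL_ADDRESS and NAT_HARVESTER_PUBLIC_ADDRESS. A sketch of the env section of the JVB container (the variable names are my own, not anything the image expects):

          env:
            - name: JVB_LOCAL_ADDRESS        # intended for NAT_HARVESTER_LOCAL_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP    # the pod's own IP
            - name: JVB_PUBLIC_ADDRESS       # intended for NAT_HARVESTER_PUBLIC_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP   # the IP of the node the pod is scheduled on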

I have some doubts that you will be able to make this setup with NodePort services work correctly, because outbound traffic from JVB will be NATed to random ports on the node, rather than to the port that was negotiated in ICE, breaking the needed symmetry.

That’s why I recommend exposing JVB’s port (udp/10000 by default, which I would recommend keeping for simplicity) as a hostPort or using hostNetwork to expose the pod on the node’s IP, which you can then ensure is either a public IP or NATed symmetrically 1-1 from a public IP.
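
If you prefer hostNetwork over hostPort, a sketch of the relevant pod-spec fields follows (everything else stays as in your existing deployment; the container name and image are assumptions). ClusterFirstWithHostNet keeps cluster DNS working so the pod can still resolve in-cluster service names:

    spec:
      hostNetwork: true                    # pod shares the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep resolving cluster service names
      containers:
        - name: jvb
          image: jitsi/jvb                 # assumption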
