Webrtc-internals only displays local IPs for local candidates; remote candidates are correct

Hi,

We have installed Jitsi on our private AKS and are using an Nginx ingress to expose Jitsi to the internet. When we make a connection and inspect the webrtc-internals, we see that all local candidates use local IP addresses and not the proper public IP.

https://xxxxxxxx/ThomasIsStillFunny, { iceServers: [turn:coturn-xxx.xxx.xxx.net?transport=tcp], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 }

ICE connection state: new => checking => connected => disconnected
Connection state: new => connecting => connected => disconnected => failed => closed
Signaling state: new => have-remote-offer => stable => have-remote-offer => stable => have-remote-offer => stable
ICE Candidate pair: 10.244.3.4.5:525405 <=> "our public IP":10000

For some reason, our web client cannot resolve its public IP, which we believe is because it does not have any STUN servers configured. When we use a STUN/TURN test page locally, it returns the proper public IP.
We followed the installation guides, read all the config templates, and have set the STUN servers in our config. Is there a way to
a) test that the config is properly injected into our pods and
b) validate that we have the STUN server set in the right places (see the browser-side sketch below)?
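
A minimal browser-side sketch of such a check for point b) could look like this; the STUN URL is a placeholder, not our actual server:

```typescript
// Gather ICE candidates against a single STUN server and log what comes back.
// A "srflx" candidate means the STUN server answered and reported our public IP.
// The STUN URL is a placeholder; substitute the one from the Jitsi config.
async function checkStun(stunUrl: string = "stun:stun.example.org:3478"): Promise<void> {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: [stunUrl] }] });
  pc.createDataChannel("probe"); // without a channel or track the browser gathers nothing
  pc.onicecandidate = (event) => {
    if (!event.candidate) {      // null candidate => gathering finished
      pc.close();
      return;
    }
    const c = event.candidate;
    console.log(`type=${c.type} address=${c.address ?? "?"} port=${c.port}`);
  };
  await pc.setLocalDescription(await pc.createOffer());
}

checkStun();
```

If no srflx candidate shows up here, the client never learned its public address, regardless of what the server-side config says.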

We actually figured out that one of our Nginx servers was not forwarding the client IPs. We managed to fix that. What is the recommended way to filter out proxy IPs in Jitsi? We know that this can be done in Prosody. Would we need to do this in other places as well?

Check the JVB configuration for “harvesters”.

We had set the public NAT harvester address to 50.xx.xxx.xxx. The internal IP of the NAT harvester is the IP of the Kubernetes cluster.

But shouldn’t I see “my” public IP as the local candidate in the connections? I only see the IP of our Nginx ingress.

If I understand correctly, there is no problem on the server side. The client (browser) cannot resolve its external IP.

There is a STUN configuration for a standalone Jitsi setup.

But there is no STUN configuration for the Docker setup.

It may be related to this.

Thanks @emrah! So what you are saying is that the WebRTC client also needs a STUN server, even in a JVB-only setting, so that it can discover and publish its own public IP to the JVB? This would mean that we need to change prosody.cfg.lua so that it publishes a STUN server to the client. If our understanding is correct, we will try this.
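
For clarity, a sketch of what we would then expect the effective client config in webrtc-internals to contain, with a STUN entry alongside the TURN server that is already there. The URLs are placeholders, and the TURN username/credential that get injected at runtime are omitted:

```typescript
// Expected shape of the RTCPeerConnection config once a STUN server is published
// to the client in addition to the TURN server already visible in webrtc-internals.
// All URLs are placeholders; TURN username/credential (injected at runtime) omitted.
const expectedConfig: RTCConfiguration = {
  iceServers: [
    { urls: ["stun:stun.example.net:443"] },                 // lets the browser learn its srflx (public) address
    { urls: ["turn:coturn-xxx.xxx.xxx.net?transport=tcp"] }, // relay fallback, as in the current config
  ],
  iceTransportPolicy: "all",
  bundlePolicy: "max-bundle",
  rtcpMuxPolicy: "require",
};
```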

Just to reiterate, what we currently see in the JVB logs is the following:

local_ufrag=2d4jl1h1qpdqtb ufrag=2d4jl1h1qpdqtb] Agent.triggerCheck#1737: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7962116751041232895):
LocalCandidate=candidate:1 1 udp 2130706431 “POD Ip” 10000 typ host
RemoteCandidate=candidate:10003 1 udp 1853824767 “local Ingress Ip” 34689 typ prflx
and a little bit later, we see:
Nomination confirmed for pair: “Public Ip”:10000/udp/srflx → “local Ingress Ip”*:59549/udp/prflx (stream-28493e18.RTP).

and then

 Selected pair for stream stream-28493e18.RTP: **"Public IP"**:10000/udp/srflx -> "local Ingress Ip":59549/udp/prflx (stream-28493e18.RTP)

In the WebRTC console, we then see that the remote candidate is the public IP of our Jitsi instance and the local candidate is the “local Ingress IP”, which is 10.XXX.XXX.XXX. So where is this 10.XXX.XXX.XXX address coming from? It is an internal IP address known only to our Jitsi instance, and hence should not be known to the client.
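
One way to see exactly where each address in that pair comes from is to read the nominated candidate pair straight from getStats() on the page’s RTCPeerConnection (grabbed from the dev console): the candidateType field distinguishes addresses the browser gathered itself (host, srflx) from ones it only learned through incoming STUN checks (prflx). A rough sketch, assuming you already have a handle on the connection:

```typescript
// Dump the nominated ICE candidate pair of an existing RTCPeerConnection.
// candidateType tells you where each address came from: "host" = gathered locally,
// "srflx" = reported by a STUN/TURN server, "prflx" = learned from incoming checks.
async function dumpSelectedPair(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report: any) => {
    if (report.type === "candidate-pair" && report.nominated && report.state === "succeeded") {
      const local: any = stats.get(report.localCandidateId);
      const remote: any = stats.get(report.remoteCandidateId);
      console.log("local :", local?.candidateType, local?.address, local?.port);
      console.log("remote:", remote?.candidateType, remote?.address, remote?.port);
    }
  });
}
```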