Cannot establish a meeting with peers behind restrictive firewall

I think I know what the original problem was: the host is behind a pfSense firewall/NAT and sits in the DMZ, so, despite NAT reflection being enabled, it cannot reach its own external IP (even though I added a rule that should allow it).
Is it a security risk if I leave the nginx configuration pointing to the internal IP?

Nope, as long as it has all the denied-peer-ip rules from the template: jitsi-meet/turnserver.conf at a6ad592d2502ec8f65e508ef8cf0805d29d77986 · jitsi/jitsi-meet · GitHub
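For reference, the relevant part of that template looks roughly like this (from memory, so check the linked file for the exact list):

no-tcp-relay
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255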

Yes, I have all that, but I had to add an allowed-peer-ip for its own internal address.
I’m still trying to “fix” the firewall but it’s proving to be difficult.

An allowed-peer-ip for the JVB address, or something else?

For the internal IP address of the JVB (it's all on the same host: nginx, jvb2, prosody, coturn).
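i.e. a line like this in turnserver.conf, 172.16.69.25 being the host's internal address:

allowed-peer-ip=172.16.69.25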

Now I think I fixed the pfSense firewall: I had to add a rule to the DMZ zone to allow the host to connect back to the firewall itself, then I had to change the NAT reflection rules from "pure NAT" to "NAT + proxy".
This way I could point nginx to the external IP of the turnserver, but I still have to leave the allowed-peer-ip for the internal address, so I'm not sure this bought me anything.

Then only the services which listen on the internal IP or 0.0.0.0 become publicly accessible. I don't think this is a big problem. It's not good for API endpoints, but maybe acceptable in your case.

Well, it is a problem, since that's most of them, including some I don't want exposed :slightly_frowning_face:
But I'm not sure my latest changes to the firewall fixed it: now I specify the external address, which loops back to the same host through the firewall, but I still have to leave the allowed-peer-ip for the internal address.

You don't need it. What does your turn config look like, is it like the template? If you have external-ip (not sure about the exact name) then you need to allow it, but if you don't, it should be fine, as coturn will use the public address of the bridge. You need to make sure coturn can connect to the bridge using its external address and port.
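If I remember the coturn syntax right, that directive looks something like:

external-ip=PUBLIC_IP
or, for a server behind 1:1 NAT:
external-ip=PUBLIC_IP/PRIVATE_IP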

Yes, it's like the template.

Ouch, I forgot to add it (external-ip, I mean) and the trickle ICE test reports the internal IP. Now I added the external-ip and it stopped working (I think it's because coturn cannot connect to the external address of the bridge in spite of all the rules; I suspect "NAT + proxy" doesn't work for UDP).

Edit: exactly, NAT + proxy doesn't work for UDP.

I thought about it. I think the guide and my messages are wrong. There is no need to use the public IP of TURN in the nginx config. The loopback IP or the interface IP should be OK.
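In the jitsi-meet nginx setup that means the stream module config (something like /etc/nginx/modules-enabled/60-jitsi-meet.conf on a Debian-style install; the exact file and upstream names may differ). A rough sketch of the relevant part, with TURN pointed at the loopback address:

upstream turn {
    server 127.0.0.1:5349;
}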

Allowing access to the TURN server through its internal IP is not a security issue, since the denied-peer-ip lines block access to the local networks.

I tried the following iptables rule on the Jitsi server to redirect outgoing UDP/10000 packets back to itself, and it works:

iptables -t nat -A OUTPUT -o eth0 -s INTERNAL_IP -d PUBLIC_IP -p udp --dport 10000 -j DNAT --to INTERNAL_IP:10000

That seemed promising, unfortunately it doesn’t work here.

Just in case it matters: in order to completely bypass the firewall and its NAT reflection, I replicated that rule for all involved coturn/jvb2 ports (TCP 443, TCP 5349, UDP 3478 and UDP 10000).
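In case it helps, those rules looked roughly like this (same INTERNAL_IP/PUBLIC_IP placeholders as above, eth0 being the outgoing interface):

iptables -t nat -A OUTPUT -o eth0 -s INTERNAL_IP -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to INTERNAL_IP:443
iptables -t nat -A OUTPUT -o eth0 -s INTERNAL_IP -d PUBLIC_IP -p tcp --dport 5349 -j DNAT --to INTERNAL_IP:5349
iptables -t nat -A OUTPUT -o eth0 -s INTERNAL_IP -d PUBLIC_IP -p udp --dport 3478 -j DNAT --to INTERNAL_IP:3478
iptables -t nat -A OUTPUT -o eth0 -s INTERNAL_IP -d PUBLIC_IP -p udp --dport 10000 -j DNAT --to INTERNAL_IP:10000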

I tried capturing traffic on port 10000 (using tcpdump -i any) on the jitsi host and I see no traffic whatsoever (I do see the traffic when the client has port 10000 unblocked).
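The capture was something along the lines of:

tcpdump -i any -n udp port 10000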

This is what I see in the verbose turnserver log when two peers enter the meeting (the client with port 10000 blocked is 10.0.9.3, which is nowhere to be seen; the internal address of the server is 172.16.69.25).
Note I inadvertently left the allowed-peer-ip=172.16.69.25 line in; it seems it's still trying to use the internal IP instead of the external one :thinking:

IPv4. tcp or tls connected to: 172.16.69.25:44868
IPv4. tcp or tls connected to: 172.16.69.25:44868
session 003000000000000009: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
session 003000000000000009: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
IPv4. Local relay addr: 172.16.69.25:51762
IPv4. Local relay addr: 172.16.69.25:51762
session 003000000000000009: new, realm=<turn-meet.wetron.es>, username=<1650442416>, lifetime=3600, cipher=TLS_AES_256_GCM_SHA384, method=TLSv1.3
session 003000000000000009: new, realm=<turn-meet.wetron.es>, username=<1650442416>, lifetime=3600, cipher=TLS_AES_256_GCM_SHA384, method=TLSv1.3
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet ALLOCATE processed, success
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet ALLOCATE processed, success
session 003000000000000009: peer 172.16.69.25 lifetime updated: 300
session 003000000000000009: peer 172.16.69.25 lifetime updated: 300
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet CREATE_PERMISSION processed, success
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet CREATE_PERMISSION processed, success
session 003000000000000009: peer 172.16.69.25 lifetime updated: 300
session 003000000000000009: peer 172.16.69.25 lifetime updated: 300
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet CREATE_PERMISSION processed, success
session 003000000000000009: realm <turn-meet.wetron.es> user <1650442416>: incoming packet CREATE_PERMISSION processed, success
IPv4. tcp or tls connected to: 172.16.69.25:44878
IPv4. tcp or tls connected to: 172.16.69.25:44878
session 003000000000000010: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
session 003000000000000010: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
IPv4. Local relay addr: 172.16.69.25:61901
IPv4. Local relay addr: 172.16.69.25:61901
session 003000000000000010: new, realm=<turn-meet.wetron.es>, username=<1650442843>, lifetime=3600, cipher=TLS_AES_256_GCM_SHA384, method=TLSv1.3
session 003000000000000010: new, realm=<turn-meet.wetron.es>, username=<1650442843>, lifetime=3600, cipher=TLS_AES_256_GCM_SHA384, method=TLSv1.3
session 003000000000000010: realm <turn-meet.wetron.es> user <1650442843>: incoming packet ALLOCATE processed, success
session 003000000000000010: realm <turn-meet.wetron.es> user <1650442843>: incoming packet ALLOCATE processed, success

then nothing, until one of the peers leaves the meeting

session 003000000000000010: TLS/TCP socket closed remotely 172.16.69.25:44878
session 003000000000000010: TLS/TCP socket closed remotely 172.16.69.25:44878
session 003000000000000010: usage: realm=<turn-meet.wetron.es>, username=<1650442843>, rp=2, rb=172, sp=2, sb=232
session 003000000000000010: usage: realm=<turn-meet.wetron.es>, username=<1650442843>, rp=2, rb=172, sp=2, sb=232
session 003000000000000010: peer usage: realm=<turn-meet.wetron.es>, username=<1650442843>, rp=0, rb=0, sp=0, sb=0
session 003000000000000010: peer usage: realm=<turn-meet.wetron.es>, username=<1650442843>, rp=0, rb=0, sp=0, sb=0
session 003000000000000010: closed (2nd stage), user <1650442843> realm <turn-meet.wetron.es> origin <>, local 172.16.69.25:5349, remote 172.16.69.25:44878, reason: TLS/TCP connection closed by client (callback)
session 003000000000000010: closed (2nd stage), user <1650442843> realm <turn-meet.wetron.es> origin <>, local 172.16.69.25:5349, remote 172.16.69.25:44878, reason: TLS/TCP connection closed by client (callback)
session 003000000000000010: SSL shutdown received, socket to be closed (local 172.16.69.25:5349, remote 172.16.69.25:44878)
session 003000000000000010: SSL shutdown received, socket to be closed (local 172.16.69.25:5349, remote 172.16.69.25:44878)
session 003000000000000010: delete: realm=<turn-meet.wetron.es>, username=<1650442843>
session 003000000000000010: delete: realm=<turn-meet.wetron.es>, username=<1650442843>

But it doesn't block access to the host itself, right? What worries me is the other ports that I don't want exposed.
Isn't there a way to limit coturn to relaying only to port 10000?

It seems that the only way to make it work is to leave out the external-ip line and use the internal IP in nginx. How does coturn determine the bridge address?
Also, since I'm using the web client, how can I debug the configuration using WebSockets? (The client doesn't even seem to try using WebSockets instead of the turn server.)

As a stopgap: if you specify no-tcp-relay in your coturn configuration, external systems can still send UDP packets to internal hosts, but it downgrades the threat from "intruder in the network, severe and immediate" to far-fetched, with no easy way to exploit it.

Don't do that. You only need to add a rule for UDP/10000, and use the internal IP for TURN in the Nginx module config.

Don't forget to test with 3 participants; otherwise the two peers communicate directly.

Clients try to connect using all possible IPs published by JMS. Is org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES enabled in /etc/jitsi/videobridge/sip-communicator.properties in your setup?
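For reference, the line usually looks something like this (the default value in recent installs; the exact host:port may differ in yours):

org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443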

I didn't test it, but if the client uses an IP from the internal networks, the denied-peer-ip lines may prevent it too.

If I use the internal IP in nginx it works with no iptables rule at all.

One of them has port 10000 blocked (with iptables -A OUTPUT -p udp --dport 10000 -j DROP)

No, it was commented out. Now I uncommented it (using my turn server address instead of the default meet-jit-si-turnrelay.jitsi.net).

Ouch, that could be it, but coturn didn't complain (as it did for 172.16.69.25).

Since the denied-peer-ip lines are enabled, the remote peer cannot access the local networks through TURN, so you may use the internal IP for TURN in the Nginx module config.

No. The iptables rule simulates this case.

It should be a remote STUN server. The local server doesn’t work. JVB uses it to learn its external IP.