Cannot establish a meeting with peers behind restrictive firewall

Hello,
as per the subject: with our self-hosted Jitsi Meet server, we cannot establish audio/video connectivity with partners behind a corporate firewall.
I found this article, so I was going to follow the instructions there on how to set up coturn on port 443 on the same server.
But then, in the template config file /usr/share/jitsi-meet-turnserver/jitsi-meet.conf (from the jitsi-meet-turnserver debian package) I see this snippet:

# Multiplexing based on ALPN is DEPRECATED. ALPN does not play well with websockets on some browsers and reverse proxies.
# To migrate away from using it read: https://jitsi.org/multiplexing-to-bridge-ws-howto
# This file will be removed at some point and if deployment is still using it, will break.

so I followed the instructions there (i.e. here)

Jitsi was already configured to use port 443, so it was just a matter of

  1. changing the tls-listening-port in the turnserver configuration
  2. adjusting the turncredentials setting in prosody (it was already there, I just commented out the “stun” and “turn” entries)
  3. adding the websocket location in nginx
  4. enabling the websockets in jitsi videobridge
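For reference, the four changes boil down to fragments like these (a hedged sketch following the multiplexing-to-bridge-ws howto, with placeholder FQDNs; exact paths and values may differ per deployment):

```
# 1. /etc/turnserver.conf — coturn's own TLS listener
#    (nginx will forward the TURN SNI on 443 here)
tls-listening-port=5349

# 2. prosody vhost — advertise only turns on 443
turncredentials = {
    { type = "turns", host = "turn-meet.example.com", port = "443", transport = "tcp" }
};

# 3. nginx site config — JVB websocket endpoint
location ~ ^/colibri-ws/([a-zA-Z0-9-\.]+)/(.*) {
    proxy_pass http://127.0.0.1:9090/colibri-ws/$1/$2$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}

# 4. /etc/jitsi/videobridge/jvb.conf — enable websockets
videobridge {
    websockets {
        enabled = true
        domain = "meet.example.com:443"
        tls = true
        server-id = "default-id"
    }
}
```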

Then I tried a “normal” connection: everything works as before (Wireshark shows that the clients use UDP port 10000).
I then blocked UDP port 10000 on one of the clients (iptables -A OUTPUT -p udp --dport 10000 -j DROP) and audio/video didn’t work.
(Both clients are Firefox, one under Linux, the other under Android.)
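For anyone wanting to reproduce the test, the block/unblock on the Linux client is just (sketch; needs root):

```
# simulate a restrictive firewall: drop outgoing media to the JVB
iptables -A OUTPUT -p udp --dport 10000 -j DROP
# ... join the meeting, check whether A/V flows via TURN ...
# undo the rule afterwards
iptables -D OUTPUT -p udp --dport 10000 -j DROP
```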

I’m using jitsi-meet from the Debian repository at download.jitsi.org, on Debian 11:

$ LC_ALL=C dpkg -l prosody coturn jitsi-meet jitsi-videobridge2
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name               Version             Architecture Description
+++-==================-===================-============-=================================================
ii  coturn             4.5.2-3             amd64        TURN and STUN server for VoIP
ii  jitsi-meet         2.0.7001-1          all          WebRTC JavaScript video conferences
ii  jitsi-videobridge2 2.1-634-gff8609ad-1 all          WebRTC compatible Selective Forwarding Unit (SFU)
ii  prosody            0.11.9-2+deb11u2    amd64        Lightweight Jabber/XMPP server

You need this config. This is a new file, not based on ALPN.

That’s what I initially found (as I said), but then the FAQ says:

For a while, we were using nginx multiplexing to serve jitsi-meet content on https (port 443) and use the same port for running a TURN server. This proved to be problematic (you cannot use websockets with this setup) and we moved away from it. Here is how to remove multiplexing and enable websockets in favor of WebRTC Data Channels.

so, which one is the correct approach?

This comment is for the ALPN-based config. The new config doesn’t share the same FQDN for Jitsi and TURN; it uses a different FQDN for each.

Thank you, maybe it’s just me but I found it misleading.

Now, for that configuration, do I still need to advertise a normal turn (not turns) server in the prosody config? i.e.

turncredentials_secret = "xxxxxxx";

turncredentials = {
    { type = "stun", host = "meet.my.dom.ain", port = "???" },
    { type = "turn", host = "meet.my.dom.ain", port = "???", transport = "udp" },
    { type = "turns", host = "turn-meet.my.dom.ain", port = "443", transport = "tcp" }
};

Or is it not needed if I use XEP-0215/mod_external_services?

Participants behind the restricted firewall need only TURNS.

But do I have to include the above settings (turncredentials_secret/turncredentials) or is it enough to define them in the mod_external_services configuration?

AFAIK these params are deprecated; in the new version they are external_service_secret and external_services.

The secret must match the one in turnserver.conf (static-auth-secret).
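To make that relationship concrete: in coturn’s use-auth-secret mode, clients present an ephemeral username (an expiry timestamp) and a password derived from the shared secret via HMAC-SHA1, and prosody computes the same pair from turncredentials_secret / external_service_secret. A minimal sketch of the derivation (the function name is mine):

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(secret: str, ttl: int = 86400) -> tuple[str, str]:
    """Derive ephemeral TURN credentials for coturn's use-auth-secret mode.

    The username is an expiry timestamp; the password is
    base64(HMAC-SHA1(secret, username)). Prosody and coturn must compute
    this from the same shared secret, or authentication fails.
    """
    username = str(int(time.time()) + ttl)
    digest = hmac.new(secret.encode(), username.encode(), hashlib.sha1).digest()
    return username, base64.b64encode(digest).decode()
```

(The first “error 401: Unauthorized” per session in the coturn log is normal, by the way: TURN authentication always starts with a challenge, and the ALLOCATE is then retried with credentials.)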

OK, I’m still failing to make it work, so I’m taking it step by step; the first step is correctly configuring coturn.

Following the tutorial here, I’m using trickle-ice to test the TURN server.
If I use turns on port 5349 (i.e. direct to coturn), trickle-ice gives me an rtp and an rtcp relay for my IP address:

Time   Component  Type   Foundation  Protocol  Address       Port   Priority         Mid  MLine Index  Username Fragment
0.188  rtp        relay  2           udp       2.139.210.92  49708  0 | 32543 | 255  0    0            9d79cb5f
0.194  rtcp       relay  2           udp       2.139.210.92  52918  0 | 32543 | 254  0    0            9d79cb5f

but if I use port 443 (i.e. proxied by nginx) I get nothing.
If I look at the coturn log, the only difference is that it sees 127.0.0.1 as the remote address (since the request comes through nginx, I think), but otherwise the log is the same up to this point:

IPv4. tcp or tls connected to: 127.0.0.1:40316
IPv4. tcp or tls connected to: 127.0.0.1:40316
IPv4. tcp or tls connected to: 127.0.0.1:40318
IPv4. tcp or tls connected to: 127.0.0.1:40318
session 001000000000000009: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
session 001000000000000009: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
session 000000000000000015: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
session 000000000000000015: realm <turn-meet.wetron.es> user <>: incoming packet message processed, error 401: Unauthorized
IPv4. Local relay addr: 172.16.69.25:53790
IPv4. Local relay addr: 172.16.69.25:53790
session 001000000000000009: new, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=3600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 001000000000000009: new, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=3600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 001000000000000009: realm <turn-meet.wetron.es> user <1649931750>: incoming packet ALLOCATE processed, success
session 001000000000000009: realm <turn-meet.wetron.es> user <1649931750>: incoming packet ALLOCATE processed, success
IPv4. Local relay addr: 172.16.69.25:54660
IPv4. Local relay addr: 172.16.69.25:54660
session 000000000000000015: new, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=3600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 000000000000000015: new, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=3600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 000000000000000015: realm <turn-meet.wetron.es> user <1649931750>: incoming packet ALLOCATE processed, success
session 000000000000000015: realm <turn-meet.wetron.es> user <1649931750>: incoming packet ALLOCATE processed, success

Then, on port 5349, I see this immediately:

session 001000000000000008: TLS/TCP socket closed remotely 79.116.37.233:36827
session 001000000000000008: TLS/TCP socket closed remotely 79.116.37.233:36827
session 001000000000000008: usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=2, rb=172, sp=2, sb=232
session 001000000000000008: usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=2, rb=172, sp=2, sb=232
session 001000000000000008: peer usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=0, rb=0, sp=0, sb=0
session 001000000000000008: peer usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=0, rb=0, sp=0, sb=0
session 001000000000000008: closed (2nd stage), user <1649931750> realm <turn-meet.wetron.es> origin <>, local 0.0.0.0:5349, remote 79.116.37.233:36827, reason: TLS/TCP connectio
session 001000000000000008: closed (2nd stage), user <1649931750> realm <turn-meet.wetron.es> origin <>, local 0.0.0.0:5349, remote 79.116.37.233:36827, reason: TLS/TCP connectio
session 001000000000000008: SSL shutdown received, socket to be closed (local 0.0.0.0:5349, remote 79.116.37.233:36827)
session 001000000000000008: SSL shutdown received, socket to be closed (local 0.0.0.0:5349, remote 79.116.37.233:36827)
session 001000000000000008: delete: realm=<turn-meet.wetron.es>, username=<1649931750>
session 001000000000000008: delete: realm=<turn-meet.wetron.es>, username=<1649931750>
session 000000000000000014: refreshed, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=0, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 000000000000000014: refreshed, realm=<turn-meet.wetron.es>, username=<1649931750>, lifetime=0, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
session 000000000000000014: realm <turn-meet.wetron.es> user <1649931750>: incoming packet REFRESH processed, success
session 000000000000000014: realm <turn-meet.wetron.es> user <1649931750>: incoming packet REFRESH processed, success
session 000000000000000014: TLS/TCP socket closed remotely 79.116.37.233:44581
session 000000000000000014: TLS/TCP socket closed remotely 79.116.37.233:44581
session 000000000000000014: usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=3, rb=292, sp=3, sb=320
session 000000000000000014: usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=3, rb=292, sp=3, sb=320
session 000000000000000014: peer usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=0, rb=0, sp=0, sb=0
session 000000000000000014: peer usage: realm=<turn-meet.wetron.es>, username=<1649931750>, rp=0, rb=0, sp=0, sb=0
session 000000000000000014: closed (2nd stage), user <1649931750> realm <turn-meet.wetron.es> origin <>, local 0.0.0.0:5349, remote 79.116.37.233:44581, reason: TLS/TCP connectio
session 000000000000000014: closed (2nd stage), user <1649931750> realm <turn-meet.wetron.es> origin <>, local 0.0.0.0:5349, remote 79.116.37.233:44581, reason: TLS/TCP connectio
session 000000000000000014: SSL shutdown received, socket to be closed (local 0.0.0.0:5349, remote 79.116.37.233:44581)
session 000000000000000014: SSL shutdown received, socket to be closed (local 0.0.0.0:5349, remote 79.116.37.233:44581)
session 000000000000000014: delete: realm=<turn-meet.wetron.es>, username=<1649931750>
session 000000000000000014: delete: realm=<turn-meet.wetron.es>, username=<1649931750>

while on port 443 it’s similar, but only after 13 seconds, and it’s missing the four lines containing “refreshed, realm=…”.

Apart from the coturn server apparently not working on port 443, there must be something else: even if I advertise port 5349 in prosody, as soon as I block UDP port 10000, audio/video doesn’t work.

Partial success: I took and adapted the coturn configuration from this post, added an allowed-peer line for the IP of my internal host (it’s behind a NAT), and now I get audio/video when I advertise the TURN server on port 5349.
Still no success if I try port 443.
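The relevant coturn fragment ended up roughly like this (a sketch reconstructed from this thread; the secret is a placeholder and the peer IP is my internal JVB host):

```
# /etc/turnserver.conf (fragment)
use-auth-secret
static-auth-secret=xxxxxxx
realm=turn-meet.wetron.es
tls-listening-port=5349
no-tcp-relay
# deny relaying into private ranges...
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
# ...then re-allow only the internal JVB host
allowed-peer-ip=172.16.69.25
```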

Can you share Nginx module config?
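In the non-multiplexing setup, nginx typically shares 443 with coturn via SNI routing in the stream module; something like this sketch (FQDN and backend ports are assumptions, not the exact config from this deployment):

```
# stream-level config (e.g. a file under /etc/nginx/modules-enabled/)
stream {
    upstream web {
        server 127.0.0.1:4444;   # nginx http server block for jitsi-meet
    }
    upstream turn {
        server 127.0.0.1:5349;   # coturn TLS listener
    }
    # route by SNI: the dedicated TURN FQDN goes to coturn, the rest to the web vhost
    map $ssl_preread_server_name $upstream {
        turn-meet.wetron.es turn;
        default             web;
    }
    server {
        listen 443;
        listen [::]:443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```

Since this is a plain TCP proxy, coturn sees the connections coming from 127.0.0.1, which matches the log above.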

Success!
Since here it says to use the public IP for the TURN server, and with the real public IP it failed, I thought of putting 127.0.0.1, which also failed.
It turns out I had to put the internal IP for it to work.

Because if the loopback IP is set for the TURN server, all local services become publicly accessible.

And it seems that the missing part is the firewall rule which redirects internal UDP/10000 requests to the internal JVB service.
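On a Linux gateway, that redirect could be expressed like this (a sketch; the public IP here is a hypothetical placeholder, the internal IP is the JVB host from this thread; pfSense expresses the same thing through its NAT reflection / port-forward rules):

```
# hairpin NAT: send UDP/10000 aimed at the public address to the internal JVB
iptables -t nat -A PREROUTING -p udp -d 203.0.113.10 --dport 10000 \
         -j DNAT --to-destination 172.16.69.25:10000
```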

So, should I use the real external IP? (I mean, using the loopback address or the internal address should represent the same security risk, right?)
But if I use the external IP it doesn’t work (and, while I’m not 100% sure, I think I have NAT reflection enabled on the pfSense firewall).
Edit: yes, it’s enabled.

How do you do that? With the default Jitsi configs for TURN, the TURN server needs to have access to that port on the public address advertised by JVB.

Well, now it works (apart from the security problem mentioned by @emrah).
I’m using external_services:

external_service_secret = "xxxxxxxxx";
external_services = {
  { type = "stun", host = "turn-meet.wetron.es", port = 3478 },
  { type = "turns", host = "turn-meet.wetron.es", port = 443, transport = "tcp", secret = true, ttl = 86400, algorithm = "turn" }
};

Previously, instead of external_services, I had the turncredentials directives, with the turncredentials module enabled in both virtual hosts (guest and authenticated); I don’t know whether that made a difference.

Do you block on the client side or on the server side?
It should be on the client side.

Yes, on the client side.