Firewalls allowing only ports 80 and 443 with docker-jitsi-meet

Hello community!

I’m having trouble making my setup work behind firewalls with only ports 80 and 443 open: as soon as I block port 10000 outbound I get no video. I’ve been banging my head against this for months because it’s critical that my setup works behind strict firewall configurations :frowning_face: My guess (see the end of the post) is that the colibri communications take place on port 10000, but I do not know what I should change to move the colibri WSS communications to another port.

As a side note, I’ve experimented with these (official) docs without solving the problem: running behind NAT, advanced configuration (docker-jitsi-meet), and WebSockets.

Setup:

I use a docker-jitsi-meet installation and modify it in two ways so it works better behind firewalls:

  1. I add a TURN server as in PR 163. Contrary to that PR, I also use Let’s Encrypt to create a certificate for the TURN server; I’m not sure whether this matters.
  2. I set up multiplexing as in the docs. The exact multiplexing setup is shown in this comment; roughly, it looks like the sketch below.
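
For reference, the multiplexing block looks roughly like this (a sketch of the nginx stream config from the docs; the local ports 4444 and 5349 and the ALPN-based routing are assumptions that may differ from your setup):

stream {
  upstream web {
    server 127.0.0.1:4444;   # jitsi-meet web, moved off 443
  }
  upstream turn {
    server 127.0.0.1:5349;   # coturn TLS listener
  }
  # route by ALPN: clients speaking HTTP go to the web frontend, everything else to turn
  map $ssl_preread_alpn_protocols $upstream {
    ~\bh2\b          web;
    ~\bhttp/1\.1\b   web;
    default          turn;
  }
  server {
    listen 443;
    ssl_preread on;
    proxy_pass $upstream;
  }
}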

In config.js, apart from useStunTurn: true (setting it to false for p2p does not change the result there either, i.e. still no video), I set openBridgeChannel: 'websocket'; however, setting openBridgeChannel or not does not solve the problem.
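
For completeness, the relevant config.js excerpt looks roughly like this (a sketch; only these two options differ from the stock docker-jitsi-meet config, and the comments are mine):

// config.js (excerpt)
useStunTurn: true,                // also tried false for p2p; the result is the same (no video)
openBridgeChannel: 'websocket',   // bridge channel over WebSocket instead of the SCTP data channel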

Debugging the Problem

Videoconferences work with all ports closed except 80, 443 and 10000 outbound (outbound from the browser’s point of view). Port 10000 inbound can be blocked and videoconferences still work. However, from the moment I block 10000 outbound the video no longer works.

Server-side I do not see anything in the logs that helps me understand what’s going on; everything looks the same as when videoconferences are working.

In chrome://webrtc-internals/ TURN seems to work, or at least it shows up in iceServers.

The only indications of a problem are in the Chrome dev tools Console and Network tabs:

Console tab (Chrome dev tools):

I start by getting this error:

[modules/RTC/BridgeChannel.js] <WebSocket.e.onclose>: Channel closed: 1006

and after a few of those, I also start getting these:

[modules/connectivity/IceFailedHandling.js] <i._conference.jvbJingleSession.terminate.reason>: session-terminate for ice restart - error: undefined
[modules/RTC/BridgeChannel.js] <l._send>: Bridge Channel send: no opened channel.

And I end up getting this message when the colibri WebSocket communications start failing with a 403:

BridgeChannel.js:88 WebSocket connection to 'wss://my.domain/colibri-ws/default-id/e5f750ce352a6029/078d99d0?pwd=fpu0roiurneap8d437ofs1nf5' failed: Error during WebSocket handshake: Unexpected response code: 403

The first error, "Channel closed: 1006", is preceded by this info message:

[modules/RTC/BridgeChannel.js] <WebSocket.e.onclose>: Channel closed by server

If I open port 10000 these errors do not appear (and the video works).

Network tab (Chrome dev tools):

The WebSocket communications that colibri exchanges (wss://my.domain/colibri-ws/default-id/…) seem to work, but after a while the messages coming from the server have this shape:

{active: "false", colibriClass: "EndpointConnectivityStatusChangeEvent", endpoint: "8d4c2d18"}

If I open port 10000 there is no traffic on this channel. Because of this, and the absence of problems in the server logs, I think what is going on over colibri is key to solving this problem.

Finally, as described in the troubleshooting section of the WebSocket docs, if I look for session-initiate in the logs I get the correct URL, in this case wss://my.domain/colibri-ws/default-id/conf-id/endpoint-id?pwd=thePassword. I copy here part of the XML of the session-initiate:

<transport xmlns="urn:xmpp:jingle:transports:ice-udp:1" ufrag="dtt2j1eke7m0ss" pwd="thePassword">
<web-socket xmlns="http://jitsi.org/protocol/colibri" url="wss://my.domain/colibri-ws/default-id/conf-id/endpoint-id?pwd=thePassword"/>
<rtcp-mux/>
<fingerprint xmlns="urn:xmpp:jingle:apps:dtls:0" setup="actpass" required="false" hash="sha-256">…
</fingerprint>
<candidate type="host" foundation="1" component="1" protocol="udp" port="10000" ip="172.18.0.6" id="72b45472e40c7710904c12a" network="0" generation="0" priority="2130706431"/>
<candidate type="srflx" generation="0" rel-port="10000" foundation="2" component="1" ip="188.166.9.14" port="10000" rel-addr="172.18.0.6" protocol="udp" network="0" id="5c209da6e40c77101998ca32" priority="1694498815"/>
</transport>

It seems very strange to me that the port listed for these candidates is 10000, but I don’t know how to go about debugging the colibri communications, or how I could change the setup server-side so they would work. I’ve been struggling with this for a long time; any help would be hugely appreciated :slight_smile:


Can you try adding this to your bridge config /etc/jitsi/videobridge/jvb.conf:

videobridge {
  sctp {
    enabled = false
  }
}

And this to /etc/jitsi/jicofo/jicofo.conf:

jicofo {
  sctp {
    enabled = false
  }
}

Does that change anything?

I made the changes in both containers and then restarted the services with service restart in their respective Docker containers. Some things have changed, even if I still do not get video in the videoconference. I see these changes:

  1. The errors in the console now take much longer to appear. The colibri WSS communications last a long time until the first error appears; once that happens, the errors are the same as before.
  2. When searching for session-initiate in Chrome dev tools I get something very similar to what I was getting before, except that the candidates on port 10000 no longer appear. I paste below what I get now.
  3. I see errors server-side in the logs of the jvb container. I paste those errors below.

XML of the session-initiate (I paste only what is inside <iq><jingle><content>, which is what I had pasted before):

<transport xmlns="urn:xmpp:jingle:transports:ice-udp:1" ufrag="dtt2j1eke7m0ss" pwd="thePassword">
<web-socket xmlns="http://jitsi.org/protocol/colibri" url="wss://my.domain/colibri-ws/default-id/conf-id/endpoint-id?pwd=thePassword"/>
<rtcp-mux></rtcp-mux>
<fingerprint xmlns="urn:xmpp:jingle:apps:dtls:0" setup="actpass" required="false" hash="sha-256">…
</fingerprint>
</transport>

Errors server-side

In the jvb container

SEVERE: Health check failed in PT0.001S:
java.lang.Exception: Failed to bind single-port
at org.jitsi.videobridge.health.JvbHealthChecker.check(JvbHealthChecker.kt:47)
at org.jitsi.videobridge.health.JvbHealthChecker.access$check(JvbHealthChecker.kt:28)
at org.jitsi.videobridge.health.JvbHealthChecker$healthChecker$1.invoke(JvbHealthChecker.kt:36)
at org.jitsi.videobridge.health.JvbHealthChecker$healthChecker$1.invoke(JvbHealthChecker.kt:28)
at org.jitsi.health.HealthChecker.run(HealthChecker.kt:142)
at org.jitsi.utils.concurrent.RecurringRunnableExecutor.run(RecurringRunnableExecutor.java:216)
at org.jitsi.utils.concurrent.RecurringRunnableExecutor.runInThread(RecurringRunnableExecutor.java:292)
at org.jitsi.utils.concurrent.RecurringRunnableExecutor.access$000(RecurringRunnableExecutor.java:36)
at org.jitsi.utils.concurrent.RecurringRunnableExecutor$1.run(RecurringRunnableExecutor.java:328)

I also see a warning in the jicofo logs, but that seems to be just some protection against excessive requests:

Jicofo 2020-10-12 16:03:18.468 WARNING: [29] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Rate limiting Participant[1@muc.my.domain/8939eead]@2128743760 for restart requests

This means the videobridge cannot bind to port 10000; something else is using it.

Ok, it seems I left out something important in the description of the configuration.

I was using JVB_PORT=443 in .env; it seemed to me that this was part of the configuration needed to make Jitsi work behind a firewall with only 80 and 443 open. Then, because I also had SINGLE_PORT_HARVESTER_PORT=443 in jvb’s sip-communicator.properties, there was this conflict when binding. If I change SINGLE_PORT_HARVESTER_PORT to 4443 the error is no longer there.
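
To make the collision explicit, this is roughly what I had (the property key is abbreviated here the same way as in the text above; the real key in sip-communicator.properties carries its full prefix):

# .env (docker-jitsi-meet)
JVB_PORT=443

# jvb sip-communicator.properties - was also 443, which collided with nginx/turn already listening there
SINGLE_PORT_HARVESTER_PORT=4443   # changed from 443; the bind error goes away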

However, after having to do this, I wonder whether changing JVB_PORT to 443 is the correct approach. Perhaps JVB_PORT is only used for internal communications, and because of that it does not matter that it sits on a port blocked by the firewall?

I found this previous post in the community where you talk about a very similar issue: Videobridge behind nginx proxy (443 => 4443 because 4443 port is closed)

Would you say that is the correct approach: creating a new subdomain pointing to the machine and using multiplexing for the JVB (and not only for TURN)?

Also, if that’s the case, in which config files should I put the new subdomain that the JVB would be using?

I’m not sure about Docker … with the Debian packages I would just do a default installation and then follow this: https://jitsi.github.io/handbook/docs/devops-guide/faq#how-to-migrate-away-from-multiplexing-and-enable-bridge-websockets
Make it so the turn server is behind its own DNS name, but again listening on port 443.
Mind that you cannot drop port 10000: the turn server will receive media on port 443 and will contact the JVB on that port (10000) to relay it.
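
In case it helps, what that FAQ boils down to is roughly the following; I’m not sure how it maps onto the Docker containers, and the colibri WebSocket port 9090, the default-id server id and the domain are illustrative defaults you would need to adjust:

# /etc/jitsi/videobridge/jvb.conf - expose the colibri WebSocket endpoint
videobridge {
  websockets {
    enabled = true
    domain = "my.domain:443"
    tls = true
  }
}

# nginx virtual host for my.domain (port 443) - forward bridge websockets to the JVB
location /colibri-ws/default-id/ {
  proxy_pass http://127.0.0.1:9090;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}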