Issue with calls to jvb

Environment: single host docker (docker compose)
Networking: Cloudflare->firewall->nginx->docker (80/443)
jitsi version: stable-5963

I like the idea of this setup because it allows a single endpoint (Cloudflare) to be exposed to end users.
In the env I have these set:
- DOCKER_HOST_ADDRESS=104.21.66.81 (single Cloudflare IP)
- PUBLIC_URL=https://jitsi. domain . tld

This setup works fine for 2 users. As soon as a 3rd user tries to join, errors similar to this start to show in the console:
WebSocket connection to 'wss://jitsi. domain . tld /colibri-ws/ ip /bca6457f854b0087/7c65a5bb?pwd= pwd ' failed:
After the 3rd user joins, no one gets video or audio anymore.

This is the nginx config that sits in front of the containers (jitsi is the container name of the Jitsi web container):

# This fixes the errors with websockets and /xmpp-websocket
location = /xmpp-websocket {
    proxy_pass http://jitsi:80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host xmpp.meet.jitsi;
    tcp_nodelay on;
}
# Still having issues with this
location ~ ^/colibri-ws/default-id/(.*) {
    proxy_pass http://jitsi:80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Protocol $scheme;
    proxy_set_header X-Forwarded-Host $http_host;

    proxy_pass http://jitsi:80;
}

I’ve tried variations on ports and upstream servers. My understanding is that when JVB_TCP_HARVESTER_DISABLED=true is set, the call doesn’t require another port and will try to use 443.

Looking at the logs of the Jitsi web container, it looks like there are two different kinds of failures for colibri-ws:

[error] 257#257: *3 connect() failed (111: Connection refused) while connecting to upstream, client: ip , server: _, request: "GET /colibri-ws/ ip /54e3279554d5bfa6/a8e30f08?pwd= pwd HTTP/1.0", upstream: "http:// ip :9090/colibri-ws/ ip /54e3279554d5bfa6/a8e30f08?pwd= pwd ", host: "jitsu. domain . tld"
and
[09/Jul/2021:19:46:00 +0000] "GET /colibri-ws/ ip /bca6457f854b0087/7c65a5bb?pwd= pwd HTTP/1.0" 405 657 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"

Both indicate to me that JVB is having issues with the calls being made to it, for some reason.
When I look at the jvb logs, I see a lot similar to:
INFO: Running expire()
Jul 09, 2021 8:06:27 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:06:37 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:06:47 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:06:57 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:07:07 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:07:17 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false
Jul 09, 2021 8:07:26 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Running expire()
Jul 09, 2021 8:07:27 PM org.jitsi.utils.logging2.LoggerImpl log
INFO: Performed a successful health check in PT0S. Sticky failure: false

Nothing there looks like a failure.

How would I go about getting assistance to figure this out? I’ve been going crazy trying to get this set up for a few days now.

Thank you for any assistance that could be provided
-J

Made some progress.
I realized afterwards that the configuration for the second websocket URI was not right (I can’t remember which issue I grabbed it from), but this fixes the websocket error:
location ~ ^/colibri-ws/ {
    proxy_pass http://jitsi:80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header Host jitsi.domain.tld;

    tcp_nodelay on;
}

It still has issues when 3+ people join the call.

Is port 10000/udp open and properly forwarded?
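
For reference (not from the original thread), the stock docker-jitsi-meet compose file publishes the media port on the JVB service; with the default JVB_PORT it looks roughly like this (exact syntax may differ between releases):

```yaml
# Sketch of the JVB port mapping in docker-jitsi-meet's docker-compose.yml
services:
  jvb:
    ports:
      - '10000:10000/udp'   # RTP media; must reach the host directly, not via Cloudflare
```

Since this is UDP, Cloudflare's HTTP(S) proxy will not carry it, which is the crux of this thread.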

Rather, something like this:
proxy_pass http://jitsi:9090/colibri-ws/default-id/$1$is_args$args;
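
Spelled out as a full location block, that suggestion would look something like the sketch below. It assumes nginx can reach the container as `jitsi` on port 9090, which the thread later shows is not exposed by default:

```nginx
# Hypothetical: send colibri websockets to port 9090 with the default server ID
location ~ ^/colibri-ws/default-id/(.*) {
    proxy_pass http://jitsi:9090/colibri-ws/default-id/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
```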

I’m running it behind Cloudflare, which doesn’t support non-HTTP(S) ports apart from a few exceptions, and 10000 isn’t one of them. Isn’t there a way to do a full reverse proxy of all traffic via HTTPS on port 443?

I didn’t see 9090 listening or being exposed on any of the Docker containers.
I did some testing with forwarding it to the Jitsi web and the JVB container on 9090. No dice.
When I set it to go to jitsi/jvb on 9090, it explicitly fails with this error in the browser console: BridgeChannel.js:83 WebSocket connection to 'wss://jitsi.domain.tld/colibri-ws/192.168.144.21/c2b0acfe39448cc0/f2a98019?pwd=pwd' failed:

You’ll need to deploy a TURN server.
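
As a rough sketch (not from the reply above) of what that involves, a minimal coturn `turnserver.conf` for relaying media over TCP/TLS on 443 might look like this; the hostname, secret, and certificate paths are placeholders:

```ini
# Hypothetical minimal coturn config for TLS-only relaying
listening-port=3478
tls-listening-port=443
realm=jitsi.domain.tld
use-auth-secret
static-auth-secret=changeme
cert=/etc/ssl/turn_cert.pem
pkey=/etc/ssl/turn_key.pem
# UDP to clients is blocked upstream, so disable plain UDP listeners
no-udp
```

The idea is that clients who cannot reach 10000/udp fall back to relaying media through the TURN server over 443.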

With something like this?

videobridge {
    http-servers {
        public {
            port = 9090
        }
        private {
            port = 8080
            host = 0.0.0.0
            #tls-port = 8443
            #key-store-path = /etc/jitsi/videobridge/ssl.store
            #key-store-password = mypasswd
        }
    }
}

Looking in the docker-compose file, I’m not seeing port 9090 mentioned:
grep 9090 ~/jitsimeet/docker-jitsi-meet-stable-5963/docker-compose.yml
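
If one wanted to experiment with exposing it, a hedged compose-override sketch could look like this (the service name assumes the stock docker-jitsi-meet file; whether JVB in stable-5963 actually serves colibri websockets on 9090 depends on its config):

```yaml
# docker-compose.override.yml (hypothetical)
services:
  jvb:
    ports:
      - '9090:9090/tcp'   # colibri websocket HTTP port
```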

I’m guessing Freddie has it right, and I need to set up a TURN server. Will start looking at how to do that.

Sorry, I don’t know anything about Docker. As a wild guess, I would hypothesize that the JVB config files have parameters that are instantiated from the env files.