JVB/Colibri-ws Connecting To Wrong Domain

After recently updating Jitsi, I am having a weird issue where my websocket connection is failing like this:

WebSocket connection to 'wss://localhost/colibri-ws/default-id/aafd074c31a646f3/691ad61f?pwd=xxxxxxxxx' failed: 
[modules/RTC/BridgeChannel.js] <WebSocket.e.onclose>:  Channel closed: 1006 
[modules/RTC/BridgeChannel.js] <zr._send>:  Bridge Channel send: no opened channel.

My connection should not be to the localhost, but to wss://mysubdomain.example.com . In my /etc/jitsi/videobridge/jvb.conf I have the following:

videobridge {
    http-servers {
        public {
            port = 9090
        }
    }
    websockets {
        enabled = true
        domain = "subomdian.example.com:443"
        tls = true
        server-id = jvb1
    }
}
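
As far as I understand it, the domain and server-id are what end up in the websocket URL handed to clients, so I would expect the connection to look roughly like this (the conference and endpoint IDs are placeholders):

wss://subdomain.example.com:443/colibri-ws/jvb1/<conference-id>/<endpoint-id>?pwd=...

instead of the wss://localhost/colibri-ws/default-id/… URL from the error above.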

And in my /etc/nginx/sites-enabled/subdomain.example.com.conf I have:

 location = /xmpp-websocket {
        #proxy_pass http://127.0.0.1:5280/xmpp-websocket?prefix=$prefix&$args;
        proxy_pass http://subdomain.example.com:5280/xmpp-websocket?prefix=$prefix&$args;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        tcp_nodelay on;
    }

    # colibri (JVB) websockets for jvb1
    location ~ ^/colibri-ws/default-id/(.*) {
        #proxy_pass http://127.0.0.1:9090/colibri-ws/default-id/$1$is_args$args;
        proxy_pass http://subdomain.example.com:9090/colibri-ws/default-id/$1$is_args$args;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        tcp_nodelay on;
    }

And in my /etc/jitsi/videobridge/sip-communicator.properties I have this:

org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.subdomain.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=xxx
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=xxxxxxxx
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.subdomain.example.com
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=xxxxxxxxxx

Am I missing something in my setup?

If nginx and the JVB are on the same machine, you can leave the localhost proxy_pass. Otherwise you need to open port 9090 to the public.
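
For example, to allow just the nginx host rather than opening the port to everyone, something like this on the JVB host should work (a sketch assuming ufw; 203.0.113.10 is a placeholder for your nginx host's IP, and on AWS you would do the equivalent in the security group):

sudo ufw allow from 203.0.113.10 to any port 9090 proto tcp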

Did you restart the JVB after changing its config?

Your JVB server-id is jvb1, while the nginx config uses default-id. Fix that so they match.
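
In other words, the id in the jvb.conf websockets block and the path in the nginx location have to be the same string, roughly like this (a sketch using jvb1; 127.0.0.1 assumes nginx and the JVB are on the same machine):

# /etc/jitsi/videobridge/jvb.conf
websockets {
    enabled = true
    domain = "subdomain.example.com:443"
    tls = true
    server-id = "jvb1"
}

# /etc/nginx/sites-enabled/subdomain.example.com.conf
location ~ ^/colibri-ws/jvb1/(.*) {
    proxy_pass http://127.0.0.1:9090/colibri-ws/jvb1/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}

(Or keep default-id in both places; they just have to match.)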

My firewall rules have always been:

I made both changes and it is still not working. Could the default-id be set wrong elsewhere?

Nope, that is only in the nginx and JVB configs.
Do you still see wss://localhost… in the JS console?

Yes still seeing it:

BridgeChannel.js:84 WebSocket connection to 'wss://localhost/colibri-ws/default-id/1cda7b944d9d00c4/d90e8281?pwd=xxxxxxxx' failed: 
_initWebSocket @ BridgeChannel.js:84
t @ BridgeChannel.js:103
Logger.js:154 2022-03-09T15:07:08.268Z [modules/RTC/BridgeChannel.js] <WebSocket.e.onclose>:  Channel closed: 1006

Yeah, this is strange. The localhost part is coming from the domain = … setting in jvb.conf.
No idea why the client still sees localhost when you have entered domain = "subdomain.example.com:443" … you did restart the JVB after editing the conf file, right?
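
If it helps, a quick sanity check on the JVB host would be something like this (assuming the Debian package, where the service is jitsi-videobridge2):

grep -A 6 websockets /etc/jitsi/videobridge/jvb.conf
sudo systemctl restart jitsi-videobridge2

to confirm that the file the running bridge actually reads contains the domain and server-id you expect.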

Yes, I’ve restarted the entire server. I actually had this error a while back, and updating Jitsi fixed it then. I updated Jitsi again recently and now it’s randomly back. The logs are clean too. I might just try a clean re-install.

What is the output of

netstat -tanp | grep 5222

This is the output.

root@ip-10-0-0-230:/home/ubuntu# netstat -tanp | grep 5222
tcp        0      0 0.0.0.0:5222            0.0.0.0:*               LISTEN      877/lua5.2          
tcp        0      0 127.0.0.1:5222          127.0.0.1:52544         ESTABLISHED 877/lua5.2          
tcp        0      0 10.0.0.230:5222         34.201.13.155:58328     ESTABLISHED 877/lua5.2          
tcp        0      0 10.0.0.230:5222         3.239.197.22:58020      ESTABLISHED 877/lua5.2          
tcp        0      0 127.0.0.1:5222          127.0.0.1:52540         ESTABLISHED 877/lua5.2          
tcp        0      0 10.0.0.230:5222         54.84.82.174:40780      ESTABLISHED 877/lua5.2          
tcp6       0      0 :::5222                 :::*                    LISTEN      877/lua5.2          
tcp6       0      0 127.0.0.1:52540         127.0.0.1:5222          ESTABLISHED 2374/java           
tcp6       0      0 127.0.0.1:52544         127.0.0.1:5222          ESTABLISHED 2429/java

Seems like you have multiple JVBs. Did you fix all of them?
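
If you want to double-check what those local java processes from the netstat output are, something like:

ps -o pid,cmd -p 2374,2429

should show whether they are jicofo, a local JVB, or something else.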

Thanks to your hint, I actually ended up figuring it out. The JVB is on a separate server, and that server’s config still had localhost.

That then led me to the issue of nginx being unable to connect. So I changed my proxy_pass to the IP of the JVB instance.

location ~ ^/colibri-ws/default-id/(.*) {
        #proxy_pass http://127.0.0.1:9090/colibri-ws/default-id/$1$is_args$args;
        proxy_pass http://123.456.789:9090/colibri-ws/default-id/$1$is_args$args;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        tcp_nodelay on;
    }

Now the error message is gone and screen sharing works again! The next issue is figuring out how to scale the JVBs. Hard-coding a single IP in the config doesn’t allow multiple instances. Should the JVBs be behind an LB that routes the traffic accordingly?

No need for an LB. Jitsi manages the load according to the selected bridge strategy (the default is OK for most cases).
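
If you do run several bridges later, the usual pattern is one unique server-id per JVB plus a matching nginx location for each, roughly like this (a sketch; jvb2 and 10.0.0.12 are placeholders for the second bridge and its private IP):

# jvb.conf on the second bridge
websockets {
    enabled = true
    domain = "subdomain.example.com:443"
    tls = true
    server-id = "jvb2"
}

# nginx on the web host
location ~ ^/colibri-ws/jvb2/(.*) {
    proxy_pass http://10.0.0.12:9090/colibri-ws/jvb2/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}

Jicofo then picks a bridge per conference according to the bridge strategy, so there is no LB in front of the websockets.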