SOLVED: Docker Compose Setup for Local Development Using Websockets

I’ve been struggling to use the docker-compose.yml file provided here to bring up a sane local development environment.

My goal is to get something running on localhost that I can then use to test out a new iframe controller.

Here’s my diff with what is in env.example:

< TZ=America/Chicago
> TZ=Europe/Amsterdam

I had errors about the bridge controller not being able to connect, and couldn't run a conference with more than 2 people until I added DOCKER_HOST_ADDRESS. I also changed the Chrome setting to trust self-signed certificates on localhost, so port 8443 is working fine over https. I can verify using chrome://webrtc-internals that the RTC data connection is up and working.
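For reference, the relevant .env entries ended up looking like the following (the address below is a made-up example; use the actual LAN IP of your Docker host):

```
# .env — example values; DOCKER_HOST_ADDRESS must be your host's real IP
TZ=Europe/Amsterdam
DOCKER_HOST_ADDRESS=192.168.1.100   # made-up example; needed for >2 participants
```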

At this point I can run multi-way conferences fine. The problem comes when participants exit. I'm comparing the behavior with the official installation. There, if I open two tabs and then close one, the remaining participant sees the one that left disappear immediately. That matches my sense of what the expected behavior should be.

On my setup it takes much longer for the participant that left to disappear. Their video freezes immediately, then I get the message about "Fellow Jitser is having connectivity issues…", which hangs around for a while. Finally, after some time (far too long), they are dropped from the meeting.

I also notice this behavior on reload. If I just reload the same meeting page several times, I end up with a bunch of dead participants hanging out until they are finally removed.

My sense is that there is some signalling that is not happening properly, maybe due to some port incompatibility. I’ve also noticed these messages in the logs:

jvb_1      | JVB 2020-03-16 18:17:46.478 WARNING: [15] org.jitsi.videobridge.EndpointMessageTransport.log() SCTP connection with cfd220ccb3eb8973 not ready yet.
jvb_1      | JVB 2020-03-16 18:17:46.478 WARNING: [15] org.jitsi.videobridge.EndpointMessageTransport.log() No available transport channel, can't send a message

I feel like I'm close here, so any help would be greatly appreciated :slight_smile:. FWIW my use case involves supporting rapid entries and exits across different rooms, so this kind of delay during development is not OK. I've also considered that this could be a configuration difference, and perhaps the official installation is more aggressive about kicking participants out of rooms. If that's the case I'd like to replicate that behavior.



  1. I tried this on a remote Ubuntu machine just to see if something about my local Mac setup was wonky. Nope. It exhibited the same behavior.
  2. I noticed that the official installation is using websockets, rather than the WebRTC data channel configured by default in the Docker Compose setup. I may try that next to see if it helps.

FYI: I managed to solve this. As I suspected, I did need to enable websockets—but (I think) only for XMPP, not for the bridge communication. The documentation here is a bit all over the place so I’ll share what worked for me using the current Docker-based setup.

I’m just going to throw my changes in here in case they help someone. I ended up needing to combine this guide (for Colibri web sockets), this open patch (for XMPP web sockets), and some elbow grease :slight_smile:.

Changes for Colibri and XMPP are intermingled below, although I think it's the XMPP websocket that does more to address the problems I described above.


diff --git b/jitsi/config/jvb/ a/jitsi/config/jvb/
index 7228cd6..bff9855 100644
--- b/jitsi/config/jvb/
+++ a/jitsi/config/jvb/

These additions are required to get the video bridge to advertise a websocket connection.
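The property names below are from the Colibri websocket guide as I remember it, and the domain value is an example for this localhost setup — treat this as a sketch, not a verbatim copy of my config:

```
# sip-communicator.properties — advertise a Colibri websocket
# (example values for a localhost:8443 setup)
org.jitsi.videobridge.rest.COLIBRI_WS_DISABLE=false
org.jitsi.videobridge.rest.COLIBRI_WS_DOMAIN=localhost:8443
org.jitsi.videobridge.rest.COLIBRI_WS_SERVER_ID=jvb
org.jitsi.videobridge.rest.COLIBRI_WS_TLS=true
```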


--- b/jitsi/config/prosody/conf.d/jitsi-meet.cfg.lua
+++ a/jitsi/config/prosody/conf.d/jitsi-meet.cfg.lua
@@ -1,6 +1,10 @@
 admins = { "" }
 plugin_paths = { "/prosody-plugins/", "/prosody-plugins-custom" }
 http_default_host = "meet.jitsi"
+trusted_proxies = { "" }
+consider_websocket_secure = true
+cross_domain_websocket = true

@@ -22,11 +26,12 @@ VirtualHost "meet.jitsi"
         certificate = "/config/certs/meet.jitsi.crt";
     modules_enabled = {
+        "websocket";

Here we enable and configure websockets for Prosody. A few things to note. First, trusted_proxies needs to contain the IP address of your proxy server. More on that in a minute, since we'll need to pin the IP address in our docker-compose.yml file so it can be used here. (This option did not appear to resolve hostnames.)

Second, the websocket options both seem to need to live at the top level. At least, moving them there is what (finally) worked for me.


diff --git b/jitsi/config/web/config.js a/jitsi/config/web/config.js
index 66a3152..6d9d9f8 100644
--- b/jitsi/config/web/config.js
+++ a/jitsi/config/web/config.js
@@ -38,7 +38,8 @@ var config = {

     // BOSH URL. FIXME: use XEP-0156 to discover it.
-    bosh: '/http-bind',
+    // bosh: '/http-bind',
+    bosh: 'wss://localhost:8443/xmpp-websocket',

     // The name of client node advertised in XEP-0115 'c' stanza
     clientNode: '',
@@ -236,7 +237,7 @@ var config = {
     // Values can be 'datachannel', 'websocket', true (treat it as
     // 'datachannel'), undefined (treat it as 'datachannel') and false (don't
     // open any channel).
-    // openBridgeChannel: true,
+    openBridgeChannel: 'websocket',

The first bit was one of the more maddening stumbling blocks for me. The stock config.js includes a websocket option, but this seems to be completely ignored by the latest frontend.

Instead, bosh (or serverUrl, IIRC) is pulled and parsed, and the URL scheme determines whether a websocket connection is used. I lost a lot of time here trying to figure out why my client kept using BOSH… :expressionless:.
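As a sketch of what I think is going on (the function name here is mine, and the real lib-jitsi-meet logic is more involved), the transport appears to be chosen from the URL scheme:

```javascript
// Illustrative only: pick a transport from the service URL's scheme,
// mimicking how (I believe) the frontend decides between BOSH and websocket.
function transportFor(serviceUrl) {
    const scheme = new URL(serviceUrl).protocol;
    return scheme === 'wss:' || scheme === 'ws:' ? 'websocket' : 'bosh';
}

console.log(transportFor('wss://localhost:8443/xmpp-websocket')); // 'websocket'
console.log(transportFor('https://meet.jitsi/http-bind'));        // 'bosh'
```

So pointing bosh at a wss:// URL is what actually flips the client onto the websocket, regardless of the separate websocket option.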

The second part configures the Colibri bridge channel to also use a websocket. I'm not sure what kind of performance impact this has.


diff --git b/jitsi/config/web/nginx/meet.conf a/jitsi/config/web/nginx/meet.conf
index e4e3b5b..b1d0f81 100644
--- b/jitsi/config/web/nginx/meet.conf
+++ a/jitsi/config/web/nginx/meet.conf
@@ -32,5 +32,19 @@ location /http-bind {
     proxy_set_header X-Forwarded-For $remote_addr;
     proxy_set_header Host meet.jitsi;
 }

+location ~ ^/colibri-ws/jvb/(.*) {
+    proxy_pass$1$is_args$args;
+    proxy_http_version 1.1;
+    proxy_set_header Upgrade $http_upgrade;
+    proxy_set_header Connection "Upgrade";
+    proxy_set_header Host meet.jitsi;
+    tcp_nodelay on;
+}
+
+location = /xmpp-websocket {
+    proxy_pass;
+    proxy_http_version 1.1;
+    proxy_set_header Upgrade $http_upgrade;
+    proxy_set_header Connection "Upgrade";
+    proxy_set_header Host meet.jitsi;
+    tcp_nodelay on;
+}

The relevant additions to the nginx configuration. These are more or less out of the documentation, except that in the containerized configuration JVB is running in its own container and so can’t be reached at localhost.

I also needed to add something to the main nginx.conf, as shown next, since initially I was getting name resolution errors.

diff --git b/jitsi/config/web/nginx/nginx.conf a/jitsi/config/web/nginx/nginx.conf
index fa1a78e..b0704c7 100644
--- b/jitsi/config/web/nginx/nginx.conf
+++ a/jitsi/config/web/nginx/nginx.conf
@@ -14,6 +14,7 @@ http {
        # Basic Settings

+       resolver;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

TBH I have no idea why this is required: nginx can clearly resolve the other service names, so I have no idea why this one gives it fits. (Accessing the machine directly and using dig works fine.) My only hypothesis is that this has something to do with startup order: perhaps the web container comes up quickly enough that the name doesn't exist yet? No clue.

Anyway, the address added there is Docker's internal DNS resolver. Adding this made the name resolution errors vanish.


--- docker-compose.yml  2020-03-17 03:02:10.000000000 -0500
+++ ../docker-jitsi-meet/docker-compose.yml     2020-03-16 07:54:57.000000000 -0500
@@ -39,7 +39,6 @@
             - ENABLE_RECORDING
+                ipv4_address:
                     - ${XMPP_DOMAIN}
@@ -135,11 +134,10 @@
         image: jitsi/jvb
+        expose:
+            - '9090'
             - '${JVB_PORT}:${JVB_PORT}/udp'
             - '${JVB_TCP_PORT}:${JVB_TCP_PORT}'
@@ -158,13 +156,7 @@
             - prosody
+                aliases:
+                    -

 # Custom network so all services can communicate using a FQDN
+        ipam:
+            driver: default
+            config:
+                - subnet:

Finally, a few changes to the docker-compose.yml file complete the job. We're doing two things here: (1) fixing the IP address of the nginx web frontend so that we can use it in our Prosody configuration as a trusted_proxy, and (2) exposing port 9090 on the JVB, although this probably isn't strictly necessary since nothing is firewalled here.
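Put together, the shape of those compose changes is something like the following (the network name matches the stock docker-jitsi-meet file, but the subnet and address are made-up examples; pick values that don't collide with your other Docker networks):

```yaml
# docker-compose.yml fragment — example addresses only
services:
    web:
        networks:
            meet.jitsi:
                ipv4_address:   # fixed, so Prosody can trust it as a proxy

    jvb:
        expose:
            - '9090'                     # Colibri websocket port, proxied by nginx

networks:
    meet.jitsi:
        ipam:
            driver: default
            config:
                - subnet:       # example subnet
```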


With this configuration, as soon as I enter a meeting (even as the only participant) I can see the XMPP websocket connection being established.

If I enter the same room in a second tab and then leave, the departed participant disappears immediately: no more "Connection error…" or frozen video. A second connection also triggers setup of the bridge websocket.

My JS console logs are fairly free of errors, as are the server-side logs.

Anyway, I hope that this saves someone some of the pain and suffering I went through yesterday. Now having succeeded in configuring my local development environment, I can actually get down to work :slight_smile:.


Awesome project and solution. Will have a look at this someday when I have a local project. Thank you @geoffreychallen for sharing.