Docker Swarm

Hi all, I’m having a hard time getting dockerized Jitsi Meet to run in Docker Swarm mode :confused: (to answer the why: my plan is to run the videobridge as a global service so that it automatically runs on every swarm worker added to the cluster). I hope someone can give me some directions…
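For context, the end state I’m aiming for looks roughly like this (just a sketch; deploy is a swarm-only key, so plain docker-compose ignores it):

services:
  jvb:
    deploy:
      mode: global   # one videobridge task on every node that joins the swarm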

I’m currently experimenting with a single swarm manager only (no workers).

Here’s what I did:

  1. Downloaded the latest release. :white_check_mark:

  2. Configured .env file according to the guide. :white_check_mark:

  3. Generated an all-in-one config: docker-compose config > jitsi.yml (because .env files are honored by docker-compose only, not by docker stack deploy). Manually changed version to 3.8 and removed all depends_on entries, since swarm mode ignores them; see the conversion sketch after this list. :white_check_mark:

  4. Verified it still works with a standard docker-compose up -d and three tabs open. :white_check_mark:

  5. Ran docker stack deploy --compose-file jitsi.yml jitsi :x:
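For reference, here are steps 3–5 as shell commands (I edited the file by hand; the yq v4 one-liner below is an equivalent shortcut, any editor works just as well):

# Bake the .env values into a single self-contained file:
docker-compose config > jitsi.yml
# Bump the version and drop depends_on (ignored by docker stack deploy):
yq eval -i 'del(.services[].depends_on) | .version = "3.8"' jitsi.yml
# Deploy the stack:
docker stack deploy --compose-file jitsi.yml jitsi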

The result:

  • still works in P2P mode (two tabs)
  • when opening the third tab, that one sits alone in the conference for quite some time
  • then the third tab also joins the conference, but now nobody can see or hear anyone else any longer

Things I noticed in the logs (domains changed to example.com):

Chrome browser console error
2021-04-20T11:16:46.367Z [modules/RTC/BridgeChannel.js] <p._send>:  Bridge Channel send: no opened channel.
JVB INFO Exception
jitsi_jvb.1.oiuop8h9ao06@aws-master    | Apr 20, 2021 12:16:47 PM org.ice4j.ice.harvest.StunMappingCandidateHarvester discover
jitsi_jvb.1.oiuop8h9ao06@aws-master    | INFO: We failed to obtain addresses for the following reason: 
jitsi_jvb.1.oiuop8h9ao06@aws-master    | java.io.IOException: Operation not permitted (sendto failed)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.net.PlainDatagramSocketImpl.send(Native Method)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.net.DatagramSocket.send(DatagramSocket.java:693)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.socket.IceUdpSocketWrapper.send(IceUdpSocketWrapper.java:53)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.Connector.sendMessage(Connector.java:328)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:654)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:600)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunStack.sendRequest(StunStack.java:618)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunStack.sendRequest(StunStack.java:585)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stunclient.BlockingRequestSender.sendRequestAndWaitForResponse(BlockingRequestSender.java:166)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stunclient.SimpleAddressDetector.getMappingFor(SimpleAddressDetector.java:123)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.harvest.StunMappingCandidateHarvester.discover(StunMappingCandidateHarvester.java:81)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.harvest.MappingCandidateHarvesters$1.call(MappingCandidateHarvesters.java:277)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.harvest.MappingCandidateHarvesters$1.call(MappingCandidateHarvesters.java:272)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.lang.Thread.run(Thread.java:748)
JVB SEVERE Exception (this one occurs three times)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | Apr 20, 2021 12:22:04 PM org.ice4j.stack.NetAccessManager handleFatalError
jitsi_jvb.1.oiuop8h9ao06@aws-master    | SEVERE: Unexpected Error!
jitsi_jvb.1.oiuop8h9ao06@aws-master    | java.lang.NullPointerException
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.socket.MergingDatagramSocket.initializeActive(MergingDatagramSocket.java:577)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.ComponentSocket.propertyChange(ComponentSocket.java:176)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.IceMediaStream.firePairPropertyChange(IceMediaStream.java:877)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.CandidatePair.nominate(CandidatePair.java:629)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.Agent.nominate(Agent.java:1788)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.DefaultNominator.strategyNominateFirstValid(DefaultNominator.java:144)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.DefaultNominator.propertyChange(DefaultNominator.java:120)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.IceMediaStream.firePairPropertyChange(IceMediaStream.java:877)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.CandidatePair.validate(CandidatePair.java:667)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.IceMediaStream.addToValidList(IceMediaStream.java:675)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.Agent.validatePair(Agent.java:1752)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.ConnectivityCheckClient.processSuccessResponse(ConnectivityCheckClient.java:641)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.ice.ConnectivityCheckClient.processResponse(ConnectivityCheckClient.java:405)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunClientTransaction.handleResponse(StunClientTransaction.java:314)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.StunStack.handleMessageEvent(StunStack.java:1040)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at org.ice4j.stack.MessageProcessingTask.run(MessageProcessingTask.java:196)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
jitsi_jvb.1.oiuop8h9ao06@aws-master    | 	at java.lang.Thread.run(Thread.java:748)
nginx error (multiple times)
jitsi_web.1.ki9oj4h7bary@aws-master    | 2021/04/20 12:22:36 [error] 1654#1654: *8 connect() failed (110: Connection timed out) while connecting to upstream, client: 10.0.0.2, server: _, request: "GET /colibri-ws/172.18.0.6/a1f3e6f6ce668ba5/a36db68d?pwd=5uqb98od1frm6tk6q74875iqg2 HTTP/1.1", upstream: "http://172.18.0.6:9090/colibri-ws/172.18.0.6/a1f3e6f6ce668ba5/a36db68d?pwd=5uqb98od1frm6tk6q74875iqg2", host: "jitsi.example.com"
jitsi_web.1.ki9oj4h7bary@aws-master    | 10.0.0.2 - - [20/Apr/2021:12:22:36 +0200] "GET /colibri-ws/172.18.0.6/a1f3e6f6ce668ba5/a36db68d?pwd=5uqb98od1frm6tk6q74875iqg2 HTTP/1.1" 502 580 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.72 Safari/537.36"
JVB WARNING
WARNING: Returning a candidate matching the address, while no candidates match both address (18.156.176.48:10000/udp) and base (candidate:3 1 udp 2113932031 10.0.0.184 10000 typ host): candidate:4 1 udp 1677724415 18.156.176.48 10000 typ srflx raddr 172.18.0.6 rport 10000 with base candidate:1 1 udp 2130706431 172.18.0.6 10000 typ host

For reference, here’s the full jitsi.yml:

jitsi.yml
networks:
  meet-jitsi: {}
services:
  jicofo:
    environment:
      JIBRI_BREWERY_MUC: jibribrewery
      JIBRI_PENDING_TIMEOUT: "90"
      JICOFO_AUTH_PASSWORD: 126ee153751d3d2f0d4d58879c591274
      JICOFO_AUTH_USER: focus
      JIGASI_BREWERY_MUC: jigasibrewery
      JVB_BREWERY_MUC: jvbbrewery
      TZ: Europe/Berlin
      XMPP_AUTH_DOMAIN: auth.meet-jitsi
      XMPP_DOMAIN: meet-jitsi
      XMPP_INTERNAL_MUC_DOMAIN: internal-muc.meet-jitsi
      XMPP_MUC_DOMAIN: muc.meet-jitsi
      XMPP_SERVER: xmpp.meet-jitsi
    image: jitsi/jicofo:stable-5765-1
    networks:
      meet-jitsi: null
    restart: unless-stopped
    volumes:
      - /home/admin/.jitsi-meet-cfg/jicofo:/config:Z
  jvb:
    environment:
      JVB_AUTH_PASSWORD: 0213ae8289eaf8c88dccfd723168da49
      JVB_AUTH_USER: jvb
      JVB_BREWERY_MUC: jvbbrewery
      JVB_PORT: "10000"
      JVB_STUN_SERVERS: meet-jit-si-turnrelay.jitsi.net:443
      JVB_TCP_HARVESTER_DISABLED: "true"
      JVB_TCP_MAPPED_PORT: "4443"
      JVB_TCP_PORT: "4443"
      PUBLIC_URL: https://jitsi.example.com
      TZ: Europe/Berlin
      XMPP_AUTH_DOMAIN: auth.meet-jitsi
      XMPP_INTERNAL_MUC_DOMAIN: internal-muc.meet-jitsi
      XMPP_SERVER: xmpp.meet-jitsi
    image: jitsi/jvb:stable-5765-1
    networks:
      meet-jitsi:
        aliases:
          - jvb.meet-jitsi
    ports:
      - protocol: udp
        published: 10000
        target: 10000
      - published: 4443
        target: 4443
    restart: unless-stopped
    volumes:
      - /home/admin/.jitsi-meet-cfg/jvb:/config:Z
  prosody:
    environment:
      JIBRI_RECORDER_PASSWORD: d2d9f6a20047f7f9c0c87eec93ada781
      JIBRI_RECORDER_USER: recorder
      JIBRI_XMPP_PASSWORD: 74dc8c8877a9edacc9dc2c43d507bb22
      JIBRI_XMPP_USER: jibri
      JICOFO_AUTH_PASSWORD: 126ee153751d3d2f0d4d58879c591274
      JICOFO_AUTH_USER: focus
      JIGASI_XMPP_PASSWORD: 4cfbb87e0b13aacb800a4708366d2d7e
      JIGASI_XMPP_USER: jigasi
      JVB_AUTH_PASSWORD: 0213ae8289eaf8c88dccfd723168da49
      JVB_AUTH_USER: jvb
      PUBLIC_URL: https://jitsi.example.com
      TZ: Europe/Berlin
      XMPP_AUTH_DOMAIN: auth.meet-jitsi
      XMPP_DOMAIN: meet-jitsi
      XMPP_GUEST_DOMAIN: guest.meet-jitsi
      XMPP_INTERNAL_MUC_DOMAIN: internal-muc.meet-jitsi
      XMPP_INTERNAL_MUC_MODULES: ""
      XMPP_MODULES: ""
      XMPP_MUC_DOMAIN: muc.meet-jitsi
      XMPP_MUC_MODULES: ""
      XMPP_RECORDER_DOMAIN: recorder.meet-jitsi
    expose:
      - "5222"
      - "5347"
      - "5280"
    image: jitsi/prosody:stable-5765-1
    networks:
      meet-jitsi:
        aliases:
          - xmpp.meet-jitsi
    restart: unless-stopped
    volumes:
      - /home/admin/.jitsi-meet-cfg/prosody/config:/config:Z
      - /home/admin/.jitsi-meet-cfg/prosody/prosody-plugins-custom:/prosody-plugins-custom:Z
  web:
    environment:
      ENABLE_LETSENCRYPT: 1
      JICOFO_AUTH_USER: focus
      LETSENCRYPT_DOMAIN: jitsi.example.com
      LETSENCRYPT_EMAIL: jitsi@example.com
      PUBLIC_URL: https://jitsi.example.com
      TZ: Europe/Berlin
      XMPP_AUTH_DOMAIN: auth.meet-jitsi
      XMPP_BOSH_URL_BASE: http://xmpp.meet-jitsi:5280
      XMPP_DOMAIN: meet-jitsi
      XMPP_GUEST_DOMAIN: guest.meet-jitsi
      XMPP_MUC_DOMAIN: muc.meet-jitsi
      XMPP_RECORDER_DOMAIN: recorder.meet-jitsi
    image: jitsi/web:stable-5765-1
    networks:
      meet-jitsi:
        aliases:
          - meet-jitsi
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
    restart: unless-stopped
    volumes:
      - /home/admin/.jitsi-meet-cfg/web:/config:Z
      - /home/admin/.jitsi-meet-cfg/transcripts:/usr/share/jitsi-meet/transcripts:Z
version: "3.8"

Looking at the nginx error: the IP address 172.18.0.6 belongs to the docker_gwbridge network, not to the overlay network. That seems wrong to me…
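One way to confirm which network owns that range:

# Prints 172.18.0.0/16 here, i.e. the per-node bridge that swarm tasks use
# for outbound traffic, not the overlay network the services talk over:
docker network inspect docker_gwbridge \
  --format '{{ (index .IPAM.Config 0).Subnet }}'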

From my understanding, this IP comes from videobridge.websockets.server-id (in jvb.conf), which is populated from LOCAL_ADDRESS, which in turn is determined like this (ref):

ip addr show dev "$(ip route|awk '/^default/ { print $5 }')" | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
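Broken down, the one-liner does this (annotated):

# 1. Find the interface behind the default route; inside a swarm task the
#    default route points at docker_gwbridge (eth1 in the ip route output
#    further below), not at the overlay network:
DEV=$(ip route | awk '/^default/ { print $5 }')
# 2. Print that interface's IPv4 address:
ip addr show dev "$DEV" | grep -oP '(?<=inet\s)\d+(\.\d+){3}'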

When using a patched JVB container that uses the overlay network’s IP as the server-id, it seems to work flawlessly with three participants…
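In case it helps anyone, a sketch of such a patch (the init-script path is an assumption based on the docker-jitsi-meet repo layout and may change between releases):

# Build a patched image that derives LOCAL_ADDRESS from hostname -i,
# which resolves to the overlay address inside a swarm task:
docker build -t jvb-swarm - <<'EOF'
FROM jitsi/jvb:stable-5765-1
RUN sed -i 's|^LOCAL_ADDRESS=.*|LOCAL_ADDRESS=$(hostname -i)|' \
    /etc/cont-init.d/10-config
EOF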


Thanks! You’re a genius!

Hey folks!

I haven’t tried swarm in a while. I wonder if we can do anything to guess the correct IP here. Do you know if there is a way to tell which one we should be picking?

Hey @saghul,

for my swarm setup it is sufficient to simply use hostname -i as LOCAL_ADDRESS. But I can’t really judge all the other deployment scenarios that rely on this rather complex ip route / awk / grep construct.

What IP addresses / network interfaces does your container have?

This is the output of ip route:

default via 172.18.0.1 dev eth1 
10.0.4.0/24 dev eth0 proto kernel scope link src 10.0.4.14 
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.16

…where 172.18.0.0/16 is the docker_gwbridge network and 10.0.4.0/24 is my custom meet-jitsi overlay network.

And what does hostname -i return for you?

# hostname -i
10.0.4.14
# ip addr show dev "$(ip route|awk '/^default/ { print $5 }')" | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
172.18.0.16

Ok, I think I know what we can do: get the IP of the route used to connect to the web container and use that one for the ID, and keep using the default one for ICE.
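Roughly this, as an untested sketch (probing the XMPP server here, since its name is already available as $XMPP_SERVER inside the container):

# Ask the kernel which source address it would use to reach the XMPP/web
# side, instead of trusting the default route's interface:
PEER_IP=$(getent hosts "$XMPP_SERVER" | awk '{ print $1 }')
LOCAL_ADDRESS=$(ip route get "$PEER_IP" | grep -oP '(?<=src )\d+(\.\d+){3}')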

Sounds reasonable, at least to me as a networking noob :slightly_smiling_face: I’d be happy to test any version containing that patch.

Can you open an issue on our Docker repo and link to this conversation?

Done.