RESOLVED: Configure TURN in Kubernetes inside a VPC

Hello there!

Thank you so much for your awesome tool! We are running it inside Kubernetes and it is awesome!

We have one problem though. Our current infrastructure is as follows:

  1. An Application Load Balancer (ALB) in front of the K8s cluster
  2. A Network Load Balancer (NLB) listening on port 30300 for the video bridge
  3. The video bridge pod is exposed through a NodePort on port 30300 (see the sketch after this list)
  4. The instances inside the cluster are in a private VPC, therefore they have no access to or from the internet. The only way to reach them is through the NLB (for UDP traffic) and the ALB (for TCP traffic)
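
A minimal sketch of the NodePort piece of this setup (names are illustrative; our full manifests are further down in this thread):

apiVersion: v1
kind: Service
metadata:
  name: jvb-udp
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - port: 30300        # service port
      targetPort: 30300  # container port inside the JVB pod
      nodePort: 30300    # port opened on every node, targeted by the NLB
      protocol: UDP
  selector:
    app: jitsi-jvb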

This all works as expected. People go to mysite.com and they can talk to each other. However, under certain network conditions (such as enterprise networks), the video does not work.

We know that this is fixed by adding a TURN server. However, we cannot make it work.

The configuration we are using is based on the containers created here: https://github.com/jitsi/docker-jitsi-meet

This is our current turn configuration:

# turnserver.conf
use-auth-secret
keep-address-family
static-auth-secret=$PASSWORD

realm=$TURN_DOMAIN

external-ip=$EXTERNAL_IP
listening-port=$TURN_PORT_MIN
log-file=/var/log/turnserver.log
verbose

This is the entrypoint script (entrypoint.bash):

#!/usr/bin/env bash

# Substitute the placeholders in the config with the values passed in
# through the environment (note: this breaks if a value contains '/').
sed -i "s/\$EXTERNAL_IP/${EXTERNAL_IP}/" /etc/turnserver.conf
sed -i "s/\$PASSWORD/${PASSWORD}/" /etc/turnserver.conf
sed -i "s/\$TURN_DOMAIN/${TURN_DOMAIN}/" /etc/turnserver.conf
sed -i "s/\$TURN_PORT_MIN/${TURN_PORT_MIN}/" /etc/turnserver.conf

# Print the rendered config for debugging, then start coturn.
cat /etc/turnserver.conf

/etc/init.d/coturn start

_status_code=$?

if [[ "${_status_code}" -eq 0 ]]; then
  # coturn daemonizes, so keep the container alive for as long as
  # the turnserver process is running.
  _turn_pid=$(pgrep turnserver)
  tail --pid="${_turn_pid}" -f /dev/null
  exit 0
fi

exit 1

And this is the Dockerfile for the turn image:

FROM ubuntu:18.04
LABEL org.opencontainers.image.title=turn

RUN apt-get update \
    && apt-get -y install coturn=4.5.0.7-1ubuntu2.18.04.1 --no-install-recommends \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY ./src/etc/default/coturn /etc/default/coturn
COPY ./src/etc/turnserver.conf /etc/turnserver.conf
COPY ./src/entrypoint.bash /entrypoint

CMD [ "/entrypoint" ]

The values for these variables are passed in as environment variables in the YAML for the K8s deployment.
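
A rough sketch of how that looks in the turn container spec (the env names mirror the placeholders in turnserver.conf, the Secret name is the one from our manifests further down, and the values are illustrative):

env:
  - name: EXTERNAL_IP
    value: "52.X.XXX.0"          # public IP of the NLB
  - name: TURN_DOMAIN
    value: "turn.mysite.com"     # illustrative realm/domain
  - name: TURN_PORT_MIN
    value: "30001"
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: jitsi-config       # Secret holding the TURN shared secret
        key: TURN_PASSWORD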

And this is the configuration for prosody:

# /rootfs/defaults/conf.d/turn.cfg.lua
turncredentials_secret = "{{ .Env.TURN_PASSWORD }}";

turncredentials = {
    { type = "stun", host = "{{ .Env.EXTERNAL_IP }}", port = "{{ .Env.TURN_PORT_MIN }}" },
    { type = "turn", host = "{{ .Env.EXTERNAL_IP }}", port = "{{ .Env.TURN_PORT_MIN }}", transport = "udp"},
    { type = "turns", host = "{{ .Env.TURN_HOST }}", port = "443", transport = "tcp" }
}

This line is added to the 10-config file:

tpl /defaults/conf.d/turn.cfg.lua > /config/conf.d/turn.cfg.lua

TURN_PORT_MIN is 30001 and that port is opened on the NLB. EXTERNAL_IP is the IP of the NLB and TURN_HOST is mysite.com. This is because UDP traffic is not allowed via the domain (ALB), only through the NLB.

This is completely new for us. Do you see where in all this routing we have a problem? Communication for more than two clients works fine outside of the enterprise network, but not from within (screenshots omitted).

Thank you so much for any help you can provide! And, once more, congratulations on this tool. It has proven to be really useful, and super easy to integrate.


Hello Community!

I have an update. I had a problem with the TURN server configuration: the option keep-address-family was causing errors. After removing it, I can now see the iceServers configured in chrome://webrtc-internals. However, there are still errors, now like this:

url: stun:52.X.XXX.X:30003
address: [0:0:0:x:x:x:x:x]
port: 60475
host_candidate: [0:0:0:x:x:x:x:x]:60475
error_text: STUN server address is incompatible.
error_code: 701

This is the iceServers configuration:

{ 
	iceServers: [
		turn:52.X.XXX.X:30003, 
  		turns:52.X.XXX.X:30003?transport=tcp, 
		stun:52.X.XXX.X:30003
	], 
 	iceTransportPolicy: all, 
	bundlePolicy: max-bundle, 
	rtcpMuxPolicy: require, 
	iceCandidatePoolSize: 0, 
	sdpSemantics: "plan-b" 
}

Do you have any ideas what could be wrong?

Hello @acruz.
@saghul and the great team in the community are still working on it here. I'm pretty sure they will have it done soon.

By the way, may I have a look at your K8s YAML files? Thank you, bro.

Sure! I’m still working on finishing TURN in the cluster. Once we have that done, I’ll share our YAML files.

Dear @Janto and @shuang, do you have any suggestions for me? Here is my status:

TURN log:

9: handle_udp_packet: New UDP endpoint: local addr 172.16.21.243:30003, remote addr 172.16.13.235:43986
9: session 001000000000000001: realm <jitsi-acruz.my-site.com> user <>: incoming packet BINDING processed, success
9: IPv4. tcp or tls connected to: 172.16.13.235:55807
9: IPv4. tcp or tls connected to: 172.16.13.235:41149
9: session 000000000000000001: realm <jitsi-acruz.my-site.com> user <>: incoming packet message processed, error 401: Unauthorized
9: IPv4. Local relay addr: 172.16.21.243:30006
9: session 000000000000000001: new, realm=<jitsi-acruz.my-site.com>, username=<1593091821>, lifetime=600
9: session 000000000000000001: realm <jitsi-acruz.my-site.com> user <1593091821>: incoming packet ALLOCATE processed, success
9: session 000000000000000001: peer 10.196.104.50 lifetime updated: 300
9: session 000000000000000001: realm <jitsi-acruz.my-site.com> user <1593091821>: incoming packet CREATE_PERMISSION processed, success
19: session 001000000000000001: realm <jitsi-acruz.my-site.com> user <>: incoming packet BINDING processed, success
22: handle_udp_packet: New UDP endpoint: local addr 172.16.21.243:30003, remote addr 172.16.13.235:59369
22: session 000000000000000003: realm <jitsi-acruz.my-site.com> user <>: incoming packet BINDING processed, success
22: IPv4. tcp or tls connected to: 172.16.13.235:58828
22: IPv4. tcp or tls connected to: 172.16.13.235:54118
22: session 001000000000000002: realm <jitsi-acruz.my-site.com> user <>: incoming packet message processed, error 401: Unauthorized
22: IPv4. Local relay addr: 172.16.21.243:30005
22: session 001000000000000002: new, realm=<jitsi-acruz.my-site.com>, username=<1593094030>, lifetime=600
22: session 001000000000000002: realm <jitsi-acruz.my-site.com> user <1593094030>: incoming packet ALLOCATE processed, success
23: session 000000000000000001: refreshed, realm=<jitsi-acruz.my-site.com>, username=<1593091821>, lifetime=0
23: session 000000000000000001: realm <jitsi-acruz.my-site.com> user <1593091821>: incoming packet REFRESH processed, success
23: session 000000000000000001: TCP socket closed remotely 172.16.13.235:55807
23: session 000000000000000001: usage: realm=<jitsi-acruz.my-site.com>, username=<1593091821>, rp=64, rb=8324, sp=4, sb=396
23: session 000000000000000001: peer usage: realm=<jitsi-acruz.my-site.com>, username=<1593091821>, rp=0, rb=0, sp=60, sb=5760
23: session 000000000000000001: closed (2nd stage), user <1593091821> realm <jitsi-acruz.my-site.com> origin <>, local 172.16.21.243:30003, remote 172.16.13.235:55807, reason: TCP connection closed by client (callback)
23: session 000000000000000001: delete: realm=<jitsi-acruz.my-site.com>, username=<1593091821>
23: session 000000000000000001: peer 10.196.104.50 deleted
23: session 000000000000000002: TCP socket closed remotely 172.16.13.235:41149

JVB log:

INFO: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7962116751041232895):
 LocalCandidate=candidate:1 1 udp 2130706431 172.16.13.77 30302 typ host
 RemoteCandidate=candidate:10000 1 udp 1853824767 172.16.22.222 47833 typ prflx
INFO: Start connectivity checks.
INFO: Transport description:
<transport xmlns='urn:xmpp:jingle:transports:ice-udp:1' pwd='2mlg3089jn1ing31i557hvd231' ufrag='bu73i1ebjahfrd'><rtcp-mux/><fingerprint xmlns='urn:xmpp:jingle:apps:dtls:0' setup='active' hash='sha-256'>9D:D3:63:2E:A7:5A:FD:E3:11:80:BB:49:F3:09:4C:44:3C:32:32:87:02:59:C7:28:B9:C0:49:67:97:69:6E:3B</fingerprint><candidate component='1' foundation='1' generation='0' id='592e513879134eb801c719a3' network='0' priority='2130706431' protocol='udp' type='host' ip='172.16.13.77' port='30302'/><candidate component='1' foundation='2' generation='0' id='5c39de1879134eb80ffffffff8c57147e' network='0' priority='1694498815' protocol='udp' type='srflx' ip='54.X.X.X' port='30302' rel-addr='172.16.13.77' rel-port='30302'/><candidate component='1' foundation='2' generation='0' id='3b006fc079134eb80ffffffff8a835359' network='0' priority='1694498815' protocol='udp' type='srflx' ip='52.204.71.3' port='30302' rel-addr='172.16.13.77' rel-port='30302'/></transport>
INFO: Update remote candidate for stream-aaf7eeed.RTP: 10.0.17.111:52578/udp
INFO: Update remote candidate for stream-aaf7eeed.RTP: 10.144.5.41:51169/udp
INFO: new Pair added: 172.16.13.77:30302/udp/host -> 10.0.17.111:52578/udp/host (stream-aaf7eeed.RTP).
INFO: new Pair added: 172.16.13.77:30302/udp/host -> 10.144.5.41:51169/udp/host (stream-aaf7eeed.RTP).
INFO: Transport description:
<transport xmlns='urn:xmpp:jingle:transports:ice-udp:1' pwd='2mlg3089jn1ing31i557hvd231' ufrag='bu73i1ebjahfrd'><rtcp-mux/><fingerprint xmlns='urn:xmpp:jingle:apps:dtls:0' setup='active' hash='sha-256'>9D:D3:63:2E:A7:5A:FD:E3:11:80:BB:49:F3:09:4C:44:3C:32:32:87:02:59:C7:28:B9:C0:49:67:97:69:6E:3B</fingerprint><candidate component='1' foundation='1' generation='0' id='1dfa96e879134eb801c719a3' network='0' priority='2130706431' protocol='udp' type='host' ip='172.16.13.77' port='30302'/><candidate component='1' foundation='2' generation='0' id='36f6a3fe79134eb80ffffffff8c57147e' network='0' priority='1694498815' protocol='udp' type='srflx' ip='54.X.X.X' port='30302' rel-addr='172.16.13.77' rel-port='30302'/><candidate component='1' foundation='2' generation='0' id='242d622679134eb80ffffffff8a835359' network='0' priority='1694498815' protocol='udp' type='srflx' ip='52.204.71.3' port='30302' rel-addr='172.16.13.77' rel-port='30302'/></transport>
INFO: Pair succeeded: 172.16.13.77:30302/udp/host -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: Adding allowed address: 172.16.22.222:47833/udp
INFO: Pair validated: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: Nominate (first valid): 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: verify if nominated pair answer again
INFO: IsControlling: true USE-CANDIDATE:false.
INFO: Pair succeeded: 172.16.13.77:30302/udp/host -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: Pair validated: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: IsControlling: true USE-CANDIDATE:false.
INFO: Pair failed: 172.16.13.77:30302/udp/host -> 10.0.17.111:52578/udp/host (stream-aaf7eeed.RTP)
INFO: Pair succeeded: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: Pair validated: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: IsControlling: true USE-CANDIDATE:true.
INFO: Nomination confirmed for pair: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP).
INFO: Selected pair for stream stream-aaf7eeed.RTP: 54.X.X.X:30302/udp/srflx -> 172.16.22.222:47833/udp/prflx (stream-aaf7eeed.RTP)
INFO: CheckList of stream stream-aaf7eeed is COMPLETED
INFO: ICE state changed from Running to Completed.
INFO: ICE state changed old=Running new=Completed
INFO: ICE connected
INFO: Starting DTLS handshake
INFO: Harvester used for selected pair for stream-aaf7eeed.RTP: srflx
INFO: Negotiated DTLS version DTLS 1.2
INFO: DTLS handshake complete
INFO: Attempting to establish SCTP socket connection
Got sctp association state update: 1
sctp is now up.  was ready? false
INFO: ds_change ds_id=aaf7eeed
INFO: SCTP connection is ready, creating the Data channel stack
INFO: Will wait for the remote side to open the data channel.
INFO: Received data channel open message
INFO: Remote side opened a data channel.
INFO: The remote side is acting as DTLS server, we'll act as client
INFO: Starting the Agent without remote candidates.
INFO: Start ICE connectivity establishment.
INFO: ICE state changed from Waiting to Running.
INFO: ICE state changed old=Waiting new=Running
INFO: Start connectivity checks.

I’m using the same TURN configuration that exists in the merge request, and the same base image and versions.

The log shows that the TCP connection is closed for IP 172.16.13.235. That is the IP of the instance running jicofo.

This is my deployment.yml.tmp file (we use Go templating):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-jicofo
  name: jitsi-jicofo
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-jicofo
  template:
    metadata:
      labels:
        app: jitsi-jicofo
    spec:
      containers:
        - name: jitsi-jicofo
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/jicofo/jicofo:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          env:
            - name: XMPP_SERVER
              value: jitsi-prosody
            - name: XMPP_DOMAIN
              value: {{ .host_prefix }}.{{ .host }}
            - name: XMPP_AUTH_DOMAIN
              value: auth.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.{{ .host_prefix }}.{{ .host }}
            - name: JIBRI_BREWERY_MUC
              value: JibriBrewery
            - name: JICOFO_COMPONENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JICOFO_COMPONENT_SECRET
            - name: JICOFO_AUTH_USER
              value: focus
            - name: JICOFO_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JICOFO_AUTH_PASSWORD
            - name: TZ
              value: America/Los_Angeles
            - name: JVB_BREWERY_MUC
              value: jvbbrewery
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-prosody
  name: jitsi-prosody
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-prosody
  template:
    metadata:
      labels:
        app: jitsi-prosody
    spec:
      containers:
        - name: jitsi-prosody
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/prosody/prosody:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 5280
            - name: c2s
              containerPort: 5222
            - name: component
              containerPort: 5347
            - name: s2s
              containerPort: 5269
          env:
            - name: EXTERNAL_IP
              value: 52.X.XXX.0
            - name: XMPP_DOMAIN
              value: {{ .host_prefix }}.{{ .host }}
            - name: XMPP_AUTH_DOMAIN
              value: auth.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_MODULES
              value: turncredentials
            - name: XMPP_MUC_DOMAIN
              value: muc.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_RECORDER_DOMAIN
              value: recorder.{{ .host_prefix }}.{{ .host }}
            - name: JIBRI_XMPP_USER
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_XMPP_USER
            - name: JIBRI_XMPP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_XMPP_PASSWORD
            - name: JIBRI_RECORDER_USER
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_RECORDER_USER
            - name: JIBRI_RECORDER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_RECORDER_PASSWORD
            - name: JICOFO_COMPONENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JICOFO_COMPONENT_SECRET
            - name: JVB_AUTH_USER
              value: jvb
            - name: JVB_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JVB_AUTH_PASSWORD
            - name: JICOFO_AUTH_USER
              value: focus
            - name: JICOFO_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JICOFO_AUTH_PASSWORD
            - name: TURN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: TURN_PASSWORD
            - name: TURN_PORT
              value: "30003"
            - name: TZ
              value: America/Los_Angeles
            - name: JVB_TCP_HARVESTER_DISABLED
              value: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-web
  name: jitsi-web
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-web
  template:
    metadata:
      labels:
        app: jitsi-web
    spec:
      containers:
        - name: jitsi-web
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/jitsi-web/jitsi-web:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          env:
            - name: XMPP_SERVER
              value: jitsi-prosody
            - name: JICOFO_AUTH_USER
              value: focus
            - name: XMPP_DOMAIN
              value: {{ .host_prefix }}.{{ .host }}
            - name: XMPP_AUTH_DOMAIN
              value: auth.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_BOSH_URL_BASE
              value: http://jitsi-prosody:5280
            - name: XMPP_MUC_DOMAIN
              value: muc.{{ .host_prefix }}.{{ .host }}
            - name: TZ
              value: America/Los_Angeles
            - name: JVB_TCP_HARVESTER_DISABLED
              value: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-jvb
  name: jitsi-jvb
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-jvb
  template:
    metadata:
      labels:
        app: jitsi-jvb
    spec:
      containers:
        - name: jitsi-jvb
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/jvb/jvb:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          resources:
            requests:
              memory: "2000Mi"
              cpu: "1000m"
            limits:
              memory: "3000Mi"
              cpu: "2000m"
          env:
            - name: XMPP_SERVER
              value: jitsi-prosody
            - name: DOCKER_HOST_ADDRESS
              value: 54.XXX.X.XXX
            - name: XMPP_DOMAIN
              value: {{ .host_prefix }}.{{ .host }}
            - name: XMPP_AUTH_DOMAIN
              value: auth.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.{{ .host_prefix }}.{{ .host }}
            - name: JVB_STUN_SERVERS
              value: meet-jit-si-turnrelay.jitsi.net:443
            - name: JICOFO_AUTH_USER
              value: focus
            - name: JVB_AUTH_USER
              value: jvb
            - name: JVB_PORT
              value: "{{ .jvb_udp_port }}"
            - name: JVB_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JVB_AUTH_PASSWORD
            - name: JICOFO_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JICOFO_AUTH_PASSWORD
            - name: JVB_BREWERY_MUC
              value: jvbbrewery
            - name: TZ
              value: America/Los_Angeles
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-turn
  name: jitsi-turn
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-turn
  template:
    metadata:
      labels:
        app: jitsi-turn
    spec:
      containers:
        - name: jitsi-turn
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/turn/turn:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          ports:
            - name: udp
              containerPort: 30003
            - name: udp-tls
              containerPort: 30004
          resources:
            requests:
              memory: "500Mi"
              cpu: "500m"
            limits:
              memory: "1000Mi"
              cpu: "1000m"
          env:
            - name: TURN_DOMAIN
              value: kube-ing-LB-1KH3M62W1XJBY-XXXXXXXXX.elb.us-east-1.amazonaws.com
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: TURN_PASSWORD
            - name: EXTERNAL_IP
              value: 52.X.XXX.0
            - name: TURN_PORT
              value: "30003"
            - name: TLS_PORT
              value: "30004"
            - name: TURN_REALM
              value: {{ .host_prefix }}.{{ .host }}
            - name: TURN_RTP_MIN
              value: "30005"
            - name: TURN_RTP_MAX
              value: "30006"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jitsi-jibri
  name: jitsi-jibri
  namespace: {{ .kapp_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: jitsi-jibri
  template:
    metadata:
      labels:
        app: jitsi-jibri
    spec:
      containers:
        - name: jitsi-jibri-streamer
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/jibri-streamer/jibri-streamer:{{ .ci_commit_sha }}
          volumeMounts:
          - mountPath: /src/wolf
            name: "wolf"
        - name: jitsi-jibri
          image: XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my-area/jibri/jibri:{{ .ci_commit_sha }}
          imagePullPolicy: Always
          resources:
            requests:
              memory: "1000Mi"
              cpu: "1000m"
            limits:
              memory: "2000Mi"
              cpu: "2000m"
          ports:
            - name: http
              containerPort: 2222
          env:
            - name: JIBRI_LOGS_DIR
              value: /var/log/jibri
            - name: JIBRI_RECORDER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_RECORDER_PASSWORD
            - name: JIBRI_RECORDING_DIR
              value: /src/recordings
            - name: JIBRI_XMPP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_XMPP_PASSWORD
            - name: JIBRI_XMPP_USER
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_XMPP_USER
            - name: JIBRI_RECORDER_USER
              valueFrom:
                secretKeyRef:
                  name: jitsi-config
                  key: JIBRI_RECORDER_USER
            - name: JIBRI_STRIP_DOMAIN_JID
              value: conference
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.{{ .host_prefix }}.{{ .host }}
            - name: JIBRI_BREWERY_MUC
              value: JibriBrewery
            - name: JIBRI_INSTANCE_ID
              value: jibry-nickname
            - name: XMPP_AUTH_DOMAIN
              value: auth.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_DOMAIN
              value: {{ .host_prefix }}.{{ .host }}
            - name: XMPP_RECORDER_DOMAIN
              value: recorder.{{ .host_prefix }}.{{ .host }}
            - name: XMPP_SERVER
              value: jitsi-prosody
            - name: DISPLAY
              value: ":0"
          volumeMounts:
          - mountPath: /dev/snd
            name: "devsnd"
          - mountPath: /src/wolf
            name: "wolf"
          securityContext:
            privileged: true
      volumes:
      - name: "devsnd"
        hostPath:
          path: /dev/snd
      - name: "wolf"
        emptyDir: {}

And this is my service.yml

apiVersion: v1
kind: Service
metadata:
  labels:
    service: jvb
  name: jvb-udp
  namespace: {{ .kapp_namespace }}
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - port: {{ .jvb_udp_port }}
    protocol: UDP
    targetPort: {{ .jvb_udp_port }}
    nodePort: {{ .jvb_udp_port }}
  selector:
    app: jitsi-jvb
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: jitsi-turn-udp
  name: jitsi-turn-udp
  namespace: {{ .kapp_namespace }}
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - name: 30003-tcp
    nodePort: 30003
    port: 30003
    protocol: TCP
    targetPort: 30003
  - name: 30004-tcp
    nodePort: 30004
    port: 30004
    protocol: TCP
    targetPort: 30004
  - name: 30005-udp
    nodePort: 30005
    port: 30005
    protocol: UDP
    targetPort: 30005
  - name: 30006-udp
    nodePort: 30006
    port: 30006
    protocol: UDP
    targetPort: 30006
  - name: 30003-udp
    nodePort: 30003
    port: 30003
    protocol: UDP
    targetPort: 30003
  - name: 30004-udp
    nodePort: 30004
    port: 30004
    protocol: UDP
    targetPort: 30004
  selector:
    app: jitsi-turn
---
apiVersion: v1
kind: Service
metadata:
  name: jitsi-prosody
  labels:
    metrics: prometheus
  namespace: {{ .kapp_namespace }}
spec:
  selector:
    app: jitsi-prosody
  clusterIP: None
  ports:
  - name: http
    port: 5280
  - name: c2s
    port: 5222
  - name: component
    port: 5347
  - name: s2s
    port: 5269
---
apiVersion: v1
kind: Service
metadata:
  name: jitsi-jibri
  labels:
    metrics: prometheus
  namespace: {{ .kapp_namespace }}
spec:
  selector:
    app: jitsi-jibri
  clusterIP: None
  ports:
  - name: http
    port: 2222
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: web
  name: web
  namespace: {{ .kapp_namespace }}
spec:
  ports:
  - name: "http"
    port: 80
    targetPort: 80
  - name: "https"
    port: 443
    targetPort: 443
  selector:
    app: jitsi-web

Dear community,

Sorry to bother you with this same thread. Here is my current status:

  1. Some corporate networks block all traffic that is not “normal” traffic, such as TCP/UDP on unusual ports (30000+)
  2. Because of that, we are sending the TCP and UDP traffic to the Network Load Balancer on port 443. This is the prosody configuration:
    turncredentials_secret = "{{ .Env.TURN_PASSWORD }}";
    
    turncredentials = {
        { type = "stun", host = "{{ .Env.EXTERNAL_IP }}", port = "443" },
        { type = "turn", host = "{{ .Env.EXTERNAL_IP }}", port = "443", transport = "udp"},
        { type = "turns", host = "{{ .Env.EXTERNAL_IP }}", port = "443", transport = "tcp" }
     }
    
  3. That allows us to listen for the traffic on port 443 on the NLB but forward it to port 30003 in the cluster. The service in the cluster binds port 30003 on the instance, which in turn is routed to port 30003 in the pod.
  4. All of this works as expected and we can see the candidates working as they should (I think):
    // Turn server logs
     282: IPv4. tcp or tls connected to: 172.16.13.235:39659
     282: IPv4. tcp or tls connected to: 172.16.13.235:59446
     283: IPv4. tcp or tls connected to: 172.16.13.235:54509
    
    // jvb logs
    <transport xmlns='urn:xmpp:jingle:transports:ice-udp:1' pwd='iqhaag0pmpqkaherstqaqlf7b' ufrag='bi35g1ebn40f1v'><rtcp-mux/><fingerprint xmlns='urn:xmpp:jingle:apps:dtls:0' setup='active' hash='sha-256'>04:9E:48:32:29:51:73:27:51:6B:99:AB:15:6B:0B:74:15:55:93:50:18:9F:BB:90:09:22:B0:01:EE:38:23:CE</fingerprint><candidate component='1' foundation='1' generation='0' id='2904ab5974d046c301f1c0af6' network='0' priority='2130706431' protocol='udp' type='host' ip='172.16.15.166' port='30302'/><candidate component='1' foundation='2' generation='0' id='3dd4887f74d046c30ffffffffa9ac0378' network='0' priority='1694498815' protocol='udp' type='srflx' ip='InternetGatewayIP' port='30302' rel-addr='172.16.15.166' rel-port='30302'/><candidate component='1' foundation='2' generation='0' id='2aeb6d2574d046c30ffffffffa7d84253' network='0'priority='1694498815' protocol='udp' type='srflx' ip='NLBIP' port='30302' rel-addr='172.16.15.166' rel-port='30302'/></transport>
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: The remote side is acting as DTLS server, we'll act as client
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: Starting the Agent without remote candidates.
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: Start ICE connectivity establishment.
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: Init checklist for stream stream-9ec06a97
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: ICE state changed from Waiting to Running.
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: ICE state changed old=Waiting new=Running
    Jun 25, 2020 6:31:27 PM org.jitsi.utils.logging2.LoggerImpl log
    INFO: Start connectivity checks.
    

After that it fails:

 TCP socket closed remotely 172.16.13.235:39659
 TCP socket closed remotely 172.16.13.235:59446
 TCP socket closed remotely 172.16.13.235:54509

Something interesting is that the IP the TCP socket is opened to (172.16.13.235) is the one of the instance running jicofo.

Shouldn’t that TCP connection be opened to jvb?

The candidate has the right IP, the one of the pod running JVB.

My understanding of how the TURN communication works is this:

corporate -> tcp:443 -> nlb -> 443-tcp:30003-udp -> turn -> XXXX-udp -> video-bridge ---|
corporate <- tcp:443 <- nlb <- 443-tcp:30003-udp <- turn <- XXXX-udp <- video-bridge <--|

Is that accurate?

Thank you for all the help you can provide!

Ok. We’ve made progress. But now a question came up:

Is there any way to tell JVB to listen on local port 10000, for example, but advertise the external port 443?

We need to have something like this:

pod 172.10.25.32:10000 -> 52.2.34.15:443

The reason is that the video bridge will be listening on port 10000, but I have an ingress in the cluster that routes from external 443 to internal 10000. Why? Because I don’t want to run JVB as root (otherwise it cannot open 443).

Add the following line to /etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.SINGLE_PORT_HARVESTER_PORT=443

This works without being root because /lib/systemd/system/jitsi-videobridge2.service has

AmbientCapabilities=CAP_NET_BIND_SERVICE

Thanks @emrah for the quick reply!

We are using this inside a Kubernetes cluster that runs the image with s6 instead of systemd, so when we tried that, it did not work.

We use the docker-jitsi-meet configuration as a base.

I don’t know what ‘s6’ is, but if you start a service using a command, it’s possible to set the binding privilege with setpriv:

setpriv --reuid=jvb --ambient-caps=+net_bind_service --inh-caps=+net_bind_service your_command
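
In a Kubernetes pod spec this could be wired in roughly as follows. This is only a sketch: it assumes the image ships setpriv and starts as root, and the launch command at the end is hypothetical, not the image's real entrypoint:

spec:
  containers:
    - name: jitsi-jvb
      image: my-registry/jvb:latest             # placeholder image
      command:                                  # wrap the bridge in setpriv so the
        - setpriv                               # non-root jvb user can bind port 443
        - --reuid=jvb
        - --ambient-caps=+net_bind_service
        - --inh-caps=+net_bind_service
        - /usr/share/jitsi-videobridge/jvb.sh   # hypothetical launch command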

Nice! Thank you! Will try it.

This is what we use:

After a lot of trial and error, research, and testing, we were able to make Jitsi work within corporate networks.

Since we are using Kubernetes and have more control over what we do, we ended up not needing the TURN server. It works with just a few configuration changes in the cluster itself.

Here is what we are doing:

jvb_pod:443
jvb_service:10000
jvb_ingress:443
nlb:443 (udp and tcp)

The traffic goes like this:

client -> 443 -> nlb -> 443 -> ingress -> 10000 -> service -> 443 -> jvb

That allows us to advertise JVB on port 443 while setting up the traffic inside the cluster so it reaches the pod.

This helps because clients outside the corporate networks will use port 443 via UDP, and clients inside the network will also use port 443 but with TCP traffic.
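
For illustration, the Service piece of that layout could look roughly like this (only a sketch: names are illustrative, and it assumes the JVB container binds 443 while the ingress/NLB forwards to the Service on 10000):

apiVersion: v1
kind: Service
metadata:
  name: jitsi-jvb
spec:
  selector:
    app: jitsi-jvb
  ports:
    - name: media-udp
      port: 10000       # what the ingress forwards to inside the cluster
      targetPort: 443   # what the JVB container actually binds
      protocol: UDP
    - name: media-tcp
      port: 10000
      targetPort: 443
      protocol: TCP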

In addition, since JVB needs to use the NLB IP and that might change, we are doing this:

# Resolve the NLB hostname to an IP at pod start time.
LOAD_BALANCER_ADDRESS=$(dig +short "${LOAD_BALANCER_HOST}" | head -1)

JAVA_SYS_PROPS="-Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/ -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=config -Djava.util.logging.config.file=/config/logging.properties"

# Advertise the pod IP as the local address and the resolved NLB IP as the public one.
LOCAL_ADDRESS=$(hostname -I | cut -d " " -f1)
JAVA_SYS_PROPS="$JAVA_SYS_PROPS -Dorg.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=$LOCAL_ADDRESS -Dorg.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=$LOAD_BALANCER_ADDRESS"

So when the bridge pod is created, it will retrieve the IP of the NLB at that time.
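
LOAD_BALANCER_HOST itself can simply be passed in as an environment variable on the JVB container, for example (reusing the NLB DNS name from the turn deployment above; the exact value is illustrative):

env:
  - name: LOAD_BALANCER_HOST
    value: kube-ing-LB-1KH3M62W1XJBY-XXXXXXXXX.elb.us-east-1.amazonaws.com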

This was a really interesting journey! We learned a lot and were able to configure it in a way that works for all the clients with few changes.

If you guys see any errors or problems, we will be super happy to hear from you.

Again, thank you so much for your work! This tool is really awesome!


Hello @acruz,
Awesome step forward (even without coturn). Can your Jitsi deployment be accessed from ‘ANY’ organization’s firewall (such as a school or university) from outside, as well as from a deployment inside a firewall/NAT?

An additional benefit of a coturn server is that TURN can lower the JVB load significantly, since P2P connections will go through it instead. Well, at least an additional JVB node can also solve this issue at this point in time, can’t it?

Thank you @emrah and @acruz, too

We’ve been able to try it in a few corporate networks and it works. Our audience will not be too broad and we can pinpoint problems when they come. So far we feel it will work since it uses standard ports (443).

During development we also wanted to use low ports, but this does not work. For development we have to use high-numbered ports and configure the NLB accordingly. This means that the development environment will not work in corporate networks, but that is OK, since we have a CI/CD pipeline that pushes to a QA stage that works fine.

Our audience will almost never have only two people connected, so we feel we don’t gain much from allowing P2P over TURN either way.

Yes, we are configuring autoscaling of JVB, so this should also be fine.
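
For reference, a minimal sketch of what that autoscaling could look like with a plain CPU-based HorizontalPodAutoscaler (the threshold and replica counts are illustrative, not our production values):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: jitsi-jvb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jitsi-jvb
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # scale out when average CPU passes this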

Thank you for all your feedback and help! Hopefully this approach helps someone else. Will keep you posted if we find more things in the future!

Hi @acruz! Awesome that you got it working. Can you share some K8s details about your config to get this working? We are at that step as well, adding TURN to increase customer connectivity. The issue I have with using an external load balancer is the cost, and we have lots of calls with 2 people. Our JVB traffic does not go via a load balancer right now and we intend to have the TURN server work like that as well.

Having port 443 configured is really cool. I wonder if that can be used for TURN as well.


Hello @vkruoso, sorry for the late reply.

Yes, I’ll share our configuration. We’re finishing some stuff but will share it here in a couple of days. In a previous post I added our configuration as it was back then. Would that work in the meantime?

Awesome, thanks.

Hi, can you share the config? I am trying to configure Jitsi with an NLB and I ran into some bugs.

I don’t know if it’s Let’s Encrypt.

What bugs do you have?