External Jibri Server

Hello,
I have two machines with the following IPs:
Jitsi: XXX.200
Jibri: XXX.201
Both run Debian 10.

I want to use an external dedicated server for Jibri.
I deployed Jitsi using Docker.
And followed https://github.com/jitsi/jibri for deploying Jibri.

Since the Jibri instance is not internal, I'm not sure how I'm supposed to configure the two.
Here is my .env file for Jitsi:

# Security
#
# Set these to strong passwords to avoid intruders from impersonating a service account
# The service(s) won't start unless these are specified
# Running ./gen-passwords.sh will update .env with strong passwords
# You may skip the Jigasi and Jibri passwords if you are not using those
# DO NOT reuse passwords
#

# XMPP component password for Jicofo
JICOFO_COMPONENT_SECRET=93c61ab0fa80ade48b24603ea536c84b

# XMPP password for Jicofo client connections
JICOFO_AUTH_PASSWORD=7f60ff53401aa988a5db2fc0a92e6076

# XMPP password for JVB client connections
JVB_AUTH_PASSWORD=5074aeb389cb19bbb34721c2cee70e04

# XMPP password for Jigasi MUC client connections
JIGASI_XMPP_PASSWORD=cf90944e94d801f0285a5f5429f20ebb

# XMPP recorder password for Jibri client connections
JIBRI_RECORDER_PASSWORD=6c91c983a34343a89273d51dd6bebf69

# XMPP password for Jibri client connections
JIBRI_XMPP_PASSWORD=4f37b5997123d19621b2f0bc37cd5bd7


#
# Basic configuration options
#

# Directory where all configuration will be stored
CONFIG=~/.jitsi-meet-cfg

# Exposed HTTP port
HTTP_PORT=80

# Exposed HTTPS port
HTTPS_PORT=443

# System time zone
TZ=Europe/Istanbul

# Public URL for the web service (required)
PUBLIC_URL=MY.DOMAIN

# IP address of the Docker host
# See the "Running behind NAT or on a LAN environment" section in the README
DOCKER_HOST_ADDRESS=XXX.200
# Control whether the lobby feature should be enabled or not
#ENABLE_LOBBY=1

#
# Let's Encrypt configuration
#

# Enable Let's Encrypt certificate generation
#ENABLE_LETSENCRYPT=1

# Domain for which to generate the certificate
#LETSENCRYPT_DOMAIN=meet.example.com

# E-Mail for receiving important account notifications (mandatory)
#LETSENCRYPT_EMAIL=alice@atlanta.net


#
# Etherpad integration (for document sharing)
#

# Set etherpad-lite URL in docker local network (uncomment to enable)
#ETHERPAD_URL_BASE=http://etherpad.meet.jitsi:9001

# Set etherpad-lite public URL (uncomment to enable)
#ETHERPAD_PUBLIC_URL=https://etherpad.my.domain

#
# Basic Jigasi configuration options (needed for SIP gateway support)
#

# SIP URI for incoming / outgoing calls
#JIGASI_SIP_URI=test@sip2sip.info

# Password for the specified SIP account as a clear text
#JIGASI_SIP_PASSWORD=passw0rd

# SIP server (use the SIP account domain if in doubt)
#JIGASI_SIP_SERVER=sip2sip.info

# SIP server port
#JIGASI_SIP_PORT=5060

# SIP server transport
#JIGASI_SIP_TRANSPORT=UDP

#
# Authentication configuration (see handbook for details)
#

# Enable authentication
ENABLE_AUTH=1

# Enable guest access
#ENABLE_GUESTS=1

# Select authentication type: internal, jwt or ldap
AUTH_TYPE=internal

# JWT authentication
#

# Application identifier
#JWT_APP_ID=my_jitsi_app_id

# Application secret known only to your token
#JWT_APP_SECRET=my_jitsi_app_secret

# (Optional) Set asap_accepted_issuers as a comma separated list
#JWT_ACCEPTED_ISSUERS=my_web_client,my_app_client

# (Optional) Set asap_accepted_audiences as a comma separated list
#JWT_ACCEPTED_AUDIENCES=my_server1,my_server2


# LDAP authentication (for more information see the Cyrus SASL saslauthd.conf man page)
#

# LDAP url for connection
#LDAP_URL=ldaps://ldap.domain.com/

# LDAP base DN. Can be empty
#LDAP_BASE=DC=example,DC=domain,DC=com

# LDAP user DN. Do not specify this parameter for the anonymous bind
#LDAP_BINDDN=CN=binduser,OU=users,DC=example,DC=domain,DC=com

# LDAP user password. Do not specify this parameter for the anonymous bind
#LDAP_BINDPW=LdapUserPassw0rd

# LDAP filter. Tokens example:
# %1-9 - if the input key is user@mail.domain.com, then %1 is com, %2 is domain and %3 is mail
# %s - %s is replaced by the complete service string
# %r - %r is replaced by the complete realm string
#LDAP_FILTER=(sAMAccountName=%u)

# LDAP authentication method
#LDAP_AUTH_METHOD=bind

# LDAP version
#LDAP_VERSION=3

# LDAP TLS using
#LDAP_USE_TLS=1

# List of SSL/TLS ciphers to allow
#LDAP_TLS_CIPHERS=SECURE256:SECURE128:!AES-128-CBC:!ARCFOUR-128:!CAMELLIA-128-CBC:!3DES-CBC:!CAMELLIA-128-CBC

# Require and verify server certificate
#LDAP_TLS_CHECK_PEER=1

# Path to CA cert file. Used when server certificate verify is enabled
#LDAP_TLS_CACERT_FILE=/etc/ssl/certs/ca-certificates.crt

# Path to CA certs directory. Used when server certificate verify is enabled
#LDAP_TLS_CACERT_DIR=/etc/ssl/certs

# Whether to use starttls, implies LDAPv3 and requires ldap:// instead of ldaps://
# LDAP_START_TLS=1


#
# Advanced configuration options (you generally don't need to change these)
#

# Internal XMPP domain
XMPP_DOMAIN=meet.jitsi

# Internal XMPP server
XMPP_SERVER=xmpp.meet.jitsi

# Internal XMPP server URL
XMPP_BOSH_URL_BASE=http://xmpp.meet.jitsi:5280

# Internal XMPP domain for authenticated services
XMPP_AUTH_DOMAIN=auth.meet.jitsi

# XMPP domain for the MUC
XMPP_MUC_DOMAIN=muc.meet.jitsi

# XMPP domain for the internal MUC used for jibri, jigasi and jvb pools
XMPP_INTERNAL_MUC_DOMAIN=internal-muc.meet.jitsi

# XMPP domain for unauthenticated users
XMPP_GUEST_DOMAIN=guest.meet.jitsi

# Custom Prosody modules for XMPP_DOMAIN (comma separated)
XMPP_MODULES=info,alert

# Custom Prosody modules for MUC component (comma separated)
XMPP_MUC_MODULES=info,alert

# Custom Prosody modules for internal MUC component (comma separated)
XMPP_INTERNAL_MUC_MODULES=

# MUC for the JVB pool
JVB_BREWERY_MUC=jvbbrewery

# XMPP user for JVB client connections
JVB_AUTH_USER=jvb

# STUN servers used to discover the server's public IP
JVB_STUN_SERVERS=meet-jit-si-turnrelay.jitsi.net:443

# Media port for the Jitsi Videobridge
JVB_PORT=3478

# TCP Fallback for Jitsi Videobridge for when UDP isn't available
JVB_TCP_HARVESTER_DISABLED=true
JVB_TCP_PORT=4443
JVB_TCP_MAPPED_PORT=4443

# A comma separated list of APIs to enable when the JVB is started [default: none]
# See https://github.com/jitsi/jitsi-videobridge/blob/master/doc/rest.md for more information
#JVB_ENABLE_APIS=rest,colibri

# XMPP user for Jicofo client connections.
# NOTE: this option doesn't currently work due to a bug
JICOFO_AUTH_USER=focus

# Base URL of Jicofo's reservation REST API
#JICOFO_RESERVATION_REST_BASE_URL=http://reservation.example.com

# Enable Jicofo's health check REST API (http://<jicofo_base_url>:8888/about/health)
#JICOFO_ENABLE_HEALTH_CHECKS=true

# XMPP user for Jigasi MUC client connections
JIGASI_XMPP_USER=jigasi

# MUC name for the Jigasi pool
JIGASI_BREWERY_MUC=jigasibrewery

# Minimum port for media used by Jigasi
JIGASI_PORT_MIN=20000

# Maximum port for media used by Jigasi
JIGASI_PORT_MAX=20050

# Enable SDES srtp
#JIGASI_ENABLE_SDES_SRTP=1

# Keepalive method
#JIGASI_SIP_KEEP_ALIVE_METHOD=OPTIONS

# Health-check extension
#JIGASI_HEALTH_CHECK_SIP_URI=keepalive

# Health-check interval
#JIGASI_HEALTH_CHECK_INTERVAL=300000
#
# Enable Jigasi transcription
#ENABLE_TRANSCRIPTIONS=1

# Jigasi will record audio when transcriber is on [default: false]
#JIGASI_TRANSCRIBER_RECORD_AUDIO=true

# Jigasi will send transcribed text to the chat when transcriber is on [default: false]
#JIGASI_TRANSCRIBER_SEND_TXT=true

# Jigasi will post an url to the chat with transcription file [default: false]
#JIGASI_TRANSCRIBER_ADVERTISE_URL=true

# Credentials for connect to Cloud Google API from Jigasi
# Please read https://cloud.google.com/text-to-speech/docs/quickstart-protocol
# section "Before you begin" paragraph 1 to 5
# Copy the values from the json to the related env vars
#GC_PROJECT_ID=
#GC_PRIVATE_KEY_ID=
#GC_PRIVATE_KEY=
#GC_CLIENT_EMAIL=
#GC_CLIENT_ID=
#GC_CLIENT_CERT_URL=

# Enable recording
#ENABLE_RECORDING=1

# XMPP domain for the jibri recorder
XMPP_RECORDER_DOMAIN=recorder.meet.jitsi

# XMPP recorder user for Jibri client connections
JIBRI_RECORDER_USER=recorder

# Directory for recordings inside Jibri container
JIBRI_RECORDING_DIR=/config/recordings

# The finalizing script. Will run after recording is complete
JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh

# XMPP user for Jibri client connections
JIBRI_XMPP_USER=jibri

# MUC name for the Jibri pool
JIBRI_BREWERY_MUC=jibribrewery

# MUC connection timeout
JIBRI_PENDING_TIMEOUT=90

# When jibri gets a request to start a service for a room, the room
# jid will look like: roomName@optional.prefixes.subdomain.xmpp_domain
# We'll build the url for the call by transforming that into:
# https://xmpp_domain/subdomain/roomName
# So if there are any prefixes in the jid (like jitsi meet, which
# has its participants join a muc at conference.xmpp_domain) then
# list that prefix here so it can be stripped out to generate
# the call url correctly
JIBRI_STRIP_DOMAIN_JID=muc

# Directory for logs inside Jibri container
JIBRI_LOGS_DIR=/config/logs

# Disable HTTPS: handle TLS connections outside of this setup
#DISABLE_HTTPS=1

# Redirect HTTP traffic to HTTPS
# Necessary for Let's Encrypt, relies on standard HTTPS port (443)
ENABLE_HTTP_REDIRECT=1

# Container restart policy
# Defaults to unless-stopped
RESTART_POLICY=unless-stopped

Check this guide too if Docker is not a must for you

Looks clear and easy, but sadly Docker is a must for my deployment.
However, the thing is that the Jibri instance is running on a different machine.
I'm looking for a configuration that makes them communicate with each other.
As for Jibri, I see two files that might need to be configured:
/etc/jitsi/jibri/config.json
and
/var/jitsi/jibri/jibri.conf

This is what jibri.conf looks like now (I made no changes):

jibri {
    id = ""
    // Whether or not Jibri should return to idle state after handling
    // (successfully or unsuccessfully) a request. A value of 'true'
    // here means that a Jibri will NOT return back to the IDLE state
    // and will need to be restarted in order to be used again.
    single-use-mode = false
    api {
        http {
            external-api-port = 2222
            internal-api-port = 3333
        }
        xmpp {
            // See example_xmpp_envs.conf for an example of what is expected here
            environments = []
        }
    }
    recording {
        recordings-directory = "/tmp/recordings"
        # TODO: make this an optional param and remove the default
        finalize-script = "/path/to/finalize"
    }
    streaming {
        // A list of regex patterns for allowed RTMP URLs. The RTMP URL used
        // when starting a stream must match at least one of the patterns in
        // this list.
        rtmp-allow-list = [
            // By default, all services are allowed
            ".*"
        ]
    }
    chrome {
        // The flags which will be passed to chromium when launching
        flags = [
            "--use-fake-ui-for-media-stream",
            "--start-maximized",
            "--kiosk",
            "--enabled",
            "--disable-infobars",
            "--autoplay-policy=no-user-gesture-required"
        ]
    }
    stats {
        enable-stats-d = true
    }
    webhook {
        // A list of subscribers interested in receiving webhook events
        subscribers = []
    }
    jwt-info {
        // The path to a .pem file which will be used to sign JWT tokens used in webhook
        // requests. If not set, no JWT will be added to webhook requests.
        # signing-key-path = "/path/to/key.pem"

        // The kid to use as part of the JWT
        # kid = "key-id"

        // The issuer of the JWT
        # issuer = "issuer"

        // The audience of the JWT
        # audience = "audience"

        // The TTL of each generated JWT. Can't be less than 10 minutes.
        # ttl = 1 hour
    }
    call-status-checks {
        // If all clients have their audio and video muted and if Jibri does not
        // detect any data stream (audio or video) coming in, it will stop
        // recording after NO_MEDIA_TIMEOUT expires.
        no-media-timeout = 30 seconds

        // If all clients have their audio and video muted, Jibri considers this
        // an empty call and stops the recording after ALL_MUTED_TIMEOUT expires.
        all-muted-timeout = 10 minutes

        // When detecting if a call is empty, Jibri takes into consideration for how
        // long the call has been empty already. If it has been empty for more than
        // DEFAULT_CALL_EMPTY_TIMEOUT, it will consider it empty and stop the recording.
        default-call-empty-timeout = 30 seconds
    }
}

config.json is the deprecated config file. Don't use it, and delete it if it exists. Check this template. You need to customize it according to your environment.

But the shell script that starts Jibri as a service (systemctl start jibri), /opt/jitsi/jibri/launch.sh, refers to config.json:

#!/bin/bash

exec /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -Djava.util.logging.config.file=/etc/jitsi/jibri/logging.properties -Dconfig.file="/etc/jitsi/jibri/jibri.conf" -jar /opt/jitsi/jibri/jibri.jar --config "/etc/jitsi/jibri/config.json"

config.json is deprecated (support for it will be removed soon) but it still works. The new config file is jibri.conf.

Okay. I removed the file and filled in jibri.conf according to your template.
I basically replaced ___JITSI_HOST___ with my myJitsiServerDomainName.com.

For the control-login password, I inserted the value of the JIBRI_XMPP_PASSWORD key from the .env on the Jitsi server.

For the call-login password, I inserted the value of the JIBRI_RECORDER_PASSWORD key from the .env on the Jitsi server.

I didn’t change anything else.
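For reference, the filled-in environments section has roughly this shape (a sketch, not my exact file — XXX.200 and the passwords are placeholders, and the internal domains and usernames come from the .env above, assuming the docker-jitsi-meet defaults):

```hocon
jibri {
  api {
    xmpp {
      environments = [
        {
          name = "prod environment"
          xmpp-server-hosts = ["XXX.200"]        // the Jitsi host (placeholder)
          xmpp-domain = "meet.jitsi"             // XMPP_DOMAIN in .env

          control-muc {
            domain = "internal-muc.meet.jitsi"   // XMPP_INTERNAL_MUC_DOMAIN
            room-name = "jibribrewery"           // JIBRI_BREWERY_MUC
            nickname = "jibri-instance-1"        // any unique nickname
          }

          control-login {
            domain = "auth.meet.jitsi"           // XMPP_AUTH_DOMAIN
            username = "jibri"                   // JIBRI_XMPP_USER
            password = "<JIBRI_XMPP_PASSWORD>"   // placeholder
          }

          call-login {
            domain = "recorder.meet.jitsi"       // XMPP_RECORDER_DOMAIN
            username = "recorder"                // JIBRI_RECORDER_USER
            password = "<JIBRI_RECORDER_PASSWORD>"  // placeholder
          }

          strip-from-room-domain = "muc."        // JIBRI_STRIP_DOMAIN_JID
          trust-all-xmpp-certs = true            // internal certs are self-signed
        }
      ]
    }
  }
}
```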

For the Jitsi server, in .env
I enabled recording by setting ENABLE_RECORDING=1


However, I was not able to start the recording: it failed immediately, and I could not find any relevant logs on either server.

Change recordings-directory too. You can use the /tmp folder while testing:

recordings-directory = "/tmp"

My focus is actually on YouTube live streaming.
However, as I said, it failed, and I didn't see any relevant logs on either server.
I don't think the two servers are communicating at all (XXX.200 runs Jitsi and the other components; XXX.201 runs Jibri).

TCP/5222 on the JMS should be accessible to Jibri:

       TCP/5222
jibri ----------> JMS

And check the snd_aloop module on the Jibri server:

lsmod | grep aloop

Let me provide more details.
On the Jibri server, running lsmod | grep aloop outputs:

snd_aloop              28672  1
snd_pcm               114688  1 snd_aloop
snd                    94208  5 snd_timer,snd_aloop,snd_pcm

And running ufw status outputs:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)

On Jitsi Server running ufw status outputs:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
443/tcp                    ALLOW       Anywhere                  
10000/udp                  ALLOW       Anywhere                  
4443/udp                   ALLOW       Anywhere                  
5222/tcp                   ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)             
443/tcp (v6)               ALLOW       Anywhere (v6)             
10000/udp (v6)             ALLOW       Anywhere (v6)             
4443/udp (v6)              ALLOW       Anywhere (v6)             
5222/tcp (v6)              ALLOW       Anywhere (v6)

And running netstat -ltnp | grep -w LISTEN outputs:

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      625/sshd            
tcp6       0      0 :::80                   :::*                    LISTEN      9643/docker-proxy   
tcp6       0      0 :::22                   :::*                    LISTEN      625/sshd            
tcp6       0      0 :::4443                 :::*                    LISTEN      9949/docker-proxy   
tcp6       0      0 :::443                  :::*                    LISTEN      9621/docker-proxy 

I suppose the Jitsi server is not listening on port 5222, and I'm not sure whether the Jibri server is even trying to reach the Jitsi server.

@emrah
Here I could see a log on the Jitsi Server from jicofo:

Jicofo 2020-11-10 18:09:40.071 SEVERE: [50] org.jitsi.jicofo.recording.jibri.JibriSession.log() Unable to find an available Jibri, can’t start

Jicofo 2020-11-10 18:09:40.072 INFO: [50] org.jitsi.jicofo.recording.jibri.JibriRecorder.log() Failed to start a Jibri session, no Jibris available

Could you try to connect from the Jibri server to the JMS server through TCP/5222?

curl http://your.domain.com:5222

Try the same test using the JMS’s local IP too
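If curl isn't handy, a small bash probe does the same job (a sketch: the hostnames below are placeholders, and /dev/tcp is a bash-only pseudo-device):

```shell
#!/bin/bash
# probe HOST PORT — try to open a TCP connection within 5 seconds.
# Uses bash's /dev/tcp pseudo-device, so it needs bash, not plain sh.
probe() {
    if timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "TCP/$2 on $1: reachable"
    else
        echo "TCP/$2 on $1: unreachable"
    fi
}

probe my.domain.com 5222   # the JMS public name (placeholder)
probe XXX.200 5222         # the JMS local IP (placeholder)
```

Run it on the Jibri server; both lines should report "reachable" before Jibri can register with the JMS.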

It could not connect.
There is basically no listener on port 5222 on the Jitsi server.

curl: (7) Failed to connect to my.domain.com port 5222: Connection refused

It seems there is no working Prosody on your system. You need to solve this issue first.

Did you try to start a conference with three participants?

Yes, I tried with more than 2 participants (up to 5), and it works.

Here are the docker logs coming from prosody container:

startup info Hello and welcome to Prosody version 0.11.5

[services.d] done.

startup info Prosody is using the epoll backend for connection handling

modulemanager error Unable to load module ‘info’: /usr/lib/prosody/modules/mod_info.lua: No such file or directory

modulemanager error Unable to load module ‘alert’: /usr/lib/prosody/modules/mod_alert.lua: No such file or directory

general info Starting speakerstats for muc.MY.DOMAIN

speakerstats.MY.DOMAIN:speakerstats_component info Hook to muc events on muc.MY.DOMAIN

portmanager info Activated service ‘c2s’ on [::]:5222, [*]:5222

portmanager info Activated service ‘legacy_ssl’ on no ports

recorder.MY.DOMAIN.com:tls error Error creating context for c2s: No certificate present in SSL/TLS configuration for recorder.MY.DOMAIN.com

recorder.MY.DOMAIN.com:tls error Error creating contexts for s2sin: No certificate present in SSL/TLS configuration for recorder.MY.DOMAIN.com

portmanager info Activated service ‘http’ on [::]:5280, [*]:5280

portmanager info Activated service ‘https’ on no ports

modulemanager error Unable to load module ‘info’: /usr/lib/prosody/modules/mod_info.lua: No such file or directory

modulemanager error Unable to load module ‘alert’: /usr/lib/prosody/modules/mod_alert.lua: No such file or directory

portmanager info Activated service ‘component’ on [*]:5347

internal.auth.MY.DOMAIN.com:tls error Error creating context for c2s: No certificate present in SSL/TLS configuration for internal.auth.MY.DOMAIN.com

internal.auth.MY.DOMAIN.com:tls error Error creating contexts for s2sin: No certificate present in SSL/TLS configuration for internal.auth.MY.DOMAIN.com
general info Starting conference duration timer for muc.MY.DOMAIN

conferenceduration.MY.DOMAIN:conference_duration_component info Hook to muc events on muc.MY.DOMAIN

c2s55c9d2189cc0 info Client connected

c2s55c9d29faca0 info Client connected

c2s55c9d29faca0 info Stream encrypted (TLSv1.2 with ECDHE-RSA-AES256-GCM-SHA384)

c2s55c9d2189cc0 info Stream encrypted (TLSv1.2 with ECDHE-RSA-AES256-GCM-SHA384)

c2s55c9d29faca0 info Authenticated as jvb@auth.MY.DOMAIN

c2s55c9d2189cc0 info Authenticated as focus@auth.MY.DOMAIN

focus.MY.DOMAIN:component warn Component not connected, bouncing error for:

The thing is that the Jitsi server runs behind docker-proxy, so most probably Prosody (or whatever should listen on 5222) is listening inside the container only.

Here’s the result of running docker ps -a

CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                            NAMES
375c9a9838d6        jitsi/jicofo:stable-5142     "/init"                  37 minutes ago      Up 9 minutes                                                         docker-jitsi-meet_jicofo_1
fa898211e6ff        jitsi/jvb:stable-5142        "/init"                  37 minutes ago      Up 9 minutes        0.0.0.0:3478->3478/udp, 0.0.0.0:4443->4443/tcp   docker-jitsi-meet_jvb_1
17d7a3a652ac        jitsi/web:stable-5142        "/init"                  37 minutes ago      Up 9 minutes        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp         docker-jitsi-meet_web_1
da8951169bab        jitsi/prosody:stable-5142    "/init"                  37 minutes ago      Up 9 minutes        5222/tcp, 5280/tcp, 5347/tcp                     docker-jitsi-meet_prosody_1
38e17b5da51a        jitsi/etherpad:stable-5142   "docker-entrypoint.s…"   37 minutes ago      Up 9 minutes        9001/tcp

Since you have a running Jitsi, Prosody should be up, but Jibri can't connect to it through TCP/5222. There is a routing problem, or something is preventing the connection.

Did you test the TCP/5222 connectivity using the JMS's local IP, on the Jibri server?

curl http://jms-local-ip:5222/

host your.domain.com

I'm assuming that the JMS and Jibri are on the same network.

I'm confused. Why do I need to test it on the Jibri server? I didn't even open 5222 on the Jibri server.
As I mentioned above, running ufw status on the Jibri server outputs:

Status: active
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)  

And this is the result of running netstat -ltnp | grep -w 'LISTEN'

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      653/sshd            
tcp6       0      0 :::22                   :::*                    LISTEN      653/sshd            
tcp6       0      0 :::8001                 :::*                    LISTEN      2507/java           
tcp6       0      0 :::8002                 :::*                    LISTEN      2507/java 

All of the above is from the Jibri server. Why would I need to open 5222 on the Jibri server?

This command tries to establish a connection from Jibri to the JMS. It doesn't test Jibri itself.