Jibri for Jitsi on Kubernetes

I have set up a working Jitsi (web/jicofo/prosody/jvb) configuration in a single pod on a custom Kubernetes cluster, and I'm now trying to bring in Jibri to enable the recording features. But it fails, with the following error in the jibri container's log:

Jibri 2022-06-01 08:32:05.409 WARNING: [29] [hostname=localhost id=localhost] MucClient.lambda$getConnectAndLoginCallable$7#661: Failed to join the MUCs. org.jivesoftware.smack.XMPPException$XMPPErrorException: XMPP error reply received from jibribrewery@internal-muc.meet.jitsi/jibri-653057866: XMPPError: not-allowed - cancel [Room creation is restricted]

Most of the articles and forum posts suggest adding ENABLE_RECORDING to the jicofo container, but that was not successful for me.

jitsi community answer

github answer

As I understand it, XMPP_INTERNAL_MUC_DOMAIN is important; I have used the value "internal-muc.meet.jitsi"

Does anybody have any idea how to fix it?

PS: I'm using the stable-7001 version across all containers

Thanks in advance


This is wrong. It has to be your domain.

This means that jicofo is not connected to that room.
Share your jicofo configs

Hi Damencho, I have added the jicofo configs as requested.

    - name: jicofo
      image: jitsi/jicofo:stable-7001
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /config
          name: jicofo-config-volume
      env:
        - name: XMPP_SERVER
          value: localhost
        - name: XMPP_DOMAIN
          value: meet.jitsi
        - name: XMPP_AUTH_DOMAIN
          value: auth.meet.jitsi
        - name: ENABLE_RECORDING
          value: "1"
        - name: JIBRI_BREWERY_MUC
          value: jibribrewery
        - name: PUBLIC_URL
          value: https://<public url>
        - name: XMPP_INTERNAL_MUC_DOMAIN
          value: internal-muc.meet.jitsi
        - name: JICOFO_COMPONENT_SECRET
          valueFrom:
            secretKeyRef:
              name: jitsi-config
              key: JICOFO_COMPONENT_SECRET
        - name: JICOFO_AUTH_USER
          value: focus
        - name: JICOFO_AUTH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jitsi-config
              key: JICOFO_AUTH_PASSWORD
        - name: TZ
          value: Asia/Colombo
        - name: JVB_BREWERY_MUC
          value: jvbbrewery

As in my public domain? I'm using an nginx ingress on Kubernetes to manage the traffic, but all the components except the jvb are in the same pod.

Nope, not the Docker config, but the actual jicofo configs: sip-communicator.properties and/or jicofo.conf. Sorry, I'm not familiar with Docker... normally that is in /etc/jitsi/jicofo; in Docker it may be in the config folder...
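For orientation, the setting jicofo cares about here is the Jibri brewery JID. A minimal, illustrative sketch of the relevant jicofo.conf section follows; the exact JID is an assumption built from the JIBRI_BREWERY_MUC and internal MUC domain values shown earlier in this thread and must match whatever MUC your Jibri instances actually join:

```
# /etc/jitsi/jicofo/jicofo.conf -- illustrative fragment, not a full config
jicofo {
  jibri {
    # Must correspond to JIBRI_BREWERY_MUC @ XMPP_INTERNAL_MUC_DOMAIN
    brewery-jid = "jibribrewery@internal-muc.meet.jitsi"
  }
}
```

If this JID and the domain Jibri is configured with disagree, you get exactly the "Failed to join the MUCs" symptom from the first post.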

Also is jicofo starting before jibri?

Yes, jicofo is starting before jibri. I will exec into the pod, go through the directories, and check it out.

Managed to fix this by adding the STRIP_JID as muc

I have all of that set up, but I am getting this error:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-set-timezone: executing...
[cont-init.d] 01-set-timezone: exited 0.
[cont-init.d] 10-config: executing...
ERROR: Please load snd-aloop module on the docker host.
ERROR: Binding /dev/snd is not found. Please check that you run docker-compose with -f jibri.yml.
Usage: usermod [options] LOGIN
-b, --badnames                allow bad names
-c, --comment COMMENT         new value of the GECOS field
-d, --home HOME_DIR           new home directory for the user account
-e, --expiredate EXPIRE_DATE  set account expiration date to EXPIRE_DATE
-f, --inactive INACTIVE       set password inactive after expiration
-g, --gid GROUP               force use GROUP as new primary group
-G, --groups GROUPS           new list of supplementary GROUPS
-a, --append                  append the user to the supplemental GROUPS
mentioned by the -G option without removing
the user from other groups
-h, --help                    display this help message and exit
-l, --login NEW_LOGIN         new value of the login name
-L, --lock                    lock the user account
-m, --move-home               move contents of the home directory to the
new location (use only with -d)
-o, --non-unique              allow using duplicate (non-unique) UID
-p, --password PASSWORD       use encrypted password for the new password
-R, --root CHROOT_DIR         directory to chroot into
-P, --prefix PREFIX_DIR       prefix directory where are located the /etc/* files
-s, --shell SHELL             new login shell for the user account
-u, --uid UID                 new UID for the user account
-U, --unlock                  unlock the user account
-v, --add-subuids FIRST-LAST  add range of subordinate uids
-V, --del-subuids FIRST-LAST  remove range of subordinate uids
-w, --add-subgids FIRST-LAST  add range of subordinate gids
-W, --del-subgids FIRST-LAST  remove range of subordinate gids
-Z, --selinux-user SEUSER     new SELinux user mapping for the user account
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[cont-init.d] 10-config: exited 0.
[cont-init.d] done.
[services.d] starting services
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

This is all done in Kubernetes. I have two questions: how do you load the snd-aloop module, and how do I fix the "supervisor not listening" error?

The above link should help you. Basically you need to get the sound devices working on the host and run the following commands.

To see all the devices:
aplay -l

If you can't find any card devices, install the ALSA utilities:
apt install alsa-utils

Load the module with:
modprobe snd_aloop

Check if the module is loaded:
lsmod | grep snd_aloop
aplay -l
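One thing the commands above don't cover: modprobe only loads the module until the next reboot of the node. A hedged sketch of making it persistent, assuming a systemd-based host (the modules-load.d path is the standard location on such systems):

```shell
# Load the loopback module now (run on the Kubernetes node, as root)
modprobe snd_aloop

# Persist across reboots via modules-load.d (systemd hosts)
echo "snd_aloop" > /etc/modules-load.d/snd_aloop.conf

# Verify the module and the Loopback card are visible
lsmod | grep snd_aloop
aplay -l | grep -i loopback
```

On managed node pools you may need a privileged DaemonSet or startup script to run this, since you can't always SSH to the nodes.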

Then you need to make sure you have granted the device access in your deployment manifest.
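For the manifest part, a minimal sketch of exposing the host's sound devices to the Jibri container; the volume name is illustrative, and `privileged: true` is the blunt-but-simple option (you may be able to narrow it to specific capabilities in your environment):

```yaml
# Fragment of a Jibri pod spec -- illustrative, adapt names to your manifest
    - name: jibri
      image: jitsi/jibri:stable-7001
      securityContext:
        privileged: true        # simplest way to allow /dev/snd access
      volumeMounts:
        - name: dev-snd
          mountPath: /dev/snd
  volumes:
    - name: dev-snd
      hostPath:
        path: /dev/snd
```

Without the hostPath mount, the container sees no /dev/snd at all, which is exactly the "Binding /dev/snd is not found" error in the log above.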


I actually followed this; the issue is that the Kubernetes pod fails to stand up due to the error I posted, so I can't exec into the pod to see whether the devices are there and install them if they are not. Any thoughts?

Do you have the .asoundrc file on the server?

Use PulseAudio; it allows you to do the audio loopback in userspace and doesn't require any kernel modules or messing with the ALSA config.
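If you go the PulseAudio route, the userspace loopback can be sketched roughly as below; the sink name is illustrative, and wiring Jibri's capture to the monitor source is left to your Jibri/ffmpeg configuration:

```shell
# Start a PulseAudio daemon inside the container (no kernel module needed)
pulseaudio --daemonize=yes --exit-idle-time=-1

# Create a virtual sink; its ".monitor" source acts as the loopback input
pactl load-module module-null-sink sink_name=jibri_sink
pactl set-default-sink jibri_sink
pactl set-default-source jibri_sink.monitor
```

The null sink's monitor plays the role that the snd-aloop card plays in the default ALSA setup, so no /dev/snd binding is required.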

@jbg I have PulseAudio and it's working, but now it just fails to record. I have 2 Jibri instances stood up, each with 6 CPU and 8 GB; it just tells me "preparing to record" but never starts!

Look in the Jibri logs: when you start the recording it should emit logs while starting chromedriver, etc. Most likely it logs an error if it fails to start recording.
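In a Kubernetes setup, the quickest ways to watch those logs are sketched below; the pod and container names are placeholders for whatever your deployment uses:

```shell
# Follow the jibri container's stdout
kubectl logs -f <jibri-pod> -c jibri

# Or exec in and tail Jibri's own log file directly
kubectl exec -it <jibri-pod> -c jibri -- tail -f /var/log/jibri/log.0.txt
```

Start a recording while tailing; the chromedriver/ffmpeg startup lines (or the error) should appear within a few seconds.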

@Pamuditha_Navaratne I have the same problem.
Did you deploy jibri in the same pod?
Can you share your deployment YAML file with us?
Thank you

Hi, yes… I took the jvb to a separate pod because that allows me to scale it; the rest [jibri/prosody/web/jicofo] I kept in a single pod.
Also, sorry, I'm unable to share my manifests as they contain confidential IPs, but I can direct you to my references.