Jibri on Kubernetes

Hello, I am trying to set up Jibri on Kubernetes, but I am getting errors and was hoping someone could help out.

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-set-timezone: executing...
[cont-init.d] 01-set-timezone: exited 0.
[cont-init.d] 10-config: executing...
ERROR: Please load snd-aloop module on the docker host.
ERROR: Binding /dev/snd is not found. Please check that you run docker-compose with -f jibri.yml.
Usage: usermod [options] LOGIN
Options:
-b, --badnames                allow bad names
-c, --comment COMMENT         new value of the GECOS field
-d, --home HOME_DIR           new home directory for the user account
-e, --expiredate EXPIRE_DATE  set account expiration date to EXPIRE_DATE
-f, --inactive INACTIVE       set password inactive after expiration to INACTIVE
-g, --gid GROUP               force use GROUP as new primary group
-G, --groups GROUPS           new list of supplementary GROUPS
-a, --append                  append the user to the supplemental GROUPS mentioned by the -G option without removing the user from other groups
-h, --help                    display this help message and exit
-l, --login NEW_LOGIN         new value of the login name
-L, --lock                    lock the user account
-m, --move-home               move contents of the home directory to the new location (use only with -d)
-o, --non-unique              allow using duplicate (non-unique) UID
-p, --password PASSWORD       use encrypted password for the new password
-R, --root CHROOT_DIR         directory to chroot into
-P, --prefix PREFIX_DIR       prefix directory where the /etc/* files are located
-s, --shell SHELL             new login shell for the user account
-u, --uid UID                 new UID for the user account
-U, --unlock                  unlock the user account
-v, --add-subuids FIRST-LAST  add range of subordinate uids
-V, --del-subuids FIRST-LAST  remove range of subordinate uids
-w, --add-subgids FIRST-LAST  add range of subordinate gids
-W, --del-subgids FIRST-LAST  remove range of subordinate gids
-Z, --selinux-user SEUSER     new SELinux user mapping for the user account
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[cont-init.d] 10-config: exited 0.
[cont-init.d] done.
[services.d] starting services
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

This is my Jibri deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose --file jibri.yml convert
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: jibri
  name: jibri
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: jibri
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose --file jibri.yml convert
        kompose.version: 1.26.1 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.service: jibri
    spec:
      containers:
        - env:
            - name: DISPLAY
              value: :0
            - name: ENABLE_STATS_D
              value: 'true'
            - name: JIBRI_BREWERY_MUC
              value: 'jibribrewery'
            - name: JIBRI_FFMPEG_AUDIO_DEVICE
              value: 'default'
            - name: JIBRI_FFMPEG_AUDIO_SOURCE
              value: 'alsa'
            - name: JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
              value: /config/jibri/finalize.sh
            - name: JIBRI_HTTP_API_EXTERNAL_PORT
              value: '2222'
            - name: JIBRI_HTTP_API_INTERNAL_PORT
              value: '3333'
            - name: JIBRI_LOGS_DIR
              value: /config/jibri/logs
            - name: JIBRI_RECORDER_PASSWORD
              value: password here
            - name: JIBRI_RECORDER_USER
              value: recorder
            - name: JIBRI_RECORDING_DIR
              value: '/config/jibri/recording'
            - name: JIBRI_RECORDING_RESOLUTION
              value: '1280x720'
            - name: JIBRI_STRIP_DOMAIN_JID
              value: muc
            - name: JIBRI_USAGE_TIMEOUT
              value: '0'
            - name: JIBRI_XMPP_PASSWORD
              value: password here
            - name: JIBRI_XMPP_USER
              value: jibri
            - name: PUBLIC_URL
              value: https://jitsi.example.com
            - name: TZ
              value: 'America/New_York'
            - name: XMPP_AUTH_DOMAIN
              value: auth.meet.jitsi
            - name: XMPP_DOMAIN
              value: meet.jitsi
            - name: XMPP_INTERNAL_MUC_DOMAIN
              value: internal-muc.meet.jitsi
            - name: XMPP_RECORDER_DOMAIN
              value: recorder.meet.jitsi
            - name: XMPP_SERVER
              value: prosody
            - name: XMPP_TRUST_ALL_CERTS
              value: 'true'
          image: jitsi/jibri:stable-7001
          name: jibri
          ports:
            - containerPort: 2222
            - containerPort: 3333
          resources: {}
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
                - NET_BIND_SERVICE
          volumeMounts:
            - mountPath: /config
              name: jibri-claim0
            - mountPath: /dev/shm
              name: jibri-claim1
      restartPolicy: Always
      volumes:
        - name: jibri-claim0
          persistentVolumeClaim:
            claimName: jibri-claim0
        - name: jibri-claim1
          persistentVolumeClaim:
            claimName: jibri-claim1
status: {}

I’m not familiar with Kubernetes, but this error suggests the kernel is missing the snd-aloop module. Without that module, Jibri can’t work.

If you want to use ALSA, you need to load that kernel module on every Kubernetes node that can run a Jibri pod, before the pod starts (so ideally configure it to load at boot). How you do that depends on the OS you are using for your Kubernetes nodes; check your distribution's documentation or ask your vendor.
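As a rough sketch, on a typical systemd-based Linux node it would look something like the following (assumes root access and that your kernel ships the snd-aloop module; adapt to your distribution):

```shell
# Load the ALSA loopback module immediately
modprobe snd-aloop

# Have it loaded automatically at boot (systemd-based distributions)
echo "snd-aloop" > /etc/modules-load.d/snd-aloop.conf

# Verify it is loaded (lsmod lists it as snd_aloop, with an underscore)
lsmod | grep snd_aloop
```

This is host-level provisioning, not something you can do from the pod spec itself, so it has to be baked into your node image or startup configuration.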

You can also modify Jibri's ffmpeg command line to use PulseAudio instead, which doesn't require any kernel modules. Search the source code for alsa and you should find the relevant command line; the ffmpeg documentation covers how to select audio inputs and how to configure the PulseAudio input.
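Note that the deployment above already passes JIBRI_FFMPEG_AUDIO_SOURCE=alsa and JIBRI_FFMPEG_AUDIO_DEVICE=default to the container. Assuming your jitsi/jibri image tag accepts pulse as a source value (verify this against the docker-jitsi-meet documentation for your tag; it is an assumption here), the switch would be just an env change:

```yaml
# Hypothetical: switch Jibri's ffmpeg input from ALSA to PulseAudio.
# Confirm your jitsi/jibri image tag actually supports "pulse" here.
- name: JIBRI_FFMPEG_AUDIO_SOURCE
  value: 'pulse'
- name: JIBRI_FFMPEG_AUDIO_DEVICE
  value: 'default'
```

This corresponds to ffmpeg taking its input with -f pulse -i default instead of -f alsa -i default.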

Thank you! I will switch to PulseAudio, and I'm also going to look into how to load that module on the Kubernetes nodes.