Multiple jibri servers on one machine

Hi everyone!

It is actually a pleasure to be able to read so many articles about Jitsi all around the platform from you guys, so first a big shout-out to all of you and a huge thank you.

I have to add that I'm not as advanced a user as you people are, but I try to follow every single piece of instruction or advice you've posted.

As you might have read, the title of this post is related to the deployment of multiple Jibri servers. I've successfully hosted Jitsi Meet and a Jibri server on the same machine (which is my personal server), with enough computing power to perform both tasks.

I've understood that, within the one and only Jibri server that I have, I could simply add multiple XMPP environments in the config.json file with different nicknames for each brewery instance, but I'm not able to get it to work.

The thing is… Is it actually possible to make it work the way I've been trying to? Have I understood it right? Could it be a simple syntax error that prevents me from achieving this goal?

I would appreciate any insight that you could give me.



To run it on the same machine, you need to define more loopback devices for audio, more v4l devices, and more Xorg servers, and modify Jibri so you can configure different values for different Jibri instances, as some of the values are hardcoded… So it is not an easy task.


Thank you for replying that quickly.

I've been thinking about the possibility of running containerized instances of Jibri within Docker and passing all the required software into them.

Do you think that’s a possible scenario?

Thanks again!

There’s some work going on around that here:


Definitely will do that!

Thank you!

Do I need multiple Xorg instances? Can I run multiple Google Chrome processes in the same Xorg :0 instance? So, can I make multiple loops like csnoop, dsnoop, esnoop, etc., and start multiple Jibris on the same Xorg instance?

@jota @bbaldino Hi guys, I am deploying multiple Jibri instances on Docker and have already followed the official guide properly, but I still face an issue when recording on the second instance; the first instance is working fine. I deployed Docker Jibri on Google Cloud, on a Compute Engine Ubuntu Server 16.04 instance.
I will explain what I have done beyond what is in the official guide. I updated jibri.yml to make it standalone: I just removed the depends_on Jicofo entry and the meet.jitsi networks entry. Here is my jibri.yml:

version: '3'

services:
    jibri:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri:/config
            - /dev/shm:/dev/shm
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_BREWERY_MUC
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ

I run the first instance using this command:

docker-compose -f jibri.yml up -d

After the first instance goes up, I change /home/jibri/.asoundrc to set up the second instance:

slave.pcm "hw:Loopback_1,0,0"
slave.pcm "hw:Loopback_1,0,1"
slave.pcm "hw:Loopback_1,1,1"
slave.pcm "hw:Loopback_1,1,0"

And then I run the second instance using this command:

docker-compose -f jibri.yml up -d --scale jibri=2

Like I stated before, the first instance is working fine. The video shows up in ~/.jitsi-meet-cfg/jibri/recordings, but the second instance got this error in log.0.txt.1:

2020-04-27 06:11:18.607 INFO: [59] org.jitsi.jibri.capture.ffmpeg.FfmpegCapturer.onFfmpegProcessUpdate() Ffmpeg quit abruptly. Last output line: plug:cloop: Input/output error
2020-04-27 06:11:18.609 INFO: [59] org.jitsi.jibri.capture.ffmpeg.FfmpegCapturer.onFfmpegStateMachineStateChange() Ffmpeg capturer transitioning from state Starting up to Error: SESSION Ffmpeg failed to start
2020-04-27 06:11:18.611 INFO: [59] org.jitsi.jibri.service.impl.FileRecordingJibriService.onServiceStateChange() File recording service transitioning from state Starting up to Error: SESSION Ffmpeg failed to start
2020-04-27 06:11:18.612 INFO: [59] org.jitsi.jibri.api.xmpp.XmppApi.invoke() Current service had an error, sending error iq

and here is the ffmpeg.0.txt.1 error log:

2020-04-27 06:11:13.769 INFO: [59] Input #0, x11grab, from ':0.0+0,0':
2020-04-27 06:11:13.769 INFO: [59] Duration: N/A, start: 1587960673.724562, bitrate: N/A
2020-04-27 06:11:13.769 INFO: [59] Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x720, 30 fps, 1000k tbr, 1000k tbn, 1000k tbc
2020-04-27 06:11:13.769 INFO: [59] ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
2020-04-27 06:11:13.769 INFO: [59] [alsa @ 0x55f311051e80] cannot open audio device plug:cloop (Device or resource busy)
2020-04-27 06:11:13.769 INFO: [59] plug:cloop: Input/output error

I suspect the Jibri instances don't read the .asoundrc file, so all instances still use the same loopback, and that is why the second instance got the error. To verify, I even replaced it with a wrong one and renamed it to .bsoundrc to force an error, but the first instance still worked fine.
Any suggestions to fix this issue?
Thanks in advance.


Have you tried this?

I found it a few days back.

I gave it a quick look through the YAML and it seems to contain everything it needs to work. I also read the Jibri container and it has everything it needs to work on its own. Haven't tried it yet, but it sounds promising.

Thanks for your response.

Yes, I have tried it. The first instance is working fine. My issue only happens on the second instance. That guidance doesn’t explain how to create multiple Jibri instances.

I'll give it a try and post what I get.

Where did you change the .asoundrc file? Remember you must change it INSIDE EVERY Docker container.

First start the containers: "docker-compose -f jibri.yml up -d --scale jibri=2"

Then enter every Docker container: change all the Loopback instances to Loopback_1 in the first container's /home/jibri/.asoundrc, then enter the second one and change Loopback to Loopback_2.
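If you prefer to script that edit instead of doing it by hand, here is a sketch of the same rewrite (shown on a local demo copy; inside a container you would target /home/jibri/.asoundrc via docker exec, and the container names depend on your compose project name, so they are assumptions here):

```shell
# Demo of the Loopback -> Loopback_2 rewrite for the second instance's .asoundrc.
# The file content below mirrors the slave.pcm lines quoted earlier in this thread.
cat > asoundrc.demo <<'EOF'
slave.pcm "hw:Loopback,0,0"
slave.pcm "hw:Loopback,0,1"
slave.pcm "hw:Loopback,1,1"
slave.pcm "hw:Loopback,1,0"
EOF

# Point every slave.pcm at the Loopback_2 card instead of the default Loopback:
sed -i 's/hw:Loopback,/hw:Loopback_2,/' asoundrc.demo

# Inside a running container the same edit would look like this
# (the container name is an assumption):
#   docker exec <jibri-container-2> sed -i 's/hw:Loopback,/hw:Loopback_2,/' /home/jibri/.asoundrc
```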

To test, you can execute the ffmpeg command inside the container from the command line as the jibri user. The full command is inside log.0.txt. For example:

su - jibri -c "ffmpeg -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 -f alsa -thread_queue_size 4096 -i plug:cloop -acodec aac -strict -2 -ar 44100 -c:v libx264 -preset veryfast -profile:v main -level 3.1 -pix_fmt yuv420p -r 30 -crf 25 -g 60 -tune zerolatency -f mp4 /home/recordings/fbmeehhvbosyuvlb/prueba_2020-05-06-12-37-29.mp4"

"/home/recordings" is my recordings directory…

If that command runs OK, you can then test recording from a Jitsi room.


Hi deamencho,
Can you tell me more about this configuration? Thanks.

Hello @jjmasdeu and Jitsi Gurus,

I tried to follow

If I will have 10 Jibri Docker instances on Ubuntu 18, should I change the Loopback entries as follows:

"To set up the second instance, run the container with a changed /home/jibri/.asoundrc:

slave.pcm "hw:Loopback_1,0,0"
slave.pcm "hw:Loopback_1,0,1"
slave.pcm "hw:Loopback_1,1,1"
slave.pcm "hw:Loopback_1,1,0"

…up to the 10th instance, running the container with a changed /home/jibri/.asoundrc:

slave.pcm "hw:Loopback_9,0,0"
slave.pcm "hw:Loopback_9,0,1"
slave.pcm "hw:Loopback_9,1,1"
slave.pcm "hw:Loopback_9,1,0"
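(In other words, I assume the four lines for any card N could be generated like this; just my sketch, please correct me if I am wrong:)

```shell
# Print the four slave.pcm lines for a given loopback card name.
card=Loopback_9
for dev_sub in 0,0 0,1 1,1 1,0; do
  echo "slave.pcm \"hw:${card},${dev_sub}\""
done
```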


Is there any solution to scale up Jibri automatically :thinking:?


Yes, you must change the /home/jibri/.asoundrc file inside every Docker container you run, like you describe above.
Remember you need to declare the 10 ALSA loops on the HOST before starting the 10 Docker guests.

Did you solve it? Please post if you did.


Hi @jjmasdeu,

Thank you for your guidance. I am still unable to run the 2nd-to-10th Jibris. :slight_smile:

Is the standard ALSA declaration below enough (mine is Ubuntu 18), or does it need to be customized to suit the ten Jibris? Please point out and provide the necessary edits, if any.

# install the module
apt update && apt install linux-image-extra-virtual
# configure 5 capture/playback interfaces
echo "options snd-aloop enable=1,1,1,1,1 index=0,1,2,3,4" > /etc/modprobe.d/alsa-loopback.conf
# setup autoload the module
echo "snd-aloop" >> /etc/modules
# check that the module is loaded
lsmod | grep snd_aloop

Also, let's just say I need 6 of the 10 Jibris running all the time (the other 4 Jibri nodes will be called up later when scaling up is needed). Do I need to host each of the 6 Jibris in a separate VM with a minimum of 6 × 4 vCPUs and 4 GB RAM (that's a lot, though), or do you have any other approach? How do I scale up to call the remaining 4 Jibris? :thinking:

I guess I am asking too many questions, LOL. But any answer will always satisfy me.

Thank you again. :+1:

The standard ALSA declaration is enough to set up 5 loops. If you need more (10), you need:

echo "options snd-aloop enable=1,1,1,1,1,1,1,1,1,1 index=0,1,2,3,4,5,6,7,8,9" > /etc/modprobe.d/alsa-loopback.conf
Remember to restart the module with this command:

modprobe -r snd_aloop && modprobe snd_aloop

Then you can check the config with the command "aplay -l". This command should show you 10 loops configured, from Loopback to Loopback_9, like:

card 4: Loopback_X [Loopback], device 1: Loopback PCM [Loopback PCM] (output is Spanish in my case: "tarjeta", "dispositivo")
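If you don't want to type those enable/index lists by hand, here is a quick sketch to generate the options line for any number of loops (a hypothetical helper, nothing Jibri itself requires):

```shell
# Generate the snd-aloop options line for N loopback cards.
n=10
enable=$(printf '1,%.0s' $(seq "$n")); enable=${enable%,}   # "1,1,...,1" (n times)
index=$(seq -s, 0 $((n - 1)))                               # "0,1,...,n-1"
echo "options snd-aloop enable=$enable index=$index"
# For n=10 this prints the same line shown above, ready for
# /etc/modprobe.d/alsa-loopback.conf
```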

All this configuration is for setting up Jibri Docker containers inside the same machine (or VM), not separate Jibri VMs. Are you trying to install a separate VM for every Jibri???

Awesome bro,

Maybe yes, I might need to install the last 4 Jibris on a different machine (VM) due to the escalating processing power they may need (I only have a fixed budget of 2 × 4 vCPUs and 4–8 GB RAM for ten Jibris). Or, if your approach of installing all 10 Jibris on one machine is reasonably fine, then what machine specification and scaling method do you have in mind? Do you use a serverless feature with Kubernetes (and maybe OpenFaaS) to autoscale Jibri in a cluster, presuming you run all your Jibri nodes in one VM successfully?

Thank you :+1: :+1:

Hi, I was able to scale it to 16 Jibri servers on the same machine with --scale jibri=16. This is because snd_aloop can support only 32 loopback interfaces. I was using this .asoundrc. According to this, each Jibri needs two loopback interfaces:

pcm.amix {
  type dmix
  ipc_key 219345
  slave.pcm "hw:Loopback_1,0,0"
}

pcm.asnoop {
  type dsnoop
  ipc_key 219346
  slave.pcm "hw:Loopback_2,1,0"
}

pcm.aduplex {
  type asym
  playback.pcm "amix"
  capture.pcm "asnoop"
}

pcm.bmix {
  type dmix
  ipc_key 219347
  slave.pcm "hw:Loopback_2,0,0"
}

pcm.bsnoop {
  type dsnoop
  ipc_key 219348
  slave.pcm "hw:Loopback_1,1,0"
}

pcm.bduplex {
  type asym
  playback.pcm "bmix"
  capture.pcm "bsnoop"
}

pcm.pjsua {
  type plug
  slave.pcm "bduplex"
}

pcm.!default {
  type plug
  slave.pcm "aduplex"
}
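As a quick sanity check on the 16-instance figure (assuming ALSA's limit of 32 sound cards and the two-loopback-cards-per-Jibri wiring above):

```shell
# ALSA indexes at most 32 sound cards; the paired dmix/dsnoop setup above
# consumes two loopback cards per Jibri instance.
max_cards=32
cards_per_jibri=2
echo "$((max_cards / cards_per_jibri)) instances max"
```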

Hi @BharathKadaluri,

Newbie question: where do I find or put "--scale jibri=16"? :slight_smile:

What machine specification (number of vCPU cores and amount of RAM, maybe a GPU needed?) do you use to pack these 16 Jibris in at once? I guess this machine does not share resources with Jitsi Meet, NGINX, Prosody, etc., does it?

Thank you