[Solved] Scaling Jibri

Thank you so much @Yasen_Pramatarov1, I will apply your solution and let you know. Thanks

I'm trying to set up Jitsi + Jibri in a Docker Swarm on my local server. I used the custom Jibri image https://github.com/kpeiruza/jitsi-images

My docker-compose.yml file is below.

version: '3.4'

services:
  # Frontend
  web:
    image: jitsi/web
    volumes:
      - /opt/efs/jitsi-meet-cfg/web:/config
      - /opt/efs/jitsi-meet-cfg/web/letsencrypt:/etc/letsencrypt
      - /opt/efs/jitsi-meet-cfg/transcripts:/usr/share/jitsi-meet/transcripts
      - /home/ubuntu/jitsi-meet/jitsi-meet:/usr/share/jitsi-meet/
    env_file: .env
    networks:
      jitsi:
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.server==jitsi1"
  # XMPP server
  prosody:
    image: jitsi/prosody
    volumes:
      - /opt/efs/jitsi-meet-cfg/prosody:/config
    env_file: .env
    networks:
      jitsi:
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.server==jitsi1"
  # Focus component
  jicofo:
    image: jitsi/jicofo
    volumes:
      - /opt/efs/jitsi-meet-cfg/jicofo:/config
    env_file: .env
    depends_on:
      - prosody
    networks:
      jitsi:
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.server==jitsi1"
  jibri:
    image: jibri-custom:latest
    environment:
      - XMPP_AUTH_DOMAIN
      - XMPP_INTERNAL_MUC_DOMAIN
      - XMPP_RECORDER_DOMAIN
      - XMPP_SERVER
      - XMPP_DOMAIN
      - JIBRI_XMPP_USER
      - JIBRI_XMPP_PASSWORD
      - JIBRI_BREWERY_MUC
      - JIBRI_RECORDER_USER
      - JIBRI_RECORDER_PASSWORD
      - JIBRI_RECORDING_DIR
      - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
      - JIBRI_STRIP_DOMAIN_JID
      - JIBRI_LOGS_DIR
      - DISPLAY=:0
    networks:
      jitsi:
    deploy:
      replicas: 2
      placement:
        constraints:
          - "node.labels.server==master"
  # Video bridge
  jvb:
    image: jitsi/jvb
    env_file: .env
    environment:
      - DOCKER_HOST_ADDRESS=192.168.2.113
    depends_on:
      - prosody
    networks:
      jitsi:
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.server==jitsi1"
  jvb2:
    image: jitsi/jvb:stable-4548-1
    env_file: .env
    environment:
      - DOCKER_HOST_ADDRESS=192.168.2.110
    depends_on:
      - prosody
    networks:
      jitsi:
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.server==jitsi2"

# Use external host network
networks:
  jitsi:
    external:
      name: "host"

But I'm getting this error:
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn’t already running(EE)
(EE)

Full log is attached. Can anyone please help me? error.txt (1.8 KB)

Hi, I was trying to apply your solution. One quick question: I was checking the Jibri status from the Jitsi server side using the command below, but I am getting a "curl: (7) Failed to connect to 13.232.162.202 port 2222: Connection timed out" error. Is this because port 2222 is not open on the Jibri server side? Could you please help me with this?

curl http://13.232.162.202:2222/jibri/api/v1.0/health

It's either that (then check for firewalls, security groups, etc.) or Jibri is not actually working. If it's not working, check its log. Also check that the port is correct: 2222 is the default, but check the config in case it has been changed. Check with netstat to see on which port (and whether) it is listening.
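For example, something like this run on the Jibri machine itself (just a sketch, assuming the default port 2222 and the v1.0 health API; adjust to your config):

# check whether anything is listening on the Jibri HTTP API port (2222 by default)
ss -tlnp | grep 2222

# query the health endpoint locally, so firewalls/security groups are out of the picture
curl -s http://localhost:2222/jibri/api/v1.0/health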


Thank you so much @Yasen_Pramatarov1 for your response. I can connect after opening port 2222/tcp on that Jibri server.
One quick query. As per your suggestion I have prepared a shell script that runs on a scheduler, and with it I can start and stop the Jibri instances on demand. But I am facing one issue: I create the auto-scalable Jibri instances through the AWS API by passing an instance name, and I stop an instance using that same name. The problem is, suppose a Jibri recording is running on the instance I want to stop, how can I check that? I only have the instance name here, not the IP. Can you please shed some light on this?
Thanks,

One way to do this is to rely on scale-in protection in AWS. Here is what I try to implement most of the time, when the cloud tools permit it (and the AWS ones do).

For example, set the scaling group so that all instances are protected from scale-in termination by default. Then, from inside each instance, run a cron check to see whether the instance is currently recording or has been idle for a long time. If it has been idle, call the AWS API to remove the termination protection from the current instance, and it will get terminated by the scaling group. If it is currently recording, or has been idle for too short a time (use a counter in a file, for example), exit the script and leave the default termination protection in place.

This way you control the termination from inside the instance itself and the termination on scale-in will be faster. Otherwise, if you rely only on external checks (from the Meet server, etc.), you risk slower checks, and the system may start a recording right before the scripts decide to terminate the instance.
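For the "is it currently recording" part, a minimal sketch, assuming a Jibri version whose local health endpoint exposes a busyStatus field (verify the exact JSON on your version first):

busy=$(curl -s http://localhost:2222/jibri/api/v1.0/health | jq -r '.status.busyStatus')
if [ "$busy" = "BUSY" ]; then
    # a recording or live stream is in progress, so keep the scale-in protection
    exit 0
fi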

Plus, when you are calling the api from inside the instance, you don’t need its IP, you just do something like this:

aws autoscaling set-instance-protection --instance-ids $instanceid --auto-scaling-group-name $ascgroupname --no-protected-from-scale-in

Of course, first get the $instanceid and $ascgroupname

instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

ascgroupname=$(aws autoscaling describe-auto-scaling-instances --instance-ids="$instanceid" | jq '.[] | .[] | .AutoScalingGroupName' -r)

…or something along that line :slight_smile:
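To tie it together, a minimal sketch of such a cron script, meant to run every few minutes from inside the Jibri instance. The counter file path, the idle limit and the busyStatus check are assumptions to adapt to your own setup:

#!/bin/bash
# Sketch: runs from cron inside each Jibri instance, e.g. every 5 minutes.
# Removes the scale-in protection only once Jibri has been idle long enough.

IDLE_FILE=/var/tmp/jibri-idle-count   # hypothetical counter file
IDLE_LIMIT=6                          # e.g. 6 runs x 5 minutes = 30 minutes idle

busy=$(curl -s http://localhost:2222/jibri/api/v1.0/health | jq -r '.status.busyStatus')

if [ "$busy" = "BUSY" ]; then
    # currently recording or streaming: reset the idle counter, keep protection
    echo 0 > "$IDLE_FILE"
    exit 0
fi

count=$(( $(cat "$IDLE_FILE" 2>/dev/null || echo 0) + 1 ))
echo "$count" > "$IDLE_FILE"

if [ "$count" -lt "$IDLE_LIMIT" ]; then
    # idle, but not for long enough yet
    exit 0
fi

# Idle long enough: let the autoscaling group terminate this instance
instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ascgroupname=$(aws autoscaling describe-auto-scaling-instances --instance-ids="$instanceid" | jq -r '.[] | .[] | .AutoScalingGroupName')

aws autoscaling set-instance-protection --instance-ids "$instanceid" --auto-scaling-group-name "$ascgroupname" --no-protected-from-scale-in

Since IDLE_LIMIT is counted in cron runs, it translates directly into how long an instance has to stay idle before it becomes eligible for scale-in.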

Hi @kpeiruza, can you please update your repo jitsi-images with the latest stable Jibri, as it has some new features?
Thanks

In a few days :slight_smile:


It has just been updated, based on the latest GitHub release plus 2 custom patches: PulseAudio and an additional configuration parameter to set the XMPP port of your Prosody.

Try kpeiruza/jibri:20210112

Best regards,

Kenneth



Can you please update your repo too?

Thanks a lot !!

Hi @kpeiruza,
I have made some changes to the Dockerfile and the post-recording script. It would be great if you could update your repository as well. GitHub - kpeiruza/jitsi-images: Jitsi Images adapted to Kubernetes
Thanks !!