[Solved] Scaling Jibri

same credential but different nickname

Is there any way I can run a script before Jibri starts recording? There is a post_recording_script; is there a pre_recording_script? Or is there any way to determine that a Jibri instance is about to start recording?

I don't think such a script exists. But perhaps you could replace the Google Chrome binary with a wrapper script that runs your pre/post scripts around the actual Chrome call?
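For example, something like this (a sketch, not a tested solution: the hook paths are hypothetical, and it writes to /tmp here purely for illustration; in practice you would move the real binary aside and install the wrapper at the original Chrome path):

```shell
# Sketch of a Chrome wrapper script. First move the real binary aside, e.g.:
#   mv /usr/bin/google-chrome /usr/bin/google-chrome-real
# then install this script in its place. Hook paths below are hypothetical.
cat > /tmp/google-chrome <<'EOF'
#!/bin/sh
# Pre-recording hook, if present
[ -x /etc/jitsi/jibri/pre_recording.sh ] && /etc/jitsi/jibri/pre_recording.sh
# The actual Chrome invocation, with whatever flags Jibri passes
/usr/bin/google-chrome-real "$@"
status=$?
# Post-recording hook, if present
[ -x /etc/jitsi/jibri/post_recording.sh ] && /etc/jitsi/jibri/post_recording.sh
exit $status
EOF
chmod +x /tmp/google-chrome
```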

Hello All,

I am also facing the same issue. We used AWS auto-scaling to launch additional instances for parallel recording, but because of the identical "nickname" the recordings are not working.

It would be great if anyone could help resolve this issue.

Thank You.

Why not just edit the config file on boot? Then you'll have unique nicknames. You can use, for example, the 'uuidgen' command to generate them; UUIDs are reasonably unique, so there should be no collisions.

edit: Or, since you are using AWS, why not just use the EC2 instance ID? It is guaranteed to be unique.

instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
sed -i -e "s/.*\"nickname\":.*/\t\t\"nickname\": \"jibri-$instanceid\"/" /etc/jitsi/jibri/config.json

and then restart Jibri.

or something like that :slight_smile:

You need to make sure the nickname is unique across Jibri hosts. I use Ansible to deploy Jibri when scaling, and configure the FQDN of the host as the nickname:

"nickname": "{{ ansible_fqdn }}"
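For reference, a task to write that value could look roughly like this (a sketch, not Erik's actual playbook: the config path assumes the Debian package layout, and the "restart jibri" handler is assumed to exist elsewhere in the playbook):

```yaml
- name: Set Jibri nickname to the host FQDN
  replace:
    path: /etc/jitsi/jibri/config.json
    regexp: '"nickname": ".*"'
    replace: '"nickname": "{{ ansible_fqdn }}"'
  notify: restart jibri
```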

You could also leverage cloud-init user data, sort of like what @Yasen_Pramatarov1 describes. We do this for our JVB hosts in the autoscaling group:

bootcmd:
    - |
      # single multi-line entry: each bootcmd item runs in its own shell,
      # so the variables must be set and used within the same entry
      HOSTNAME_PREFIX='whatever-prefix-you-prefer'
      INSTANCE_ID=$(/usr/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id)
      echo "${HOSTNAME_PREFIX}-${INSTANCE_ID}" > /etc/hostname
      hostname -F /etc/hostname
      sed -i -e "s/.*\"nickname\":.*/\t\t\"nickname\": \"${HOSTNAME_PREFIX}-${INSTANCE_ID}\"/" /etc/jitsi/jibri/config.json

good luck,

Erik

I have a working setup of Jibri with autoscaling on Kubernetes.

In order to make Jibri work properly on my K8s provider, I needed to get rid of snd-aloop and switch to Pulseaudio.

So far, it has streamed more than 250 meetings, averaging 2 hours per meeting, without issues over the last 4 weeks.

From an architectural point of view, it requires:

  • One kubernetes cluster with a pool able to autoscale
  • A Jibri deployment of my Jibri+Pulseaudio docker image
  • Horizontal Pod Autoscaler

Autoscaling could be improved by fetching more metrics with Prometheus, but so far it always keeps one free Jibri, and it takes around 4 minutes to launch a new Jibri once all of them are busy.
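For reference, a plain CPU-based HPA for such a Deployment could look like the following (a sketch only: the names, replica bounds and utilization target are illustrative, not the poster's actual values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jibri
  namespace: jitsi
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jibri
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```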

I'm using the same deployment on a local 3-node cluster, and I can run more than one Jibri per VM without any reconfiguration.

I'd love to see the Jibri master branch migrated to, or at least supporting, Pulseaudio. Any thoughts on using it?

Hi @kpeiruza,

I am looking for an autoscaling solution with K8s.

Your method seems way easier. Would you mind sharing the Terraform + Ansible configuration files for us to learn from? I'd love to see more detailed steps if those are not available. Thanks, brother.

We're just using Kubernetes, no Terraform and even less Ansible!

Simply a Deployment of Jibri with Pulseaudio, plain HPA and Deployments.

Of course @kpeiruza,
I mean, could you share any steps or YAML configuration files that would let me recreate a similar Jibri-with-Pulseaudio result, or at least get started, brother? Pulseaudio seems awesome.

This may be out of context for Jibri, sorry: do we still need to use coturn (a TURN server)? Since JVB can be scaled to any number of instances in Kubernetes, offloading heavy traffic from JVB to a TURN server seems no longer relevant. What do you think :thinking:

Thank you

First, I patched Jibri to use Pulseaudio, in the file src/main/kotlin/org/jitsi/jibri/capture/ffmpeg/Commands.kt:

fun getFfmpegCommandLinux(ffmpegExecutorParams: FfmpegExecutorParams, sink: Sink): List<String> {
    return listOf(
        "ffmpeg", "-y", "-v", "info",
        "-f", "x11grab",
        "-draw_mouse", "0",
        "-r", ffmpegExecutorParams.framerate.toString(),
        "-s", ffmpegExecutorParams.resolution,
        "-thread_queue_size", ffmpegExecutorParams.queueSize.toString(),
        "-i", ":0.0+0,0",
        "-f", "pulse",
        "-thread_queue_size", ffmpegExecutorParams.queueSize.toString(),
        "-i", "default",
        "-acodec", "aac", "-strict", "-2", "-ar", "44100",
        "-c:v", "libx264", "-preset", ffmpegExecutorParams.videoEncodePreset,
        *sink.options, "-pix_fmt", "yuv420p", "-r", ffmpegExecutorParams.framerate.toString(),
        "-crf", ffmpegExecutorParams.h264ConstantRateFactor.toString(),
        "-g", ffmpegExecutorParams.gopSize.toString(), "-tune", "zerolatency",
        "-f", sink.format, sink.path
    )
}
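For a quick manual sanity check of the Pulseaudio capture inside the container, the command line this patch produces looks roughly like the following (the resolution, framerate, preset, CRF, GOP size and output path are illustrative placeholders, not the values Jibri actually resolves at runtime):

```shell
# Equivalent ffmpeg invocation for a manual test inside the container
# (needs a live X display and a running Pulseaudio daemon to execute).
FFMPEG_CMD="ffmpeg -y -v info \
  -f x11grab -draw_mouse 0 -r 30 -s 1920x1080 \
  -thread_queue_size 4096 -i :0.0+0,0 \
  -f pulse -thread_queue_size 4096 -i default \
  -acodec aac -strict -2 -ar 44100 \
  -c:v libx264 -preset veryfast -pix_fmt yuv420p -r 30 \
  -crf 25 -g 60 -tune zerolatency \
  -f mp4 /tmp/test.mp4"
# Uncomment to actually run it:
# $FFMPEG_CMD
echo "$FFMPEG_CMD"
```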


Then I built jibri.jar and made a new Dockerfile with Pulseaudio running as the user "jibri". You can find it here: https://github.com/kpeiruza/jitsi-images

Once you have it (double-check my GitHub repo, I'm not sure if that's the build currently running in production), you can use this Deployment in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jibri
    flavour: allinone
  name: jibri
  namespace: jitsi
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: jibri
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jibri
      name: jibri
    spec:
      containers:
      - env:
        - name: XMPP_AUTH_DOMAIN
          value: auth.meet.yourdomain.com
        - name: XMPP_INTERNAL_MUC_DOMAIN
          value: internal.auth.meet.yourdomain.com
        - name: XMPP_RECORDER_DOMAIN
          value: recorder.meet.yourdomain.com
        - name: XMPP_SERVER
          value: meet.yourdomain.com
        - name: XMPP_DOMAIN
          value: meet.yourdomain.com
        - name: JIBRI_XMPP_USER
          value: jibri
        - name: JIBRI_XMPP_PASSWORD
          value: RANDOMPASS
        - name: JIBRI_BREWERY_MUC
          value: jibribrewery
        - name: JIBRI_RECORDER_USER
          value: recorder
        - name: JIBRI_RECORDER_PASSWORD
          value: RANDOMPASS
        - name: JIBRI_RECORDING_DIR
          value: /tmp
        - name: JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
          value: /bin/true
        - name: JIBRI_STRIP_DOMAIN_JID
          value: muc
        - name: JIBRI_LOGS_DIR
          value: /tmp
        - name: DISPLAY
          value: ":0"
        - name: TZ
          value: Europe/Madrid
        image: kpeiruza/jibri:fhd
        imagePullPolicy: Always
        name: jibri
        resources:
          limits:
            cpu: 6200m
            memory: 8800Mi
          requests:
            cpu: 3200m
            memory: 2400Mi
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            - SYS_ADMIN
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /dev/shm
          name: shm
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /dev/shm
          type: ""
        name: shm


You probably don't need the /dev/shm mount, privileged mode, or the extra capabilities. Those were inherited from the former docker-jibri version; once you're using Pulseaudio, you can probably get rid of them.

Regards!
