After much load testing, we have landed on a maximum number of conferences/users that we want to support per shard.
We are now trying to finalize how best to scale shards using K8s.
Do you all have any examples or suggestions of how to do this?
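For reference, the rough shape we have in mind (names here are placeholders, not taken from any repo in this thread) is to keep each shard as its own group of StatefulSets and let an HPA grow the videobridge pool inside a shard; adding a whole new shard would still mean stamping out another set of manifests. A minimal sketch, assuming a shard-0-jvb StatefulSet in a jitsi namespace:

# Hypothetical HPA for the videobridges of one shard: scales the JVB StatefulSet on CPU.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: shard-0-jvb
  namespace: jitsi
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: shard-0-jvb
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60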
@wolfm89 Hi Wolfgang, thank you so much for this post. Your repo really helped us get our Kubernetes journey started. Great work, soldier!
I've made some improvements and integrated Jibri and Jigasi into this architecture. I did not use the same project layout that you used with kustomize, but it's working. I wanted to contribute back to your base repo, but I'm not sure how open you are to that.
Also, the Docker community is using newer versions of the containers. I tested up to release 5076, which is the one I'm using, and it looks OK.
I have a hard time understanding the part of your architecture that replicates Prosody inside every shard. I couldn't get your setup to work that way, so I am keeping only one Prosody and one Jicofo and scaling up the videobridges. Is that the intended behavior of your architecture? In many places I see you referring to XMPP_SERVER as prosody-0. For Jibri to work I had to adapt it to use the FQDN of the ClusterIP service of the Prosody node:
XMPP_SERVER: shard-0-prosody.jitsi.svc.cluster.local
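For concreteness, here's roughly how that looks in my Jibri manifest; the domain values are just the docker-jitsi-meet defaults and the service name is from my own setup, so treat them as placeholders:

# Hypothetical excerpt from the Jibri container spec: point Jibri at the
# shard's Prosody ClusterIP service instead of prosody-0.
env:
  - name: XMPP_SERVER
    value: shard-0-prosody.jitsi.svc.cluster.local
  - name: XMPP_DOMAIN
    value: meet.jitsi
  - name: XMPP_AUTH_DOMAIN
    value: auth.meet.jitsi
  - name: XMPP_INTERNAL_MUC_DOMAIN
    value: internal-muc.meet.jitsi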
Hi Arthur,
How do you organize Jibri? Is it on a separate node?
Hi @Arthur_Morales,
Great knowledge!
It would be great if you could share your public repo so we can check your environment.
Thanks in advance!
Hi @aljen,
Great work, brother!
It would be great if you could share your public repo so we can check your environment.
Thanks in advance!
Hello @sunilkumarjena21, can you share your GitHub repo link that I can refer to for deploying Jitsi using k8s? It would be a great help.
Is it possible to deploy this on bare-metal servers (I have 2 machines with 32 CPUs and 32 GB of memory)? Which k8s distro do you recommend for deploying it: microk8s, k3s, or something else?
I don't have a public repo for this config yet. Can I make a PR to your repo that includes some code for Jibri and Jigasi?
No, it's not on a separate node. I set it up as a StatefulSet that has access to the audio system on the Kubernetes worker nodes. I want to share it with you all, but it is part of a bigger project; does anyone have a repo that I can submit the Kubernetes configs to? I also configured Jigasi with Voximplant and it's working like a charm.
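To give an idea of what "access to the audio system" means in practice, here is a rough excerpt of the kind of pod spec I mean (image tag and names are illustrative); the key parts are the /dev/snd hostPath and the capabilities the docker-jitsi-meet Jibri image expects, and the snd-aloop module has to be loaded on the worker nodes:

# Hypothetical excerpt of a Jibri StatefulSet pod spec: mount the host's
# ALSA loopback devices so the container can capture conference audio.
containers:
  - name: jibri
    image: jitsi/jibri:stable-5076
    securityContext:
      capabilities:
        add:
          - SYS_ADMIN
          - NET_BIND_SERVICE
    volumeMounts:
      - name: dev-snd
        mountPath: /dev/snd
volumes:
  - name: dev-snd
    hostPath:
      path: /dev/snd
      type: Directory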
Hi @Arthur_Morales,
Here is our public repo.
Check if it's suitable for your PR, or do you need a more generic repo?
Thanks!
Hi @sunilkumarjena21, since we're all working on a fork of the base project posted in this thread, I forked the base hpi-schul-cloud/jitsi-deployment and added jibri-deployment and jigasi-deployment folders, which contain my code for both of these components. I'd be happy to give you commit access if you want to contribute. Thanks.
Here's the full repo.
To deploy the Jigasi and Jibri components, you can do:
echo "===================================="
echo "Installing Jibri"
pushd jibri-deployment
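# Delete any existing resources first so they are re-created cleanly from the manifests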
kubectl delete -f .
kubectl apply -f .
popd
echo "===================================="
echo "Installing Jigasi"
pushd jigasi-deployment
kubectl delete -f .
kubectl apply -f .
popd
There's also a PR from the amostech repo to the main hpi-schul-cloud repo:
Has anyone successfully deployed this to Amazon EKS? If so, which files in the project need to be modified, other than the Secrets? Also, what does your node group look like?