Separating Jitsi core components (Meet, Prosody, Jicofo) onto separate servers (not containerized/dockerized)?

I keep seeing references to a layout with most of the Jitsi core components on one system, and the JVBs on a separate system (or multiple systems to scale).

I am having trouble finding information on breaking these core components out onto a separate server each. They do seem to be separated like this in the containerized Docker versions, but I would like to test across separate servers first before diving into the containerized versions (to gather baseline performance data on different layouts).

Here is what I keep seeing (images not included):

Are there instructions anywhere for nginx + jitsi-meet on server #1, Prosody on server #2, Jicofo on server #3, and JVB on server #4?

If not, is this problematic? Could someone offer some pointers on how to make this work?
Thanks!

The JVB–Prosody connection is over TCP/5222, and nginx connects to the JVBs' TCP/9090 for the bridge websocket signaling.
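
In other words, whichever host runs Prosody has to be reachable on 5222 from the Jicofo and JVB machines (and on 5280 from the nginx machine for BOSH/websockets). As a sketch, the relevant global options in /etc/prosody/prosody.cfg.lua (a Jitsi-generated config normally has equivalents already):

```
-- global section of prosody.cfg.lua
interfaces = { "*" }            -- listen on all interfaces, not only loopback
c2s_ports = { 5222 }            -- jicofo and the JVBs connect here as XMPP clients
c2s_require_encryption = false  -- typical for the internal auth domain in Jitsi setups
http_ports = { 5280 }           -- BOSH / websocket port that nginx proxies to
```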

There is no point in separating #1, #2 and #3 :slight_smile:
We run meet.jit.si like the layout in your first set of images, with one difference: we run two prosody servers, one for the client connections and one for the jvb and the rest of the components (like jibri) to connect to.

Thank you for the feedback @damencho, very much appreciated as always.

I understand this isn't ideal for production. I was hoping to research each of the components without resource interference from them overlapping on the same system, and I hoped that separating their resources more cleanly would make it easier to stress-test each component in a more granular way.

I have seen many discussion threads here and elsewhere where people were misled about where the load that was maxing out their server actually came from, and I had hoped that being able to separate the resources further would give some useful research insight.

So, while less than ideal, is it possible to make this work across separate servers? Has anyone posted how to do this previously? If not, would it be okay to get some guidance on how to try to make it work?

That is good to know about the two Prosodys; I'll have to remember that for the production designs.

Thanks kindly!

Could you please explain it a bit more, e.g. whether these two prosody servers talk to each other as well, and how you manage the configuration at the jicofo, jvb and jibri level?

Yeah, for testing that is not a problem … All communication between the components goes over the network, so you can scatter them as you want.
I can help later with the configs and send you some links.
The non-optimal part is, for example, prosody alone on a VM: you need a high-performance core, but just one … :slight_smile:

Nginx communicating with the clients' prosody:
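
A rough sketch of that nginx block when prosody lives on another machine (the 10.0.0.x address and hostnames are placeholders for your own setup):

```
# BOSH and XMPP-over-websocket proxied to a Prosody on a separate host
location = /http-bind {
    proxy_pass http://10.0.0.2:5280/http-bind;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
}

location = /xmpp-websocket {
    proxy_pass http://10.0.0.2:5280/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    tcp_nodelay on;
}
```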

Nginx sending client websocket connections to the bridges on port 9090:
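
Roughly like this for a single bridge (again a sketch; 10.0.0.4 is a placeholder for the JVB host, and "jvb1" has to match the websocket server-id configured on that bridge):

```
# colibri websockets from the clients, forwarded to the bridge on port 9090
location ~ ^/colibri-ws/jvb1/(.*) {
    proxy_pass http://10.0.0.4:9090/colibri-ws/jvb1/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
```

With more bridges you add one such location block per bridge, each with its own server-id and host, as in the docker example below.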

Example with multiple bridges, from docker: docker-jitsi-meet/meet.conf at db3d790e52a741f9ce4316a058fbafc7e30e98b2 · jitsi/docker-jitsi-meet · GitHub

Jicofo connecting to client prosody in /etc/jitsi/jicofo/config on port 5222: jicofo/postinst at f654702737e62e0a359025440b57975f24e31bfa · jitsi/jicofo · GitHub
Or jicofo/reference.conf at f654702737e62e0a359025440b57975f24e31bfa · jitsi/jicofo · GitHub

Jicofo connecting to the jvb prosody: jicofo/reference.conf at f654702737e62e0a359025440b57975f24e31bfa · jitsi/jicofo · GitHub
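
Putting the two together, a sketch of what a split /etc/jitsi/jicofo/jicofo.conf could look like (hostnames, domains and passwords are placeholders, not taken from the linked files):

```
jicofo {
  xmpp {
    # connection to the client prosody (where the conference rooms live)
    client {
      hostname = "10.0.0.2"
      port = 5222
      domain = "auth.meet.example.com"
      username = "focus"
      password = "__client_prosody_secret__"
    }
    # connection to the second prosody used for the jvb/jibri brewery rooms
    service {
      enabled = true
      hostname = "10.0.0.3"
      port = 5222
      domain = "auth.jvb.meet.example.com"
      username = "focus"
      password = "__service_prosody_secret__"
    }
  }
  bridge {
    # brewery MUC the bridges join; must match what the JVBs are configured with
    brewery-jid = "jvbbrewery@internal.auth.jvb.meet.example.com"
  }
}
```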

Jvb connecting to the jvb prosody:
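
A matching sketch for /etc/jitsi/videobridge/jvb.conf on the bridge host (same placeholders; older installs keep these settings in sip-communicator.properties instead):

```
videobridge {
  apis {
    xmpp-client {
      configs {
        # one entry per XMPP server this bridge should register with
        jvb-prosody {
          hostname = "10.0.0.3"        # the jvb prosody
          domain = "auth.jvb.meet.example.com"
          username = "jvb"
          password = "__jvb_secret__"
          muc_jids = "jvbbrewery@internal.auth.jvb.meet.example.com"
          muc_nickname = "jvb-on-server-4"
        }
      }
    }
  }
  websockets {
    enabled = true
    domain = "meet.example.com:443"
    server-id = "jvb1"                 # matches the /colibri-ws/jvb1/... location in nginx
  }
}
```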

It is jicofo that connects to both prosody servers: on the client one it listens for clients joining a room/conference, and through the brewery rooms on the jvb prosody it selects a bridge/jibri/jigasi when one is needed in a conference.

Thanks a lot @damencho

Very helpful as always @damencho, thank you so very much! I'll report back on my progress, any issues, and any data from testing. I'll also detail the steps I take as I progress, so it will be easier to debug if I go awry.
Thanks!
