Jitsi Octo configuration

Hi all, I'm trying to test Octo functionality with the latest Docker images and websockets enabled. The goal is to have multiple Kubernetes clusters, each running a single shard.

1/ There is no sticky-session mechanism yet.
2/ Only the LB ports TCP/80/443, the JVB UDP media ports 30000-3000x, and the Octo port are open to the internet.
3/ Unlike https://meet.jit.si, we are not exposing JVB TCP/443 to the internet.
4/ Since I don't need the geo mechanism, the web config.js is not configured to provide information in deploymentInfo (should it be?).
5/ For testing, OCTO_BRIDGE_SELECTION_STRATEGY is set to SplitBridgeSelectionStrategy.
6/ Each shard is on a different Kubernetes cluster, in a different AZ or region.
7/ Currently, we have a single domain shared by every JVB, websocket endpoint, and shard.
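For context, in the docker-jitsi-meet images the selection strategy from point 5 is passed to the jicofo container as an environment variable; a minimal sketch of how that might look in a compose override (service name and layout assumed):

```yaml
# Sketch only: docker-compose override for the jicofo service (assumed layout).
services:
  jicofo:
    environment:
      # Splits each conference across all operational bridges, for testing Octo.
      - OCTO_BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy
```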

The current architecture is:

My goal is to run an Octo configuration so I can test a big conference with users split across different shards in the same region, region1 (no geo needed).

The current behavior is that some participants end up in the same room, while others land in different rooms.
Sometimes participants in the same room lose connectivity after a while (due to the websockets?).

Could you help me understand what I'm missing? I thought Octo would ensure connectivity between shards, but it seems I need to add some stickiness somewhere after all. At the cloud LB? The ingress? HAProxy? Elsewhere?

Thank you for your answers

One conference needs to stick to one shard; you cannot have one conference split between shards. A conference can, however, be split between multiple bridges within that shard.


Thank you for your reply. So it's impossible, even with a sticky mechanism, to have a conference split between shards?

How can we test more than 500 users in the same region, or even many big conferences in the same region, if we have a single jicofo/prosody? (Isn't that the reason for having multiple shards?)

I already had multiple JVBs behind one shard, but as you know, for availability and to support more users I would like to have multiple shards in one region sharing the same domain. Do you have any advice on how to achieve this? Or what is wrong with the current architecture, in your view?

Yes, that is the reason: to scale and serve many participants from your service.

And yes, you can test a 500-participant call in one region.

What you need is HAProxy in place of your cloud balancer, sticking sessions to shards based on the URL parameter room.
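A minimal sketch of that idea, assuming two shard backends behind one domain (server names and addresses are placeholders):

```
# Sketch only: stick each conference to one shard by hashing the ?room= parameter.
frontend meet_https
    bind :443 ssl crt /etc/haproxy/certs/meet.pem
    default_backend shards

backend shards
    balance url_param room      # same room value always hashes to the same shard
    hash-type consistent        # keeps the mapping stable if a shard goes down
    server shard1 10.0.1.10:443 ssl verify none check
    server shard2 10.0.2.10:443 ssl verify none check
```

With consistent hashing on the room parameter, every request for a given conference lands on the same shard, while different conferences still spread across shards.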


What you need is HAProxy in place of your cloud balancer, sticking sessions to shards based on the URL parameter room.

OK, I got it. So is having stickiness at HAProxy enough, or do I also need some other configuration that I didn't see in octo.md?
For example, I saw parameters of this kind that I didn't find explained in octo.md. Should I configure them?

org.jitsi.videobridge.rest.COLIBRI_WS_DOMAIN={{ $WS_DOMAIN }}
org.jitsi.videobridge.rest.COLIBRI_WS_SERVER_ID={{ .Env.WS_SERVER_ID }}
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME={{ .Env.XMPP_SERVER }}
org.jitsi.videobridge.xmpp.user.shard.DOMAIN={{ .Env.XMPP_AUTH_DOMAIN }}
org.jitsi.videobridge.xmpp.user.shard.USERNAME={{ .Env.JVB_AUTH_USER }}
org.jitsi.videobridge.xmpp.user.shard.PASSWORD={{ .Env.JVB_AUTH_PASSWORD }}
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS={{ .Env.JVB_BREWERY_MUC }}@{{ .Env.XMPP_INTERNAL_MUC_DOMAIN }}
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME={{ .Env.HOSTNAME }}

Another question: I read that having the Octo port publicly exposed is unsafe (I guess because traffic between JVBs is unencrypted). How large is the attack surface if we leave it open? What kinds of sensitive things could an attacker do?

Thanks a lot.

A Jitsi conference is nothing more than an XMPP room. As Prosody (the XMPP server) has no clustering feature, you can't have a room shared between several Prosody instances. A "shard" is jargon for 1 Prosody + 1 Jicofo (and 1 HTTP frontend).


After writing this, I got curious and looked at the Prosody tracker; there is an open issue for clustering. Clustering should be available in Prosody 1.0. Unfortunately, there is no ETA for Prosody 1.0 :wink:
