Best settings for OCTO in a multi-site corporate deployment

Hello guys,

We have 2 corporate sites in 2 different cities.
We do not intend to use Jitsi as a cloud service, only as an on-premises service.

City A WAN - 10 Gbps fiber
City B WAN - 10 Gbps fiber
City A <==> City B - 10 Gbps direct dark fiber (all traffic encrypted over an IPsec VPN)
The two cities are 500 km apart.

I have set up:

  • 1 JMS in City A: 4 vCPUs and 8 GB RAM
  • 4 JVBs in City A: 8 vCPUs and 32 GB RAM
  • 4 JVBs in City B: 8 vCPUs and 32 GB RAM

Should I use IntraRegionBridgeSelectionStrategy or RegionBasedBridgeSelectionStrategy?

I don’t know which is the better choice: having only one region, or two different regions, for example regionCityA and regionCityB.

The latency between City A and City B is 6 ms, and both sites go through the same ISP datacenter (same physical link, just a different VLAN).
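
If I went with two regions, my understanding is that each JVB would advertise its own region in its properties file, and the RegionBased strategy would then try to keep each client on a bridge in its region. A rough sketch of what I have in mind (region names are just examples):

# JVBs in City A (/etc/jitsi/videobridge/sip-communicator.properties)
org.jitsi.videobridge.REGION=regionCityA

# JVBs in City B
org.jitsi.videobridge.REGION=regionCityB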

For now I scale the JVBs “manually” for 120 users globally, with a maximum of 20 users per meeting, but peaks of around 100 users in a single conference are possible.

I’m looking to do it automatically (I need to learn Terraform and Ansible / DevOps deployment).

Also, a quick question: is it possible to enable HA for the JMS, to prevent any Jitsi downtime if a failure occurs on the hypervisor node?

If I choose RegionBasedBridgeSelectionStrategy:

I need to define regionCityA and regionCityB, but how do I configure the client to define its region?
How will Jitsi know which region the client is connecting from?
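
From what I have read so far, the client’s region seems to be declared through deploymentInfo in the meet config.js, but I am not sure this is the right way (the values below are just a guess):

// /etc/jitsi/meet/meet.company.lan-config.js
deploymentInfo: {
    region: "regionCityA",      // region of this deployment
    userRegion: "regionCityA",  // region reported for clients served by this config
},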

Thank you

I tried these settings:

On each JVB, I set octo.PUBLIC_ADDRESS to the private IP of the JVB:

#the address to bind to locally
org.jitsi.videobridge.octo.BIND_ADDRESS=0.0.0.0
# the address to advertise (in case BIND_ADDRESS is not accessible)
org.jitsi.videobridge.octo.PUBLIC_ADDRESS=10.100.120.123
# the port to bind to
org.jitsi.videobridge.octo.BIND_PORT=4096
# the region that the jitsi-videobridge instance is in
org.jitsi.videobridge.REGION=region1

Restarted the 2 JVBs.

Then edited /etc/jitsi/jicofo/jicofo.conf on the JMS:

Added this block inside the jicofo {} block:

octo: {
   enabled = true
   id = "1"
}

And added this line inside the bridge {} block:
selection-strategy = IntraRegionBridgeSelectionStrategy
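
For clarity, this is how I understand the two fragments fit together in /etc/jitsi/jicofo/jicofo.conf (the id value is just the one I picked):

jicofo {
  octo {
    enabled = true
    id = "1"
  }
  bridge {
    selection-strategy = IntraRegionBridgeSelectionStrategy
  }
}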

Finally, in /etc/jitsi/meet/meet.company.lan-config.js:

deploymentInfo: {
        environment: "meet.company.lan",
        shard: "shard",
        region: "region1",
        userRegion: "region1",
    },

Result:

When I check the real-time logs of both JVBs and launch a 3-person call, only one JVB is used.
What is wrong with my configuration?

Since the latency is so small between the cities, you could simplify things by just using IntraRegion. Each conference will only use bridges in one city, and you don’t need to deal with any Octo traffic between the two cities.

Regarding HA, if a bridge goes down Jicofo will automatically reinvite users to another bridge. At the moment it forces a reload of the page on the client side, so it takes some seconds.

When I check the real-time logs of both JVBs and launch a 3-person call, only one JVB is used.
What is wrong with my configuration?

Why did you expect more than one JVB to be used when you have only 3 people in the call? If you want to test Octo with a small number of participants, use the SplitBridge strategy; it will put every participant on a different bridge for testing purposes.
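
If you want to try that, something along these lines in the bridge block of jicofo.conf should do (just swap it back afterwards):

bridge {
  selection-strategy = SplitBridgeSelectionStrategy
}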

Since the latency is so small between the cities, you could simplify things by just using IntraRegion. Each conference will only use bridges in one city, and you don’t need to deal with any Octo traffic between the two cities.

That is what I thought, thanks for confirming it.

Regarding HA, if a bridge goes down Jicofo will automatically reinvite users to another bridge. At the moment it forces a reload of the page on the client side, so it takes some seconds.

I was talking more about the JMS itself: if the JMS goes down, the whole Jitsi system fails. Would a system like Pacemaker work? (1 VIP, 2 nodes: master/slave)

Why did you expect more than one JVB to be used when you have only 3 people in the call? If you want to test Octo with a small number of participants, use the SplitBridge strategy; it will put every participant on a different bridge for testing purposes.

I did not find this information on the OCTO page of the GitHub project.

I was piecing the info together from forum threads and by trying the possible settings.

And it works like a charm with the Split strategy; I will try with more users with the Intra strategy.

Thank you for your help.

Something like this can work, but note that neither Jicofo nor Prosody has any clustering support. If you have a hot standby shard, it must remain completely separate and unaware of the active shard, and users would reconnect when a failover happens (because the new Prosody doesn’t have any of the state that was on the old Prosody).

What we do is run each shard’s Jicofo & Prosody as containers in a Kubernetes pod with quite rapid health checks; if they crash or the underlying node fails, a new pod is launched quickly. It’s rare.

We also have separate XMPP servers for Jicofo <-> JVB communication. For that you can actually have some proper load balancing and HA, because JVB supports connecting to multiple XMPP servers. So you can have multiple separate Prosody instances for your internal XMPP, have all JVBs connect to all of them, and load balance your Jicofo shards across them.
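
As a rough sketch of what that looks like on the JVB side (hostnames, credentials and nicknames below are made up), each internal XMPP server gets its own MUC client block in the bridge’s properties:

# /etc/jitsi/videobridge/sip-communicator.properties
org.jitsi.videobridge.xmpp.user.xmpp1.HOSTNAME=xmpp1.company.lan
org.jitsi.videobridge.xmpp.user.xmpp1.DOMAIN=auth.meet.company.lan
org.jitsi.videobridge.xmpp.user.xmpp1.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.xmpp1.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.xmpp1.MUC_JIDS=JvbBrewery@internal.auth.meet.company.lan
org.jitsi.videobridge.xmpp.user.xmpp1.MUC_NICKNAME=jvb-cityA-1

org.jitsi.videobridge.xmpp.user.xmpp2.HOSTNAME=xmpp2.company.lan
org.jitsi.videobridge.xmpp.user.xmpp2.DOMAIN=auth.meet.company.lan
org.jitsi.videobridge.xmpp.user.xmpp2.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.xmpp2.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.xmpp2.MUC_JIDS=JvbBrewery@internal.auth.meet.company.lan
org.jitsi.videobridge.xmpp.user.xmpp2.MUC_NICKNAME=jvb-cityA-1

Since the bridge joins the brewery MUC on every server, each Jicofo shard can see it through whichever XMPP server that shard uses.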

I think this would be overkill. Something like keepalived would be simpler and better, as you would lose all sessions anyway during an interruption.
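
A minimal VRRP sketch for keepalived, assuming two JMS nodes sharing one virtual IP (interface name and addresses are placeholders):

# /etc/keepalived/keepalived.conf on the primary JMS
vrrp_instance JMS_VIP {
    state MASTER            # use BACKUP and a lower priority on the standby node
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.100.120.10/24
    }
}

Point the clients at the virtual IP; when the primary node fails, the standby takes over the address and users reconnect to it.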

Thank you @jbg and @emrah for the knowledge!

I’m now much more aware of how it works.

Next steps for my case:

  • DevOps for autoscaling
  • Try out your shard-based setup

Thank you all
