Load balancing with Octo + tracking feature

Dear All,
Recently we succeeded in configuring the Octo feature. In addition, we put several Videobridges in one region to see how a conference is routed when participants are mapped to that region. In this context, I would like to ask the following questions:
- Could you point me to a tool for tracing the path of an Octo-based conference? I imagine a screen showing participant A connecting to the Videobridge in RegionA, which in turn connects to the Videobridge in RegionB to reach participant B, with each segment of the connection annotated with its up/down bandwidth. Does such a tool exist?
- Concerning load balancing within an Octo region: according to https://github.com/jitsi/jicofo/commit/51e74b2f32f4e1da1a345c5105fda35fa2a2f684, Videobridge bitrate is the determining factor for distributing conferences between Videobridges of the same region. For an unknown reason, we observed that of our 3 Videobridges in the same region, one was never assigned any conference. Can we switch to a simpler strategy, such as round-robin, as in the non-Octo version?
Many thanks for your help


Unfortunately we don’t have a tool to visualize the topology of a conference.

Load balancing between bridges in the same region is always bitrate-based, even in the non-Octo case. The code is here, in case you want to modify it:
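To make the idea concrete, here is a minimal sketch of lowest-bitrate selection. This is illustrative only: the class and method names are invented and this is not jicofo's actual code, which also weighs other factors.

```java
import java.util.Map;

// Illustrative sketch (not jicofo's implementation): among the bridges of a
// region, pick the one whose last reported total bitrate is lowest.
public class BridgeSelector {
    // Maps bridge JID -> last bitrate (Kbps) reported in its statistics.
    // Note: a bridge that never reports stats has no entry here and would
    // never be selected, which matches the "missing stats" failure mode
    // discussed later in this thread.
    public static String selectLeastLoaded(Map<String, Long> bitrateByBridge) {
        return bitrateByBridge.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    public static void main(String[] args) {
        Map<String, Long> stats = Map.of(
                "jvb1@conference.example.com", 12_000L,
                "jvb2@conference.example.com", 3_000L,
                "jvb3@conference.example.com", 8_000L);
        // Prints jvb2@conference.example.com, the least loaded bridge.
        System.out.println(selectLeastLoaded(stats));
    }
}
```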

I suspect the reason one bridge is not getting any traffic is a misconfiguration somewhere.



Many thanks @Boris_Grozev. It is a pity that we cannot explicitly visualize the path of an Octo-based conference spanning more than 2 Videobridges.
Concerning the strategy for selecting the Videobridge, thank you for the link to the source. I will double-check our config as you recommended.
Best regards

I forgot to mention: you can actually see the region that each client is connected to in the UI if you hover over the “gsm bar” button. The bridges are connected in a full mesh, and there is one bridge per region. You can also see the bitrates.


Thanks @Boris_Grozev for the hint. I knew about that feature before. But with the current version, I get the following message while hovering over the “gsm bar”: “Connected to RegionA from RegionB”. Does that mean the client in question belongs to RegionB?

This means that the client is in RegionB, but it was connected to a bridge in RegionA. This would happen if Octo is not enabled, or if jicofo was not able to use a bridge in RegionB for some reason.


Dear @Boris_Grozev, I am bothering you again about the load balancing of bridges in the same region.
With jicofo_1.0-458-1 and jitsi-videobridge_1109-1, our three bridges seem to share the load evenly.

But with jicofo_1.0-481-1 and jitsi-videobridge_1124-1, we see an asymmetric load distribution.

It seems that the rightmost bridge always has traffic, while the other two take turns.
According to your explanation, we probably have a misconfiguration somewhere. Can you give me some more hints to debug the problem?
Many thanks


Are you using XMPP component connections or client (user) connections for the bridges? Can you share your bridge configuration, specifically these properties:
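The property list itself did not survive in this archive. For pubsub-based statistics, the relevant bridge-side properties are typically the following (names from the jitsi-videobridge documentation of that era; values are placeholders):

```properties
# sip-communicator.properties on the bridge (illustrative values)
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=pubsub
org.jitsi.videobridge.PUBSUB_SERVICE=pubsub.example.com
org.jitsi.videobridge.PUBSUB_NODE=sharedStatsNode
```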

The only theory I have is that the old load-balancing logic used to work fine even when jicofo doesn’t have stats coming from the bridges, but the new one (introduced in jicofo#358, included since release 468) doesn’t. If the configuration for statistics is not right, this could happen with the XMPP component connection (but not with MUC).



Dear @Boris_Grozev,
Please have a look at the config of our three bridges with the load-balancing problem (I just masked some IP addresses). I also include the related jicofo config. Do you see any abnormal setting?

The imbalance seems to be permanent, like the one we got today:

You can see it during the period 21h-22h.
By the way, I don’t understand what you meant by “using XMPP components or user connection for the bridges”.
As for the missing data from the bridges (causing the wrong load balancing in the new version), how can we reduce its impact?
Many thanks

The configuration seems correct. Can you check your bridge logs for any failures to publish to pubsub?


I will try to double-check the bridge logs.
By the way, we are trying to connect the bridges using MUC, in the hope that the load balancing will be better, as per your previous answer.
In MUC mode, is it true that we should keep all the settings from XMPP mode and then add the MUC settings? In other words, do the bridges communicate via both the XMPP component AND MUC? I ask because with only the MUC settings, jicofo never recognizes that there is a bridge in the given MUC room.
Can you give more detail on the MUC mode of the bridge, please?
I greatly appreciate your help.


You do NOT need the XMPP component connection (--apis=xmpp) or the pubsub statistics transport if you are using MUC. You do need to configure the MUC transport for statistics:
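The exact snippet was lost here; for reference, the bridge-side MUC configuration of that era looked like the following (in the bridge's sip-communicator.properties; host, domain, and credential values are placeholders):

```properties
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=xmpp.example.com
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.example.com
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=unique-bridge-id
```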

and the MUC in jicofo:
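The jicofo-side snippet was also lost; it amounted to pointing jicofo at the same brewery MUC (in jicofo's sip-communicator.properties; the MUC JID is a placeholder and must match the bridges' MUC_JIDS value):

```properties
org.jitsi.jicofo.BRIDGE_MUC=JvbBrewery@internal.auth.example.com
```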


Hi @Boris_Grozev.
I also have the same problem with MUC as above; can you see if I missed anything? Here it is:

Thanks.

Dear @Boris_Grozev,
Not finding any abnormal events (at least to my eyes :slight_smile: ) in the log file, we gave up on XMPP and configured the bridges in MUC mode to see whether the load balancing is better.
Thanks to your help, we finally succeeded in configuring it that way. We can see 2 bridges in MUC mode as follows:

Then we observed two things:
- The addresses of the bridges are not as specified in MUC_NICKNAME, but some strange series of digits, as shown above. How can we change these names?
- In our first test, the load balance seems quite symmetric between bridges. I wonder: what are the other advantages/disadvantages between the XMPP and MUC modes of bridge communication?
Many thanks for your help


The bridges have a randomly generated JID, which is what you see on your screenshot. We just recently changed jicofo to use the MUC occupant JIDs instead, so you should see what you expect if you update jicofo.

The advantage of using the MUC client is that our implementation allows one jitsi-videobridge to connect to multiple servers.


Thanks @Boris_Grozev. I will verify the version of jicofo
Best wishes

Dear @Boris_Grozev, I didn’t quite catch your point in your previous reply. You wrote “The advantage of using the MUC client is that our implementation allows one jitsi-videobridge to connect to multiple servers”. The servers here are the jicofo servers, aren’t they?

Many thanks

Dear @Boris_Grozev,
Back to you again with the load balancing. We finally succeeded in configuring the bridges with MUC. The load balancing is somewhat better than with XMPP on the same version (jicofo_1.0-481-1 and jitsi-videobridge_1124-1), but it does not reach the same level of symmetry as the old version with XMPP (jicofo_1.0-458-1 and jitsi-videobridge_1109-1).

We have 5 bridges configured for the same region, and you can see that the network traffic (and likewise the CPU usage) is no longer concentrated on 1 bridge (as with XMPP), but is still not even between bridges.
As you can see from my previous post, I got an almost perfect distribution with the old version over XMPP.
Could you please show me how to fix this problem?

The videobridge connects to an XMPP server (prosody). With a component connection it can connect to just one server, whereas with MUC it can use multiple XMPP servers. The connection to multiple XMPP servers is a requirement for Octo: a bridge connects to all XMPP servers in a deployment, in all regions, so it can interconnect with other bridges …

This is surprising. We made the changes precisely to improve balancing, and we verified that in our system this resulted in a much more uniform distribution of load. There must be something specific to your environment which triggers this, but I don’t have any specific ideas. Unfortunately, I can’t really delve into it right now.

Damian, the MUC mode or connecting to multiple XMPP servers is NOT a requirement for Octo.