Octo Cascade Bridges - here's how! FULL GUIDE

There are two parts to that. OCTO is a method to split the users of a single conference across multiple bridges. Jicofo can already spread users across bridges based on each JVB's stress and load.

As for how many users a single JVB can handle - that depends on the JVB's configuration and the bandwidth available.

Bear in mind that the endpoints need good devices and bandwidth if you'd like that many participants in a single conference, unless you're using LastN, which turns off the video of non-speakers.
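For reference, LastN is usually set in config.js; something like the following caps how many video streams each participant receives (the value 4 here is just an example, not a recommendation):

    // config.js sketch - forward video from at most the last 4 active speakers
    channelLastN: 4,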

@rn1984 So if OCTO splits users across multiple bridges - let's say 50 users on bridge 1, 50 on bridge 2, and so on - the users from all three bridges can still communicate with each other in a single meeting, right?

Example :

Meeting name: octotest
Total participants: 150
Videobridges: 3

VB 1 - 50 Users
VB 2 - 50 Users
VB 3 - 50 Users

Yep
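(If you want to confirm the split, each bridge's stats endpoint reports how many conferences and participants it is hosting - roughly like this, with the port and path depending on how the JVB's REST/colibri API is enabled:)

    # run against each of the three bridges and compare the counts
    curl -s http://<jvb-host>:8080/colibri/stats
    # look at the "conferences" and "participants" fields in the JSON output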


Wow, thanks @damencho :smiley: I am trying the same thing. I cloned a PR; please take a look whenever you're free - How to configure OCTO in docker-jitsi-meet?

Have you tried the experiment?

Octo is not working between my videobridges.

The Octo conference count is zero. I could not solve this problem… please help…

I configured a single JMS in one region and multiple JVBs in three regions,
with the Octo protocol enabled.

I use GeoIP in the nginx config to separate users into regional videobridges.
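Roughly, the idea is something like this (a sketch only, assuming the ngx_http_geoip_module and a GeoIP country database; my full nginx.conf is attached further down in the thread):

    # nginx sketch: map the client's country to a region
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    map $geoip_country_code $user_region {
        default us-east-1;
        KR      ap-northeast-2;
        SG      ap-southeast-1;
    }
    # $user_region can then be used to serve region-specific settings,
    # e.g. a config.js whose deploymentInfo.userRegion matches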

and in the jitsi-meet config.js:
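(Roughly - this is a sketch, not my exact file - the relevant part is the deploymentInfo block, with the region changed per shard:)

    // config.js sketch for the ap-northeast-2 shard
    deploymentInfo: {
        region: "ap-northeast-2",
        userRegion: "ap-northeast-2"
    },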

Regions: ap-northeast-2, ap-southeast-1, us-east-1

When I create a conference room,
I can see in jicofo.log which JVBs (ap-southeast-1, us-east-1, ap-northeast-2) joined,

and the Region info in jicofo.log,

and I see "connected to ap-southeast-1 from us-east-1".

But the Octo conference count is zero, and the Octo send/receive packet counts are zero…

The other videobridges show the same status.

Below is the videobridge sip-communicator.properties:

org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc,colibri
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME={public-ip-jms}
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.meet.ebridgehub.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD={jvb_secret}
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.meet.ebridgehub.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=cc3d5634-53f3-4fc1-bdb7-67f0c4a04112
org.jitsi.videobridge.xmpp.user.shard.DISABLE_CERTIFICATE_VERIFICATION=true
#org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS={public-ip-jvb}
#org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS={public-ip-jvb}
org.jitsi.videobridge.rest.private.jetty.port = 8080
org.jitsi.videobridge.rest.private.jetty.host=0.0.0.0
# the address to bind to locally
org.jitsi.videobridge.octo.BIND_ADDRESS={public-ip-jvb}
# the address to advertise (in case BIND_ADDRESS is not accessible)
org.jitsi.videobridge.octo.PUBLIC_ADDRESS={public-ip-jvb}
# the port to bind to
org.jitsi.videobridge.octo.BIND_PORT=4096
# the region that the jitsi-videobridge instance is in
# (ap-southeast-1 and ap-northeast-2 on the other JVBs)
org.jitsi.videobridge.REGION=us-east-1

I tried both the "SplitBridge" and "RegionBased" selection strategies in Jicofo, but got the same result.
Maybe Octo is not working;
participants always end up on one JVB.
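(For reference, the strategy is set in Jicofo's sip-communicator.properties with the full class name - a sketch of what I mean:)

    # /etc/jitsi/jicofo/sip-communicator.properties
    org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy
    # or, for region-aware selection:
    #org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy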

The conference room itself works well, but everyone ends up on one JVB.

If I create a conference in ap-northeast-2, the conference is created on the ap-northeast-2 JVB,
and the other participants (ap-southeast-1, us-east-1) also join the ap-northeast-2 JVB.

For example,
the browser displays "connected to ap-southeast-1 from us-east-1" or "ap-northeast-2 from ap-southeast-1".

But no Octo conference shows up on us-east-1 or ap-southeast-1 in the colibri/stats results.
(The test clients did not have webcams; only my laptop had a webcam.)
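(I'm checking with something like the following against the private REST port configured above - on a working cascade these Octo counters should be non-zero on every bridge in the conference:)

    curl -s http://localhost:8080/colibri/stats \
      | grep -E 'octo_conferences|octo_send_packets|octo_receive_packets'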


Below is the jvb.log from when I refresh the browser in ap-northeast-2:

2020-10-21 14:56:37.910 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] AbstractEndpoint.expire#246: Expiring.
2020-10-21 14:56:37.911 INFO: [121] [confId=94aa7e951f5a0e42 gid=276849 conf_name=test3@conference.meet.ebridgehub.net] Conference.dominantSpeakerChanged#420: ds_change ds_id=4eb88ed1
2020-10-21 14:56:37.912 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Transceiver.teardown#311: Tearing down
2020-10-21 14:56:37.912 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] RtpReceiverImpl.tearDown#312: Tearing down
2020-10-21 14:56:37.913 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] RtpSenderImpl.tearDown#290: Tearing down
2020-10-21 14:56:37.914 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] DtlsTransport.stop#184: Stopping
2020-10-21 14:56:37.914 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb local_ufrag=1333g1el5qi24d gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.stop#235: Stopping
2020-10-21 14:56:37.915 INFO: [136] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=1333g1el5qi24d name=stream-3e5113cb epId=3e5113cb local_ufrag=1333g1el5qi24d] MergingDatagramSocket$SocketContainer.runInReaderThread#770: Failed to receive: java.net.SocketException: Socket closed
2020-10-21 14:56:37.916 WARNING: [136] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=1333g1el5qi24d name=stream-3e5113cb epId=3e5113cb local_ufrag=1333g1el5qi24d] MergingDatagramSocket.doRemove#349: Removing the active socket. Won’t be able to send until a new one is elected.
2020-10-21 14:56:37.918 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=1333g1el5qi24d name=stream-3e5113cb epId=3e5113cb local_ufrag=1333g1el5qi24d] MergingDatagramSocket.close#142: Closing.
2020-10-21 14:56:37.919 INFO: [129] [confId=94aa7e951f5a0e42 epId=3e5113cb local_ufrag=1333g1el5qi24d gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.startReadingData#201: Socket closed, stopping reader
2020-10-21 14:56:37.919 INFO: [129] [confId=94aa7e951f5a0e42 epId=3e5113cb local_ufrag=1333g1el5qi24d gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.startReadingData#213: No longer running, stopped reading packets
2020-10-21 14:56:37.919 INFO: [33] [confId=94aa7e951f5a0e42 epId=3e5113cb gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint.expire#783: Expired.
2020-10-21 14:56:39.054 INFO: [33] [confId=94aa7e951f5a0e42 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e gid=276849 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e] Agent.gatherCandidates#622: Gathering candidates for component stream-ae27f539.RTP.
2020-10-21 14:56:39.065 INFO: [33] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 conf_name=test3@conference.meet.ebridgehub.net] Endpoint.lambda$setTransportInfo$11#1049: Ignoring empty DtlsFingerprint extension:
2020-10-21 14:56:40.260 INFO: [33] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] DtlsTransport.setSetupAttribute#120: The remote side is acting as DTLS client, we’ll act as server
2020-10-21 14:56:40.260 INFO: [33] [confId=94aa7e951f5a0e42 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.startConnectivityEstablishment#182: Starting the Agent without remote candidates.
2020-10-21 14:56:40.260 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.startConnectivityEstablishment#713: Start ICE connectivity establishment.
2020-10-21 14:56:40.261 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.initCheckLists#949: Init checklist for stream stream-ae27f539
2020-10-21 14:56:40.261 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.setState#923: ICE state changed from Waiting to Running.
2020-10-21 14:56:40.261 INFO: [33] [confId=94aa7e951f5a0e42 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.iceStateChanged#321: ICE state changed old=Waiting new=Running
2020-10-21 14:56:40.261 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.startChecks#142: Start connectivity checks.
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 172.17.97.113:50461/udp
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 192.168.15.145:50462/udp
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 192.168.219.107:50463/udp
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.updateRemoteCandidates#481: new Pair added: 172.31.11.96:10000/udp/host -> 172.17.97.113:50461/udp/host (stream-ae27f539.RTP).
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.updateRemoteCandidates#481: new Pair added: 172.31.11.96:10000/udp/host -> 192.168.15.145:50462/udp/host (stream-ae27f539.RTP).
2020-10-21 14:56:40.262 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.updateRemoteCandidates#481: new Pair added: 172.31.11.96:10000/udp/host -> 192.168.219.107:50463/udp/host (stream-ae27f539.RTP).
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 172.17.97.113:50461/udp
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#369: Not adding duplicate remote candidate: 172.17.97.113:50461/udp
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 192.168.15.145:50462/udp
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#369: Not adding duplicate remote candidate: 192.168.15.145:50462/udp
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#347: Update remote candidate for stream-ae27f539.RTP: 192.168.219.107:50463/udp
2020-10-21 14:56:40.263 INFO: [33] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Component.addUpdateRemoteCandidates#369: Not adding duplicate remote candidate: 192.168.219.107:50463/udp
2020-10-21 14:56:40.277 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.triggerCheck#1714: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7961553801087811583):
LocalCandidate=candidate:1 1 udp 2130706431 172.31.11.96 10000 typ host
RemoteCandidate=candidate:10000 1 udp 1853693695 14.6.45.89 50463 typ prflx
2020-10-21 14:56:40.289 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#630: Pair succeeded: 172.31.11.96:10000/udp/host -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.290 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ComponentSocket.addAuthorizedAddress#99: Adding allowed address: 14.6.45.89:50463/udp
2020-10-21 14:56:40.290 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#639: Pair validated: 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.290 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] DefaultNominator.strategyNominateFirstValid#142: Nominate (first valid): 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.290 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.nominate#1787: verify if nominated pair answer again
2020-10-21 14:56:40.290 WARNING: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 componentId=1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] MergingDatagramSocket.initializeActive#599: Active socket already initialized.
2020-10-21 14:56:40.290 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#708: IsControlling: true USE-CANDIDATE:false.
2020-10-21 14:56:40.302 INFO: [54] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient$PaceMaker.run#922: Pair failed: 172.31.11.96:10000/udp/host -> 172.17.97.113:50461/udp/host (stream-ae27f539.RTP)
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#630: Pair succeeded: 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#639: Pair validated: 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#708: IsControlling: true USE-CANDIDATE:true.
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] ConnectivityCheckClient.processSuccessResponse#723: Nomination confirmed for pair: 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP).
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e name=stream-ae27f539 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] CheckList.handleNominationConfirmed#406: Selected pair for stream stream-ae27f539.RTP: 3.35.55.105:10000/udp/srflx -> 14.6.45.89:50463/udp/prflx (stream-ae27f539.RTP)
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.checkListStatesUpdated#1878: CheckList of stream stream-ae27f539 is COMPLETED
2020-10-21 14:56:40.305 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.setState#923: ICE state changed from Running to Completed.
2020-10-21 14:56:40.306 INFO: [110] [confId=94aa7e951f5a0e42 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.iceStateChanged#321: ICE state changed old=Running new=Completed
2020-10-21 14:56:40.306 INFO: [110] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint$3.connected#374: ICE connected
2020-10-21 14:56:40.306 INFO: [110] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.logCandTypes#1986: Harvester used for selected pair for stream-ae27f539.RTP: srflx
2020-10-21 14:56:40.306 INFO: [129] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] DtlsTransport.startDtlsHandshake#102: Starting DTLS handshake
2020-10-21 14:56:40.306 INFO: [129] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] TlsServerImpl.notifyClientVersion#187: Negotiated DTLS version DTLS 1.2
2020-10-21 14:56:40.316 INFO: [129] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint.lambda$setupDtlsTransport$2#405: DTLS handshake complete
2020-10-21 14:56:40.317 INFO: [134] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint.lambda$acceptSctpConnection$8#907: Attempting to establish SCTP socket connection
Got sctp association state update: 1
sctp is now up. was ready? false
2020-10-21 14:56:40.417 INFO: [134] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint$4.onReady#849: SCTP connection is ready, creating the Data channel stack
2020-10-21 14:56:40.418 INFO: [134] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint$4.onReady#876: Will wait for the remote side to open the data channel.
2020-10-21 14:56:40.418 INFO: [129] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] DataChannelStack.onIncomingDataChannelPacket#62: Received data channel open message
2020-10-21 14:56:40.418 INFO: [129] [confId=94aa7e951f5a0e42 epId=ae27f539 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] Endpoint$4.lambda$onReady$1#857: Remote side opened a data channel.
2020-10-21 14:56:43.306 INFO: [52] [confId=94aa7e951f5a0e42 gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net ufrag=e9bbj1el5qiu4e epId=ae27f539 local_ufrag=e9bbj1el5qiu4e] Agent.setState#923: ICE state changed from Completed to Terminated.
2020-10-21 14:56:43.306 INFO: [52] [confId=94aa7e951f5a0e42 epId=ae27f539 local_ufrag=e9bbj1el5qiu4e gid=276849 stats_id=Norwood-Mz1 conf_name=test3@conference.meet.ebridgehub.net] IceTransport.iceStateChanged#321: ICE state changed old=Completed new=Terminated
2020-10-21 14:56:44.701 INFO: [20] HealthChecker.run#170: Performed a successful health check in PT0S. Sticky failure: false

Octo enable = true…
The JVBs joined Jicofo with no problem…
The conference works, with 3 or more participants…

A single JMS and multiple JVBs.
Why does it display server count: 1?
What is the problem?
Why is Octo not working?
Why is the Octo conference count 0?

@damencho @Boris_Grozev @rn1984
Please help me…
I’ve been struggling for a week.

I solved it…

I changed the JVB's Octo bind address to 0.0.0.0.

The octo_conferences value increased, and the send/receive packet counts increased.

But now I don't understand why the bind IP address has to be set to 0.0.0.0.
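(In other words, the only change was the following. My guess - not verified - is that on AWS the instance's public IP is NATed and not assigned to any local interface, so the JVB cannot bind a socket to it, while 0.0.0.0 binds on all local interfaces and PUBLIC_ADDRESS is still what gets advertised to the other bridges:)

    # the address to bind to locally (was {public-ip-jvb})
    org.jitsi.videobridge.octo.BIND_ADDRESS=0.0.0.0
    # the address advertised to the other bridges (unchanged)
    org.jitsi.videobridge.octo.PUBLIC_ADDRESS={public-ip-jvb}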

https://community.jitsi.org/t/how-to-confg-jvb-for-octo-in-docker-docker-swarm/55611/23?u=janpoo6427

@janpoo6427 Can you please share your nginx.conf file in which you have configured Geo-Location settings?

nginx.conf.txt (3.1 KB)

This is my nginx config.

The problem is now solved; my Jitsi Octo protocol is working.

thank you~!~!


I'm glad that it's working. I never tried Octo with different geo-locations. Thanks for sharing the config. :slight_smile:


Do you know why the bind IP address has to be set to 0.0.0.0?

I'm now on AWS, so I configured VPC peering between Virginia and Singapore.

So I changed the bind address to the JVB's private IP address, and then Octo stopped working again.

But a ping test from the Virginia JVB to the Singapore JVB succeeds.

I don't understand this problem.

Hi,
My setup has Octo enabled and I'm using the SplitBridgeSelectionStrategy to split users across JVBs.
It places users onto the 2 JVBs I have, but only users on the same JVB can exchange video and audio. The other users are still in the conference but can't hear or see the video of users on the other JVB.
Could you suggest which setting is wrong in this case?
Jicofo log:

    Jicofo 2020-10-26 01:06:12.714 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Member abc@muc.meet.jitsi/1a7b2d80 joined.
Jicofo 2020-10-26 01:06:12.715 INFO: [28] org.jitsi.jicofo.bridge.BridgeSelectionStrategy.log() Selected bridge Bridge[jid=jvbbrewery@internal-muc.meet.jitsi/jvb-0, relayId=jvb-0:4096, region=SG-1, stress=0.00] with stress=0.0011 for participantRegion=asia
Jicofo 2020-10-26 01:06:12.715 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid= abc@muc.meet.jitsi/1a7b2d80, bridge=jvbbrewery@internal-muc.meet.jitsi/jvb-0
Jicofo 2020-10-26 01:06:12.715 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=45676 octo_enabled= true: [[SG-1, asia][SG-1, asia, asia]]
Jicofo 2020-10-26 01:06:12.715 INFO: [392] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery for abc@muc.meet.jitsi/1a7b2d80
Jicofo 2020-10-26 01:06:12.716 INFO: [392] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered features for abc@muc.meet.jitsi/1a7b2d80 in 1
Jicofo 2020-10-26 01:06:12.716 INFO: [392] org.jitsi.jicofo.AbstractChannelAllocator.log() Using jvbbrewery@internal-muc.meet.jitsi/jvb-0 to allocate channels for: Participant[abc@muc.meet.jitsi/1a7b2d80]@2058681079
Jicofo 2020-10-26 01:06:12.752 INFO: [392] org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending session-initiate to: abc@muc.meet.jitsi/1a7b2d80
Jicofo 2020-10-26 01:06:14.157 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: abc@muc.meet.jitsi/1a7b2d80
Jicofo 2020-10-26 01:06:14.158 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept from abc@muc.meet.jitsi/1a7b2d80 with accepted sources:Sources{ audio: [ssrc=2664474199 ] }@1502757851
Jicofo 2020-10-26 01:06:14.159 INFO: [28] org.jitsi.protocol.xmpp.AbstractOperationSetJingle.log() Notify add SSRC abc@muc.meet.jitsi/10d76910 SID: 7ismc0qgeiqol Sources{ audio: [ssrc=2664474199 ] }@615909116 source_Groups{ }@2010425089
Jicofo 2020-10-26 01:06:14.159 INFO: [28] org.jitsi.protocol.xmpp.AbstractOperationSetJingle.log() Notify add SSRC abc@muc.meet.jitsi/cfa8c93e SID: 10q23ijhsjbr Sources{ audio: [ssrc=2664474199 ] }@615909116 source_Groups{ }@2010425089
Jicofo 2020-10-26 01:06:39.589 INFO: [84] org.jitsi.jicofo.xmpp.FocusComponent.log() Focus request for room: abc@muc.meet.jitsi
Jicofo 2020-10-26 01:06:39.652 INFO: [28] org.jitsi.jicofo.ChatRoomRoleAndPresence.log() Chat room event ChatRoomMemberPresenceChangeEvent[type=MemberJoined sourceRoom=org.jitsi.impl.protocol.xmpp.ChatRoomImpl@61e81e73 member=ChatMember[abc@muc.meet.jitsi/64b1fdce, jid: null]@388194445]
Jicofo 2020-10-26 01:06:39.652 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Member abc@muc.meet.jitsi/64b1fdce joined.
Jicofo 2020-10-26 01:06:39.653 INFO: [28] org.jitsi.jicofo.bridge.BridgeSelectionStrategy.log() Selected bridge Bridge[jid=jvbbrewery@internal-muc.meet.jitsi/jvb-1, relayId=jvb-1:4096, region=SG-1, stress=0.00] with stress=4.0E-4 for participantRegion=asia
Jicofo 2020-10-26 01:06:39.653 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid= abc@muc.meet.jitsi/64b1fdce, bridge=jvbbrewery@internal-muc.meet.jitsi/jvb-1
Jicofo 2020-10-26 01:06:39.653 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=45676 octo_enabled= true: [[SG-1, asia, asia][SG-1, asia, asia]]
Jicofo 2020-10-26 01:06:39.653 INFO: [392] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery for abc@muc.meet.jitsi/64b1fdce
Jicofo 2020-10-26 01:06:39.654 INFO: [392] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered features for abc@muc.meet.jitsi/64b1fdce in 1
Jicofo 2020-10-26 01:06:39.654 INFO: [392] org.jitsi.jicofo.AbstractChannelAllocator.log() Using jvbbrewery@internal-muc.meet.jitsi/jvb-1 to allocate channels for: Participant[abc@muc.meet.jitsi/64b1fdce]@2123067769
Jicofo 2020-10-26 01:06:39.718 INFO: [392] org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending session-initiate to: abc@muc.meet.jitsi/64b1fdce
Jicofo 2020-10-26 01:06:41.133 INFO: [28] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: abc@muc.meet.jitsi/64b1fdce

JVB 1 Log

Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Starting with 60 second interval.
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Started with interval=10000, timeout=PT30S, maxDuration=PT3S, stickyFailures=false.
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Initialized with bind address jvb-1 and bind port 4096. Receive buffer size 212992 (asked for 10485760). Send buffer size 212992 (asked for 10485760).
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Created Octo UDP transport
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Created OctoTransport
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Not starting CallstatsService, disabled in configuration.
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Starting public http server
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Base URL: wss://localhost:8443/colibri-ws/<no value>
Oct 26, 2020 8:04:38 AM org.eclipse.jetty.util.log.Log initialized
INFO: Logging initialized @1890ms to org.eclipse.jetty.util.log.JavaUtilLog
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Connected.
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Logging in.
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Registering servlet at /colibri-ws/*, baseUrl = wss://localhost:8443/colibri-ws/<no value>
Oct 26, 2020 8:04:38 AM org.eclipse.jetty.server.Server doStart
INFO: jetty-9.4.15.v20190215; built: 2019-02-15T16:53:49.381Z; git: eb70b240169fcf1abbd86af36482d1c49826fa0b; jvm 1.8.0_265-8u265-b01-0+deb9u1-b01
Oct 26, 2020 8:04:38 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Joined MUC: jvbbrewery@internal-muc.meet.jitsi
Oct 26, 2020 8:04:38 AM org.eclipse.jetty.server.handler.ContextHandler doStart
INFO: Started o.e.j.s.ServletContextHandler@4f3bbf68{/,null,AVAILABLE}
Oct 26, 2020 8:04:38 AM org.eclipse.jetty.server.AbstractConnector doStart
INFO: Started ServerConnector@d41f816{HTTP/1.1,[http/1.1]}{0.0.0.0:9090}
Oct 26, 2020 8:04:38 AM org.eclipse.jetty.server.Server doStart
INFO: Started @2159ms

JVB 2 log

Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Starting with 60 second interval.
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Started with interval=10000, timeout=PT30S, maxDuration=PT3S, stickyFailures=false.
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Initialized with bind address jvb-0 and bind port 4096. Receive buffer size 212992 (asked for 10485760). Send buffer size 212992 (asked for 10485760).
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Created Octo UDP transport
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Created OctoTransport
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Not starting CallstatsService, disabled in configuration.
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Starting public http server
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Base URL: wss://localhost:8443/colibri-ws/<no value>
Oct 26, 2020 8:05:00 AM org.eclipse.jetty.util.log.Log initialized
INFO: Logging initialized @1414ms to org.eclipse.jetty.util.log.JavaUtilLog
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Connected.
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Logging in.
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Registering servlet at /colibri-ws/*, baseUrl = wss://localhost:8443/colibri-ws/<no value>
Oct 26, 2020 8:05:00 AM org.eclipse.jetty.server.Server doStart
INFO: jetty-9.4.15.v20190215; built: 2019-02-15T16:53:49.381Z; git: eb70b240169fcf1abbd86af36482d1c49826fa0b; jvm 1.8.0_265-8u265-b01-0+deb9u1-b01
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Joined MUC: jvbbrewery@internal-muc.meet.jitsi
Oct 26, 2020 8:05:00 AM org.eclipse.jetty.server.handler.ContextHandler doStart
INFO: Started o.e.j.s.ServletContextHandler@19c65cdc{/,null,AVAILABLE}
Oct 26, 2020 8:05:00 AM org.eclipse.jetty.server.AbstractConnector doStart
INFO: Started ServerConnector@5c44c582{HTTP/1.1,[http/1.1]}{0.0.0.0:9090}
Oct 26, 2020 8:05:00 AM org.eclipse.jetty.server.Server doStart
INFO: Started @1632ms
Oct 26, 2020 8:05:00 AM org.jitsi.utils.logging2.LoggerImpl log
INFO: Starting private http server


I'm trying this setup on Kubernetes. I have the JVB load balancing and autoscaling working; the only thing left is Octo :slight_smile:

Oh, I got it working by mapping port 4096 to the container :slight_smile:
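(Roughly, in the JVB container spec - this is just a sketch of what I mean, not my exact manifest:)

    # Kubernetes sketch: expose the Octo UDP port on the JVB container
    ports:
      - name: octo
        containerPort: 4096
        hostPort: 4096      # so bridges on other nodes can reach it
        protocol: UDP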

@congthang @damencho @janpoo6427 @rn1984 Can you help me with my issue? I am struggling to get it working. I will list the steps I have done for reference.

I have 4 virtual machines with very high-speed internet.

Machine IP Role
A 172.16.4.200 Full Docker Jitsi Meet Stack
B 172.16.4.231 JVB1
C 172.16.4.232 JVB2
D 172.16.4.233 JVB3

For the OCTO configuration, I am using this PR suggested by the community in an earlier post - https://github.com/goacid/docker-jitsi-meet/tree/octo_support - which I believe contains every configuration necessary to set up Octo in a docker-jitsi-meet instance.

Using this, I have successfully set up my docker-jitsi-meet instance on Machine A, listening on 443 with HTTP redirected. The .env file looks like this - https://pastebin.com/B2u6gJke

On Machines B, C and D, I have cloned the same repo. From what I understand, for the Octo configuration we only need to set up JVBs on the other VMs or physical servers (here B, C and D) and connect them to the prosody of Machine A. So the .env on each of these three machines has different passwords generated by ./gen-passwords, and I have enabled JVB_ENABLE_APIS=rest,colibri in all three .env files. My docker-compose file for these three machines contains configuration only for the jvb service, which looks like this - https://pastebin.com/aGsygNwz
(P.S. - I have removed the depends_on: prosody section, since only the jvb container will be running on these machines.)
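(For context, a stripped-down sketch of what that jvb-only service roughly looks like is below; the pastebin above has the exact file, and the values here are placeholders only.)

    # docker-compose sketch for Machines B, C and D (placeholders only)
    version: '3'
    services:
      jvb:
        image: jitsi/jvb
        ports:
          - '10000:10000/udp'
          - '4096:4096/udp'          # Octo port between bridges
        environment:
          - XMPP_SERVER=172.16.4.200 # prosody on Machine A
          - JVB_AUTH_USER=jvb
          - JVB_AUTH_PASSWORD=<from that machine's .env>
          - JVB_ENABLE_APIS=rest,colibri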

My questions are:

1. Do I need to change the JVB_AUTH_USER=jvb setting in the .env file to something like

JVB name    Machine role
jvb1        JVB1
jvb2        JVB2
jvb3        JVB3

and then run the JVB containers?

2. How do I register these three JVBs from Machines B, C and D with the prosody of Machine A? I believe it is done using the command below

prosodyctl register jvb $DOMAIN $PASSWORD

which in our case would be

prosodyctl register jvb1 meet.jitsi $PASSWORD
prosodyctl register jvb2 meet.jitsi $PASSWORD
prosodyctl register jvb3 meet.jitsi $PASSWORD

The $PASSWORD can be taken from the respective .env file on Machines B, C and D.

3. After doing this, how do I proceed? How will the load balancing happen? Is there any step that I have missed?

4. Correct me if I am wrong: when we add more videobridges (without an OCTO configuration), Jitsi load-balances between meetings -

for example, Meeting A will be scheduled on jvb1 and Meeting B on jvb2.

But using OCTO, participants of the same meeting can be spread across different JVBs, which increases the participant capacity of a single meeting beyond 75.

5. Also, in the case of a single machine - let's say Machine A has 128 CPUs and 3 TiB of RAM - how do I set up Octo on that one machine? For example, 4 JVBs on Machine A with OCTO enabled.

Let me know if you need any more information.

References - https://github.com/jitsi/docker-jitsi-meet/pull/750#issuecomment-715323077

  1. No
  2. Follow the settings above for /etc/jitsi/videobridge/config and /etc/jitsi/videobridge/sip-communicator.properties on each JVB; it will automatically register with Jicofo via the main web XMPP server (prosody).
    In /etc/jitsi/videobridge/config you already give prosody the JVB secret, and in /etc/jitsi/videobridge/sip-communicator.properties the port to bind Octo to. Jicofo gets these parameters from prosody. Make sure the UDP port is open on all your JVBs so they can connect, and check both the Jicofo and JVB logs to confirm they connected.
    You also need to make sure org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME of each JVB is unique (see the sketch after this list).
  3. If Jicofo can connect to the JVBs, load balancing can now happen! Jicofo will select a JVB according to the strategy you set in Jicofo's sip-communicator.properties. For testing use: org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy,
    then check the Jicofo logs to see which JVB is selected for each new participant.
    In production use RegionBasedBridgeSelectionStrategy; it will select the best JVB (lower load and nearby). The selection logic is described here: https://github.com/jitsi/jicofo/blob/master/src/main/java/org/jitsi/jicofo/bridge/RegionBasedBridgeSelectionStrategy.java
  4. For OCTO, just clone the main Machine A to another region and remove Jicofo; the XMPP_BOSH_URL_BASE in that region should point to the first region. The JVBs in that region don't need to connect to the web machine (XMPP/prosody) of their own region; they only need to connect to the first region, where Jicofo runs.
    So setting up a JVB in the new region is no different from the first region, except for org.jitsi.videobridge.REGION=new-region, and this new-region is specified in the web config of that region:

deploymentInfo: {
    shard: "shard1",
    region: "new-region",
    userRegion: "new-region"
}
You can use a single domain and Route 53 geolocation routing to send each client to the nearest region, or just use two domains and tell people in each region which domain to use.
Remember, the domain decides the region via the deployment config above, and people on that deployment will use the JVBs of that region. But they all connect to the main Jicofo, so they end up in the same room, like this:
region1.yourdomain.com/room1
region2.yourdomain.com/room1
Both links go to the same room1, and the participants can communicate even though they are on different JVBs. Isn't that great!

  5. On a single machine the same logic applies: you can run several JVBs, each with its own UDP port, since this port is used to send the media stream directly to the users.
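As mentioned in point 2, here is a sketch of the Octo-related properties that must be set on each JVB (placeholder values only; keep the nickname unique per bridge):

    # per-JVB sip-communicator.properties (sketch)
    org.jitsi.videobridge.octo.BIND_ADDRESS=0.0.0.0
    org.jitsi.videobridge.octo.PUBLIC_ADDRESS=<address the other bridges can reach>
    org.jitsi.videobridge.octo.BIND_PORT=4096
    org.jitsi.videobridge.REGION=<this bridge's region>
    # must be unique per bridge
    org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=<unique-id>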

I made this repo with my Jitsi configuration on Kubernetes, with JVB autoscaling and OCTO enabled.
You can check it as a reference for your own setup. I have it working on DigitalOcean.


Can you please share the changes you made to the database? It is really tough to find which files to change to make a shared DB work with Octo.

Cheers,
Malav

You just need one Jicofo instance that all the JVBs connect to, and it should be fine.

@congthang This is amazing. I am gonna try it :slight_smile: I’ll update here as soon as it works for me :slight_smile:

Actually, I think the only thing we did was direct the clients to Jicofo. Then everything was sorted out.
