Working Multi Jitsi-meet / multi Videobridge setup

Thank you for the quick response @jcfischer.
Will try it out.

Regards,
Nithin

Did you ever struggle with participants connecting from inside restricted networks? They told me they can't connect directly, so I had to set up a TURN server. The next issue: they can't connect through UDP, so only TCP is allowed…
It is so complicated…
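
For anyone else hitting this: the relevant coturn pieces for TCP-only clients look roughly like this (a sketch, not a tested config; the hostname and paths are placeholders):

# turnserver.conf: also accept TURN over TLS/TCP on 443 for clients whose networks block UDP
tls-listening-port=443
cert=/etc/ssl/turn.example.com.crt
pkey=/etc/ssl/turn.example.com.key

The TURN server itself can still reach the bridge over UDP; only the client leg runs over TCP.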

For auto-scaling of JVBs, I need to set a custom password, since I cannot read the JMS's password to write into an auto-scaled JVB's sip-communicator.properties.

org.jitsi.videobridge.xmpp.user.shard.PASSWORD

Is there a way of changing this password or adding a new user on the main server (JMS)? I would use a pre-defined password there and copy the same one onto each JVB.
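
My plan, in case it helps others (a sketch; domain and password are placeholders): set a password I control for the jvb account on the JMS via prosodyctl, then bake the same value into the auto-scaled JVBs.

# on the JMS: change the existing jvb account's password (prompts for it)
prosodyctl passwd jvb@auth.example.com
# or (re)create the account non-interactively, which should also work
prosodyctl register jvb auth.example.com MyPreSharedSecret

Then set org.jitsi.videobridge.xmpp.user.shard.PASSWORD=MyPreSharedSecret in each auto-scaled JVB's sip-communicator.properties.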

@jcfischer Hi, Jens-Christian!

Thanks for publishing your config. Just checked your coturn config on https://github.com/switch-ch/jitsi-deploy/tree/master/ansible/roles/coturn .

Some questions about your config. I'm trying to set up a STUN/TURN server too.

  1. coturn recommends two public IPs? There is a listening-ip=0.0.0.0 option in your /etc/turnserver.conf. Did you configure your coturn host with multiple IPs, and if so, do all IPs point/resolve to the same DNS name? Or do you run multiple coturn STUN/TURN servers with different IPs and DNS names like coturn1.domain.tld, coturn2.domain.tld etc.?

  2. There is no tls-listening-port=443 setting in your /etc/turnserver.conf, but you set cert and pkey there, and in prosody_config.j2 you set turns with port = "443", transport = "tcp". Could you explain, please?

  3. Is port 443 TCP only or TCP/UDP, and is port 443 the only open incoming port for coturn on your firewall, or does 10000-20000/udp have to be open too?

  4. In jitsi-deploy/ansible/roles/jitsi/templates/sip-communicator.properties.j2 you set your coturn servers on the videobridges:

{% if 'coturn' in groups and groups['coturn'][0] %}
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES={{ hostvars[groups['coturn'][0]].inventory_hostname }}:443
{% endif %}

Is there anything else to configure on the bridges for coturn? No password/secret settings here? Not sure if I'm missing something.

On the jitsi and prosody side, it seems it's just a matter of putting the coturn server into the stunServers section in /jitsi-deploy/blob/master/ansible/roles/jitsi/templates/jitsi-config.j2 and, secondly, setting the turncredentials in jitsi-deploy/ansible/roles/jitsi/templates/prosody_config.j2, right? Or did I overlook something? (See the sketch below my questions for the shape I have in mind.)

  5. You set static-auth-secret in /etc/turnserver.conf. Is this secret for TURN only or does it restrict STUN too?

  6. Did you set SRV records in your DNS?
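
For context, this is the shape I have in mind on the prosody side (a sketch with a placeholder host and secret; the secret has to match static-auth-secret from turnserver.conf, if I understand mod_turncredentials correctly):

-- prosody: mod_turncredentials settings (sketch, placeholders)
turncredentials_secret = "changeme"; -- must match coturn's static-auth-secret
turncredentials = {
    { type = "stun", host = "coturn.example.com", port = "443" },
    { type = "turns", host = "coturn.example.com", port = "443", transport = "tcp" }
};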

Cheers and thanks for your time
Marcus

I just returned from holidays (sorry, not sorry :wink: for the late reply)

The coturn configuration is something we cribbed from the internet, and I don't fully understand it. I'll look into your questions (I saw the GitHub issue) and will check what kind of obvious stupid mistakes we made.

/jc


Hi everyone, I'm also facing the same issue. Can you please have a look? I need to test whether it's working or not after enabling OCTO / the OCTO configuration. Thank you very much @jcfischer @localguru @itrich @saghul @Nithin_Upadhya @kitti

Can anyone explain how a second Jitsi front-end and Jicofo authenticate with the shared JVBs? Our configs look the same as the ones in this thread (Working Multi Jitsi-meet / multi Videobridge setup), where you can only set one hostname, not several. Do you then share the prosody server among several jicofos?

The JVBs authenticate with the Jicofos/Prosodies. The passwords need to be identical.

@jcfischer Do you mean I should set the same password in /etc/jitsi/jicofo/config and /etc/jitsi/videobridge/config?
And where is the prosody config, i.e. in which prosody file should I set the password? It's MUC-based, not pubsub, so I'm a little confused. On the second videobridge node, should I set /etc/jitsi/videobridge/config the same as on the main server?
Please confirm…
Thanks in advance!

Hi @jcfischer! Great work!
I really don't understand the differences between the octo strategies… Does this config make jicofo split multiple users in THE SAME room across different JVBs? Or does it only split rooms across different JVBs, each room with its own users? So could I use RegionBased with one region to have one room with 100 users balanced across multiple JVBs?

Basically, the jvb_password that you are setting up needs to be identical across all jicofos and videobridges. Check our installation scripts:

prosodyctl register jvb auth.{{ jitsi_fqdn }} {{ jitsi_meet_videobridge_password }}

in the prosody configuration

and

org.jitsi.videobridge.xmpp.user.shard-{{ loop.index }}.PASSWORD={{ jitsi_meet_videobridge_password }}

in the videobridge config file

There are two octo strategies. One is just for testing - it splits a call across bridges immediately. The other one (RegionBased, iirc) does a more sensible thing, but also splits a call across multiple bridges in the same region.
Note: we no longer have this enabled in our configuration, so I have little experience with it.
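
For reference, the strategy selection used to live in jicofo's sip-communicator.properties, roughly like this (a sketch from memory, so double-check the property names against your version):

# jicofo: pick the octo bridge-selection strategy
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy
# (SplitBridgeSelectionStrategy is the testing-only one that splits immediately)

# each videobridge additionally needs octo enabled and a region, e.g.:
org.jitsi.videobridge.octo.BIND_ADDRESS=10.0.0.5
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=region1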

Same here - after a while of using Octo, we ended up back on the default settings because they are more efficient. TIP: the best tweaks are on the frontend :wink:
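
To give an idea of what "frontend tweaks" can mean (a sketch of common config.js knobs; the exact values are just examples):

// config.js: reduce per-bridge load from the client side
channelLastN: 4,        // only forward video of the 4 most active speakers
startAudioMuted: 10,    // participants beyond the 10th join muted
startVideoMuted: 10,
resolution: 480,        // cap the requested capture height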

You can contact me via lamlt2710@gmail.com; I will help you set this up, because it is quite complicated.

Hello @jcfischer,
May I get your comments, please?

The new videobridge has been added with no problem (I can see it in the jicofo logs).
But when I stop the JVB on the JMS and start jvb2 on the second server, calls fail with an error message.
All the JMS and JVB machines are behind NAT and a firewall.
According to my searching on Google, I should disable
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
and add
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=<private IP of the JMS>
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=<public IP of the JMS>

However, on jvb2, which IP should I add? The public IP of the JMS?
I ask because I have only one public IP address.
By the way, if needed, I can provide a new public IP for jvb2.

on jms:
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.fqdn
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=Vx2R2#mz
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.fqdn
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb1
org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.xmpp.user.shard.DISABLE_CERTIFICATE_VERIFICATION=true

on jvb:
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=meet-jit-si-turnrelay.jitsi.net:443
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=public_ip_of_jms
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.fqdn
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=Vx2R2#mz
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.fqdn
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb2
org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.xmpp.user.shard.DISABLE_CERTIFICATE_VERIFICATION=true

thanks

You are using public IPs - are the relevant ports open? (I think 5222)

Fixed: a new public IP per videobridge did the trick.
Thank you.
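
For anyone following along: with one public IP per bridge, each bridge gets its own NAT harvester pair in sip-communicator.properties, something like this (a sketch, placeholder IPs):

# on jvb2
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.6
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=198.51.100.7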

@jcfischer
As concurrent users increase, we can handle the load by adding multiple JVBs, but at some point I think the (prosody + jicofo) node can become a bottleneck. How can we install Nginx + the Jitsi front end on one instance and have multiple jicofo + prosody nodes on different servers, each node with its own multiple videobridges? Do you have any guidance or documentation?

Check how to do this with “multiple shards” and HAProxy :wink:
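
A minimal sketch of the idea (shard addresses and cert path are placeholders): HAProxy hashes the room URL parameter, so every participant of a conference lands on the same shard:

# haproxy.cfg (sketch)
frontend meet_https
    bind *:443 ssl crt /etc/haproxy/meet.pem
    default_backend meet_shards

backend meet_shards
    balance url_param room    # pin each conference to one shard
    hash-type consistent
    server shard1 10.0.1.10:443 ssl verify none check
    server shard2 10.0.1.11:443 ssl verify none check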

That is correct - then the other sharding techniques come into play.