HAProxy Configuration

Hi All,

I would like to share our HAProxy config, which uses a stick table on URL parameters.

global
log         127.0.0.1 local2     #Log configuration
tune.ssl.cachesize 100000
tune.ssl.lifetime 600
tune.ssl.maxrecord 0
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-server-options no-sslv3 no-tls-tickets no-tlsv11
stats socket /var/run/haproxy.sock mode 660 level admin
stats timeout 30s

frontend main_bridge
	bind *:10000-20000
	timeout client 60000
	option logasap
	log global
	mode tcp
	maxconn 20000
	default_backend bridge-server

frontend main_web
	bind *:443 ssl crt /etc/ssl/web.net.pem
	log-format "client\ =\ %ci:%cp,\ server\ =\ %si:%sp%HU\ (%s),\ backend\ =\ %bi:%bp\ (%b),\ status\ =\ %ST"
	timeout client 60000
	option logasap
	log global
	mode http
	maxconn 20000
	default_backend web-server

backend bridge-server
	balance source
	stick-table type string len 256 size 200k expire 120m
	stick on url_param(room) table web-server
	option httpchk GET /about/health
	http-check expect status 200
	hash-type consistent
	mode tcp
	timeout connect 6000
	timeout server 60000
	server conf1-bridge1 172.30.1.1 check port 8080
	server conf1-bridge2 172.30.1.2 check port 8080
	server conf2-bridge1 172.30.0.1 check port 8080
	server conf2-bridge2 172.30.0.2 check port 8080
	
backend web-server
	balance source
	stick-table type string len 256 size 200k expire 120m
	stick on url_param(room) table web-server
	option httpchk GET /
	http-check expect status 200
	mode http
	timeout connect 6000
	timeout server 60000
	server conf1-meet 172.30.1.3:4444 check inter 6000 ssl verify required ca-file /etc/ssl/web.net.pem
	server conf2-meet 172.30.0.3:4444 check inter 6000 ssl verify required ca-file /etc/ssl/web.net.pem

frontend stats
	bind *:9000 ssl crt /etc/ssl/web.net.pem
	mode http
	timeout client 600
	log global
	stats enable
	stats hide-version
	stats realm Haproxy\ Statistics
	stats uri /
	stats auth haproxyadmin:haproxystatspassword

Our config includes health checks on port 8080 on the bridges, enabled by starting the bridges with --apis=rest. On our signaling nodes coturn is enabled, so we forward to port 4444. Additionally, there is a stats page on port 9000; you may set your own username and password for authentication.
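For reference, the bridge-side piece behind that health endpoint is the REST API the bridge is started with. A minimal sketch, assuming the Debian package layout (the file and flag style may differ on newer JVB versions):

    # /etc/jitsi/videobridge/config
    JVB_OPTS="--apis=rest"

With the REST API enabled, GET /about/health on port 8080 returns 200 when the bridge is healthy, which is what the httpchk lines above probe.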

There are 2 caveats to our configuration:

  1. It only works for a single HAProxy server. I have yet to figure out stick-table synchronization. Any help on this would be great.
  2. Under heavy load (100+ participants in a single conference), some users start receiving SSL handshake failures and we have not been able to figure this out.

Hope this helps the community.

Regards,
Anthony


Found a possible way to do stick-table synchronization:

peers myhaproxies
   peer proxy-one [proxy1_ip]:1024
   peer proxy-two [proxy2_ip]:1024

From https://community.jitsi.org/t/octo-config-help/19876/8
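Note that the peers section by itself doesn’t synchronize anything; the stick-table you want shared also has to reference it, so the table definition would become something like:

   stick-table type string len 256 size 200k expire 120m peers myhaproxies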


This is going to save me multiple days of work.
Would you mind sharing your server requirements? Cores, RAM, etc.

Also, are you running HAProxy on a separate server or on the same server as the Jitsi front end?

We are running on separate servers on AWS (m5.2xlarge, RHEL 8.0).

I tried your config and it works great!
I only had to change ssl verify required to “verify none” because I was getting an SSL handshake failure.

A couple of questions…
Have you figured out how to make geolocation work?
Also, how do you select the closest JVB? I don’t see the config.deploymentInfo.userRegion=“Region” setting anywhere.

Our setup was only in one region, but we wanted to distribute load evenly, so I followed https://github.com/jitsi/jitsi-videobridge/blob/master/doc/octo.md with

org.jitsi.videobridge.REGION=bridgexxxx

where xxxx is a randomly generated number,
and

org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=SplitBridgeSelectionStrategy
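
For context, assuming the standard Debian package paths: the REGION property goes in /etc/jitsi/videobridge/sip-communicator.properties on each bridge, and the BridgeSelector property in /etc/jitsi/jicofo/sip-communicator.properties on the signaling server.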

So “Connected to” would look something like this (screenshot not included):

I hope this would be helpful for you.

Thank you @Anthony_Garcia. Yes, I am already using Octo.
I will post my final configuration here once I am done with the regional deployment; it might help someone else.


Please do, I am looking to implement Octo on our Jitsi server.

First, make sure you have Octo working. I am going to post the HAProxy config here. If Octo isn’t working for you, there are plenty of posts in the forum that will help you. If you need more information, feel free to PM me with any questions you might have.


Thanks mate, will ping you in case I need help.

Below is a working config for Octo + regional deployment with private IP space. However, this is my first time working with HAProxy, so any recommendations or improvements would be appreciated. If you want to deploy this way with public IP space for regional deployment, search around; I’ve seen several solutions for HAProxy geolocation, but I haven’t tried them yet. Our intention is not to deploy publicly for now.
I am also using Shibboleth for SSO, so if you are not using Shibboleth there is an extra ACL that you won’t need.
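(The Shibboleth-specific parts are the txn.shibroom variable and the shibUS / shibCH ACLs in the Front-End-Web section below.)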

Here is my environment:

  • 2 front ends (Jitsi Meet + Jicofo + Prosody)
    • 1 in the USA: 10.0.0.1
    • 1 in China: 10.0.1.1
  • 4 video bridges (JVBs)
    • 2 in the USA: 10.0.0.2 & 10.0.0.3
    • 2 in China: 10.0.1.2 & 10.0.1.3

I had trouble selecting the closest bridge to the client at first because I had different names in the deploymentInfo config.
I changed this:

US Server

deploymentInfo: {
        shard: "shard",
        region: "Texas",
        userRegion: "United States"
    },

China Server

deploymentInfo: {
        shard: "shard1",
        region: "Shanghai",
        userRegion: "China"
    },

to matching names like this, and it worked!

deploymentInfo: {
        shard: "shard",
        region: "Texas",
        userRegion: "Texas"
    },


deploymentInfo: {
        shard: "shard1",
        region: "Shanghai",
        userRegion: "Shanghai"
    },
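
This makes sense if Jicofo is doing region-based bridge selection: the client’s userRegion is compared against the bridge’s region, so the labels have to match for the closest bridge to be preferred.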

HAProxy.cfg

global
        log stdout format raw local0 debug
        tune.ssl.cachesize 100000
        tune.ssl.lifetime 600
        tune.ssl.maxrecord 0
        tune.ssl.default-dh-param 2048
        ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
        ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
        ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
        ssl-default-server-options no-sslv3 no-tls-tickets no-tlsv11
        stats socket /var/run/haproxy.sock mode 660 level admin
        stats timeout 30s

frontend Front-End-VideoBridge
        bind *:10000-20000
        timeout client 60000
        option logasap
        log global
        mode tcp
        maxconn 20000
        default_backend VideoBridge-Servers

backend VideoBridge-Servers
        balance source
        stick on url_param(room) table Web-Servers
        option httpchk GET /about/health
        http-check expect status 200
        hash-type consistent
        mode tcp
        timeout connect 6000
        timeout server 60000
        server jvbUS01-10.0.0.2 10.0.0.2 check port 8080
        server jvbUS02-10.0.0.3 10.0.0.3 check port 8080
        server jvbCH01-10.0.1.2 10.0.1.2 check port 8080
        server jvbCH02-10.0.1.3 10.0.1.3 check port 8080

frontend Front-End-Web
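        # ACLs below: the USsubnets/CHsubnets ACLs match the client source address against the
        # per-region subnet lists; the shibroom/phoneroom variables capture the room name from the
        # room / conference URL parameters (stripping anything from '@' onwards); and the
        # roomXX/shibXX/phoneXX ACLs check which shard (by server id: 1 = US, 2 = China) already
        # holds that room in the Web-Servers stick table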
        acl USsubnets src -f /usr/local/etc/haproxy/US.subnets
        acl CHsubnets src -f /usr/local/etc/haproxy/CH.subnets

        http-request set-var(txn.shibroom) urlp(room,&),regsub(@.*,,g)
        http-request set-var(txn.phoneroom) urlp(conference),regsub(@.*,,g)

        acl roomUS urlp(room),table_server_id(Web-Servers) -m int eq 1
        acl shibUS var(txn.shibroom),table_server_id(Web-Servers) -m int eq 1
        acl phoneUS var(txn.phoneroom),table_server_id(Web-Servers) -m int eq 1

        acl roomCH url_param(room),table_server_id(Web-Servers) -m int eq 2
        acl shibCH var(txn.shibroom),table_server_id(Web-Servers) -m int eq 2
        acl phoneCH var(txn.phoneroom),table_server_id(Web-Servers) -m int eq 2

        bind *:443 ssl crt /usr/local/etc/haproxy/cert.pem
        log-format 'client = %ci:%cp, server = %si:%sp%HU (%s), backend = %bi:%bp (%b), status = %ST'
        timeout client 60000
        option logasap
        log global
        mode http
        maxconn 20000
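        # routing order: first the shard that already owns the room (stick-table match),
        # then the client's region, then the shared default backend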
        use_backend US-Web-Server if roomUS OR shibUS OR phoneUS
        use_backend China-Web-Server if roomCH OR shibCH OR phoneCH
        use_backend US-Web-Server if USsubnets
        use_backend China-Web-Server if CHsubnets
        default_backend Web-Servers

backend US-Web-Server
        stick on url_param(room) table Web-Servers

        option httpchk GET /
        http-check expect status 200
        mode http
        timeout connect 6000
        timeout server 60000
        server US-10.0.0.1 10.0.0.1:443 check id 1 inter 6000 ssl verify none

backend China-Web-Server
        stick on url_param(room) table Web-Servers
        option httpchk GET /
        http-check expect status 200
        mode http
        timeout connect 6000
        timeout server 60000
        server CH-10.0.1.1 10.0.1.1:443 check id 2 inter 6000 ssl verify none

backend Web-Servers
        balance source
        stick-table type string len 256 size 200k expire 120m
        stick on urlp(room) table Web-Servers
        option httpchk GET /
        http-check expect status 200
        mode http
        timeout connect 6000
        timeout server 60000
        server US-10.0.0.1 10.0.0.1:443 check id 1 inter 6000 ssl verify none 
        server CH-10.0.1.1 10.0.1.1:443 check id 2 inter 6000 ssl verify none 

frontend stats
        bind *:9000 ssl crt /usr/local/etc/haproxy/cert.pem
        mode http
        timeout client 600
        log global
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /
        stats auth haproxyadmin:haproxystatspassword

My subnet files look like this inside:

US.subnets
10.0.0.0/24

CH.subnets
10.0.1.0/24
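
These lists feed the src ACLs in Front-End-Web, so a client coming from 10.0.0.0/24 lands on the US shard and one from 10.0.1.0/24 lands on the China shard whenever the room isn’t already pinned to a shard in the stick table.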


Hi @creativeguitar @Anthony_Garcia

Thanks for sharing these. I am kind of new to this forum.

I have just set up the basic Jitsi Meet on AWS and it works fine.

I am now trying to find a set of concise instructions on how to set up a JVB + HA deployment on AWS.

Would you have a step-by-step guide for AWS deployment? Or can you point me to some?

Thanks
-Pradeep

I used the Docker version of Jitsi (https://github.com/jitsi/docker-jitsi-meet), but I deployed on internal servers, not AWS. There are some examples in the forum.

Thanks @creativeguitar! This step is done. It’s working fine for me on AWS.

I am now actually looking for a set of instructions for a High Availability / cluster deployment using JVBs, etc.

I somewhat understand the diagrams, but I am not able to find a clear sequence of steps to deploy it.

Can you point me to a good resource or share your own experience, please?

Hi @Pradeep_J ,

You may look into an Octo deployment to have multiple bridges connecting to a single signalling server.

You will then have multiple signalling servers load-balanced by HAProxy.

I’m not sure this config will work. From the docs, you should only have one Jicofo instance that connects all bridges. Otherwise, clients that go through one HAProxy won’t be able to join the same conference as users that go through the other.

This photo (not included here) is from the Jitsi Meet Community Call. It is what they implemented on meet.jit.si.

You can achieve HA with the setup scheme posted by @Anthony_Garcia, but as @rn1984 pointed out, this is not for load balancing.
It’s a setup where you have one active shard with access to all available bridges, and if, for example, Prosody goes down, HAProxy will switch to another shard that also has access to all bridges. The switch means clients will see a service interruption (“something went wrong”) and after a while (15 seconds) will try to reconnect; by then HAProxy has already switched over, so all conferences move to the other shard.
There is no such thing as multiple signaling servers in one shard, I think, something like a cluster. At least all videobridges can be in all shards at once :wink:


I guess it depends on how you configure your HAProxies, because if two users end up on different shards they won’t be able to join the same conference.

Yep, you’re right, but that’s where stick on url_param(room) comes in. It checks the http-bind room parameter and directs users with the same room to the same signaling server.
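
For example, the BOSH requests that Jitsi Meet sends look roughly like https://meet.example.com/http-bind?room=myroom (domain and room name are placeholders here), so url_param(room) resolves to the conference name and every participant of a given conference gets pinned to the same shard.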