I was able to successfully deploy Jitsi on my EC2 instance; all three components (JVB, Jicofo, and Prosody) are up and running without any errors.
However, when I try to place a conference call between two or three users, they are not able to hear or see each other.
Below are the jvb.log details for the call above:
Yes, I enabled UDP port 10000 in my security group and opened it on my EC2 instance as well:
root@jitsi:/usr/bin# sudo ufw status verbose
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
10000/udp                  ALLOW IN    Anywhere
22/tcp                     ALLOW IN    Anywhere
3478/udp                   ALLOW IN    Anywhere
5349/tcp                   ALLOW IN    Anywhere
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443/tcp (v6)               ALLOW IN    Anywhere (v6)
10000/udp (v6)             ALLOW IN    Anywhere (v6)
22/tcp (v6)                ALLOW IN    Anywhere (v6)
3478/udp (v6)              ALLOW IN    Anywhere (v6)
5349/tcp (v6)              ALLOW IN    Anywhere (v6)
One thing to note: I have a Network Load Balancer with a listener for TCP 443, and under that listener I registered the EC2 instance where Jitsi is running.
So the flow is as below:
Client → NLB [TCP(443)] → EC2 (jitsi) [All desired ports are open as mentioned in doc]
You need to make sure UDP packets on port 10000 reach your JVB. I don't have enough experience with NLB to advise you on that part.
Try without the NLB: assign a public address to your instance and allow port 10000/UDP in the security group.
I am trying to debug the above issue and am now deploying on an instance without the NLB.
There I am getting an error in nginx:
root@ip-XX.XX.XX.XX:~# nginx -t
nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
nginx: configuration file /etc/nginx/nginx.conf test failed
root@ip-XX.XX.XX.XX:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2023-04-25 15:23:08 UTC; 14min ago
Process: 18397 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Apr 25 15:23:08 ip-XX.XX.XX.XX systemd: Starting A high performance web server and a reverse proxy server...
Apr 25 15:23:08 ip-XX.XX.XX.XX nginx: nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
Apr 25 15:23:08 ip-XX.XX.XX.XX nginx: nginx: configuration file /etc/nginx/nginx.conf test failed
Apr 25 15:23:08 ip-XX.XX.XX.XX systemd: nginx.service: Control process exited, code=exited, status=1/FAILURE
Apr 25 15:23:08 ip-XX.XX.XX.XX systemd: nginx.service: Failed with result 'exit-code'.
Apr 25 15:23:08 ip-XX.XX.XX.XX systemd: Failed to start A high performance web server and a reverse proxy server.
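For reference, the usual fix for this error is to uncomment (or add) the `server_names_hash_bucket_size` directive inside the `http { }` block of `/etc/nginx/nginx.conf`; a long `server_name` (for example an EC2 public DNS name) commonly overflows the default bucket size:

```nginx
# /etc/nginx/nginx.conf — inside the existing http { } block
http {
    # Uncomment or add this line; raise to 128 if 64 is still not enough
    server_names_hash_bucket_size 64;
    # ... rest of the http block unchanged ...
}
```

Then run `nginx -t` again and restart the service once the test passes.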
I installed Jitsi on my public EC2 instance; I am able to access the application through its URL, and two or three users are able to join a conference call with working audio and video.
However, I would like to understand the role of UDP port 10000,
because when I telnet to my Jitsi app URL, I am able to connect on 443,
but telnet does not connect to 10000. Why?
UDP 10000 is the port where clients send media to the bridge, and from which they receive the media of the rest of the participants.
TCP port 443 is used to download the client and to connect to Prosody for the signalling.
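As for the telnet test: telnet speaks TCP only, so it can never "connect" to udp/10000 even when the bridge is healthy. UDP has no handshake; a datagram either arrives or is silently dropped. A minimal local sketch of that difference (loopback only, with a hypothetical payload):

```python
import socket

# A UDP "server" socket, bound to a port the OS picks for us.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# The "client" just fires a datagram — no connect(), no handshake,
# which is why TCP tools like telnet cannot probe a UDP port.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"media?", ("127.0.0.1", port))

data, addr = srv.recvfrom(1024)
print(data)  # b'media?'
srv.close()
cli.close()
```

To check the real bridge, verify locally that something is listening with `ss -lun | grep 10000` on the instance, and remember that an unanswered UDP probe from outside proves nothing by itself.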
Regarding the NLB: you cannot route media to the JVB through an NLB, because NLB preserves the client IP when forwarding UDP. The JVB will therefore send media back directly to the client (not through the NLB), and the source IP will not match what the client expects.
Thanks for replying; I was really looking for someone to discuss this with and guide me.
I did a quick setup in my AWS environment: an EC2 instance (with Jitsi installed) running in a public subnet, and a Network Load Balancer with two listeners (TCP 443 and UDP 10000) with my EC2 instance registered on it.
I tested the above setup and can confirm that it works for 3+ users.
Users are able to join the call and access audio/video without any issues so far.
Most likely, the JVB is advertising its public address and media is flowing directly to it. (This is how JVB works by default — if you wanted media to flow to the NLB you would have needed to manually configure a mapping in JVB to tell it to advertise one of the NLB’s addresses — and then you would have hit the problem with packets sourcing from the “wrong” address — the JVB’s public IP — in the other direction.)
You can likely just delete the udp/10000 listener on your NLB.
It doesn’t matter how the NLB is configured, you can’t use it for JVB.
You can use it for nginx and/or prosody if you want to (web traffic for serving the frontend and XMPP websocket/BOSH for the signalling), although an ALB really makes more sense in that context.
In brief: after client connects to Prosody over XMPP websocket/BOSH, Jicofo allocates channels for them on the JVB, and tells the client the IP address of the JVB. If you don’t override it manually, that will be the public IP address of the JVB (detected automatically by JVB on startup using AWS IMDS or STUN), so your NLB will be bypassed. If you do override it manually by configuring a NAT mapping on JVB to tell it that its public IP is the NLB, then Jicofo will signal JVB’s address as the NLB, and client will connect to the NLB, but JVB will send packets in the other direction from its own IP, and connectivity will be broken.
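For completeness, the manual override mentioned above is the ice4j NAT harvester mapping. In the classic `/etc/jitsi/videobridge/sip-communicator.properties` it looks like this (the addresses below are examples only):

```
# /etc/jitsi/videobridge/sip-communicator.properties
# Local (private) address of the instance, and the address JVB should
# advertise to clients instead of its auto-detected public IP.
# Pointing the public address at an NLB would reproduce exactly the
# breakage described above: return packets still source from JVB's own IP.
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.5
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.10
```

This is why leaving the auto-detected public IP in place (and opening udp/10000 straight to the instance) is the working configuration.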
I’m curious, why do you want to use an NLB in front of JVB?