Jitsi Meet works only when participants are on the same network and Wi-Fi (Azure Kubernetes)

Hi Team,

We were able to test the system with two participants on the same network and Wi-Fi, and everything worked smoothly as expected: audio, video, screen sharing, etc.

However, we encountered an issue when participants tried to join from different networks. We learned from the Jitsi community that this occurs because Jitsi uses a P2P connection for two participants but requires a JVB connection for more than two. To enable the JVB connection, inbound traffic to port 10000/UDP needs to be allowed from the public network. We found several articles suggesting that we set up port forwarding for this port, but we are using Azure Kubernetes.

Also, I am not sure what the source and target ports that need to be forwarded for JVB would be in Azure Kubernetes.

We have deployed Jitsi on Azure Kubernetes by following the Self-Hosting Guide - Docker.

Below are a few of the error messages we are getting:

<s._conference.jvbJingleSession.terminate.reason>: session-terminate for ice restart - error: undefined

[modules/connectivity/IceFailedHandling.js] <ml._actOnIceFailed>: ICE failed, enableForcedReload: undefined, enableIceRestart: undefined, supports restart by terminate: true

2023-03-09T15:04:52.712Z [modules/connectivity/IceFailedHandling.js] <s._conference.jvbJingleSession.terminate.reason>: session-terminate for ice restart - error: undefined

[modules/connectivity/IceFailedHandling.js] <ml._actOnIceFailed>: Sending ICE failed - the connection did not recover, ICE state: disconnected, use ‘session-terminate’: true

Let me know if any further information is required for a better understanding. I am stuck on this issue; please, can anyone help or suggest something?

In AWS this is called a security group in their web dashboard, and there you allow (this is the forwarding) UDP port 10000; you should have done the same for port 443.
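In Azure Kubernetes the rough equivalent of that security-group rule is a Kubernetes Service of type LoadBalancer: AKS then creates the public load-balancer rule and the matching network security group rule for you, and the “source” and “target” ports are simply the Service’s port and targetPort. A minimal sketch, assuming the JVB pods carry an app: jvb label in a jitsi namespace (the names and labels are placeholders, not taken from this deployment):

apiVersion: v1
kind: Service
metadata:
  name: jvb-udp
  namespace: jitsi
spec:
  type: LoadBalancer
  selector:
    app: jvb            # must match the labels on the JVB pods
  ports:
    - name: jvb-udp
      protocol: UDP
      port: 10000       # port opened on the public load balancer ("source")
      targetPort: 10000 # port the JVB container listens on ("target")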

Thank you @damencho !!!

I think I understand the term forwarding now. It means allowing traffic to UDP/10000.

I had exposed jitsi/Web as TCP/443 and jitsi/JVB as UDP/10000 as per the documentation.

I am using an NGINX Ingress controller which routes traffic arriving at https://jitsi.mydomain.com:443 to the Jitsi web application. If the UDP service needs to be accessible from my public IP/NGINX, should I just allow traffic from my public IP to the JVB service, or does it need to be routed via some port?

You need it routed. This is NAT; it goes from one machine to another machine.
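If the UDP traffic should go through the same NGINX Ingress controller rather than through a dedicated LoadBalancer Service, ingress-nginx can forward raw UDP via its udp-services ConfigMap. A sketch, assuming the controller runs in the ingress-nginx namespace with --udp-services-configmap=ingress-nginx/udp-services and the JVB Service is named jvb-udp in the jitsi namespace; port 10000/UDP also has to be added to the controller’s own LoadBalancer Service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "10000": "jitsi/jvb-udp:10000"   # public UDP 10000 -> jvb-udp Service, port 10000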

Thanks @damencho,

I have allowed traffic to TCP/443, UDP/10000, and TCP/4443 from my LoadBalancer but am still getting errors. Three new errors have started to appear after applying this.

Below is the old one

<s._conference.jvbJingleSession.terminate.reason>: session-terminate for ice restart - error: undefined

New issues

JVB 2023-03-14 12:15:04.101 SEVERE: [55] [confId=98e9319b6ed95d2b conf_name=countlessenergiesbetscornfully@muc.jitsi.mydomain meeting_id=7a2fc1e7 epId=daa90a78 stats_id=Marvin-GmD] DtlsServer.accept#52: Error during DTLS connection: org.bouncycastle.tls.TlsTimeoutException: Handshake timed out
JVB 2023-03-14 12:15:04.101 SEVERE: [55] [confId=98e9319b6ed95d2b conf_name=countlessenergiesbetscornfully@muc.jitsi.mydomain meeting_id=7a2fc1e7 epId=daa90a78 stats_id=Marvin-GmD] DtlsTransport.startDtlsHandshake#110: Error during DTLS negotiation, closing this transport manager
org.bouncycastle.tls.TlsTimeoutException: Handshake timed out
at org.bouncycastle.tls.DTLSReliableHandshake.receiveMessage(Unknown Source)
at org.bouncycastle.tls.DTLSServerProtocol.serverHandshake(Unknown Source)
at org.bouncycastle.tls.DTLSServerProtocol.accept(Unknown Source)
at org.bouncycastle.tls.DTLSServerProtocol.accept(Unknown Source)
at org.jitsi.nlj.dtls.DtlsServer.accept(DtlsServer.kt:45)
at org.jitsi.nlj.dtls.DtlsServer.start(DtlsServer.kt:41)

ConnectivityCheckClient.processTimeout#881: timeout for pair: 10.60.0.113:10000/udp/host → 10.60.0.56:42571/udp/prflx (stream-2dfac7e0.RTP), failing

JVB 2023-03-13 15:33:35.328 INFO: [259] [confId=d07012a3d931c924 conf_name=radicalgrowthsfulfilthen@muc.jitsi.mydomain.cloud meeting_id=22017949 epId=f811bb55 stats_id=Percy-WPW local_ufrag=1d1ff1grdpk2g6 ufrag=1d1ff1grdpk2g6] ConnectivityCheckClient.processTimeout#881: timeout for pair: 20.166.XX.XX:10000/udp/srflx → 10.XX.0.XX:60797/udp/prflx (stream-f811bb55.RTP), failing.

JVB 2023-03-13 15:34:25.614 INFO: [493] [confId=d07012a3d931c924 conf_name=radicalgrowthsfulfilthen@muc.jitsi.mydomain.cloud meeting_id=22017949 epId=d58c14fa stats_id=Abbie-tVG local_ufrag=ejd4c1grdpkt5k ufrag=ejd4c1grdpkt5k name=stream-d58c14fa componentId=1] MergingDatagramSocket$SocketContainer.runInReaderThread#770: Failed to receive: java.net.SocketException: Socket closed

Can you please clarify one point of confusion I have? I have the domain https://mydomain.com:443 on the public side, which routes traffic to the jitsi/Web pod on 443. So whenever traffic comes in on 443, I route it to the jitsi/Web pod on 80/443 via the NGINX Ingress controller.
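For reference, the HTTPS side of that routing would look roughly like the Ingress below; the host, Service name, port, and TLS secret are placeholders for whatever the deployment actually uses:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jitsi-web
  namespace: jitsi
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jitsi.mydomain.com
      secretName: jitsi-web-tls
  rules:
    - host: jitsi.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jitsi-web   # the jitsi/Web Service
                port:
                  number: 80      # or 443 if the web container terminates TLS itself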

Please help; I am stuck here and not sure if I am missing any configuration or if it’s just a forwarding issue.

443 needs to reach the nginx and UDP port 10000 needs to reach JVB. 4443 is not used anymore.
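So in Kubernetes terms there are only two public entry points: TCP/443 into the ingress controller for the web UI and signalling, and UDP/10000 straight to JVB for media; the TCP/4443 fallback can be dropped. One more thing worth checking (an assumption on my side, not something confirmed in this thread): the bridge also has to advertise the public IP of that UDP load balancer in its ICE candidates, which the Docker images control through an environment variable (DOCKER_HOST_ADDRESS on older images, JVB_ADVERTISE_IPS on newer ones). A sketch of how that could look on the JVB Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvb
  namespace: jitsi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jvb
  template:
    metadata:
      labels:
        app: jvb
    spec:
      containers:
        - name: jvb
          image: jitsi/jvb        # pin to the same release as the rest of the stack
          ports:
            - containerPort: 10000
              protocol: UDP
          env:
            - name: JVB_PORT
              value: "10000"
            - name: JVB_ADVERTISE_IPS        # DOCKER_HOST_ADDRESS on older images
              value: "20.166.XX.XX"          # public IP of the UDP load balancer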

Hi @damencho ,

Your expertise and quick support helped us to resolve the problem efficiently and effectively.

Thank you so much, big thanks to you; I truly appreciate your help. :slight_smile:

Just one thing: I don’t see any errors, but the console is being flooded with the following logs, and they’re coming in rapidly, every fraction of a second. Is this normal behavior?

Nope, this is not normal… What else is there in the console before this starts? Can you upload the whole log file?

Hi @damencho

Please find attached console logs. Do let me know if any other logs or configuration is required.

Console Logs.txt (187.8 KB)

Yeah, this is not normal and indeed strange - I haven’t seen it before.
Do you see any errors in prosody? I see a ping timeout before that.

I had mistakenly set the ENABLE_COLIBRI_WEBSOCKET and ENABLE_XMPP_WEBSOCKET flags to 0. After updating them to 1, I no longer see the flooded logs.
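For anyone hitting the same thing: those two flags are part of the docker-jitsi-meet environment, and in a Kubernetes deployment they typically end up in a ConfigMap consumed by the web and jvb containers via envFrom. A sketch (the ConfigMap name is an assumption, not the actual one used here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: jitsi-common-env
  namespace: jitsi
data:
  ENABLE_COLIBRI_WEBSOCKET: "1"   # bridge (Colibri) channel over WebSocket
  ENABLE_XMPP_WEBSOCKET: "1"      # XMPP signalling over WebSocket instead of BOSH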