Jitsi Meet cannot connect and keeps reconnecting after starting a meeting

Hi, I installed jitsi-meet on minikube using Helm. I was able to expose the JVB IP with MetalLB, but my Jitsi Meet keeps reconnecting after starting a meeting. The browser console didn't have anything useful. Can anyone help me?
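(For context, an install like this would typically come from something along these lines; the chart repo URL is an assumption, jitsi-contrib being the commonly used community chart, and the release name myjitsi is taken from the service names that appear later in the thread:)

helm repo add jitsi https://jitsi-contrib.github.io/jitsi-helm/
helm install myjitsi jitsi/jitsi-meet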

Welcome to the community.

You’re saying you don’t see errors in the browser console?

No, it shows errors, but none of them look useful to me; maybe I lack the knowledge. Here are some outputs.
Sorry, I cannot upload screenshots.

E0606 19:49:14.880137   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:49:15.051847   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:49:15.298496   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
Handling connection for 8080
E0606 19:49:20.906316   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:49:28.216428   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
Handling connection for 8080
Handling connection for 8080
E0606 19:49:28.352666   18132 portforward.go:391] error copying from local connection to remote stream: read tcp6 [::1]:8080->[::1]:53032: wsarecv: An existing connection was forcibly closed by the remote host.
E0606 19:49:28.355561   18132 portforward.go:378] error copying from remote stream to local connection: readfrom tcp6 [::1]:8080->[::1]:53032: write tcp6 [::1]:8080->[::1]:53032: wsasend: An existing connection was forcibly closed by the remote host.
E0606 19:49:30.115190   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:49:30.148788   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
E0606 19:49:50.016458   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
E0606 19:49:58.361208   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
E0606 19:50:28.409082   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
Handling connection for 8080
E0606 19:50:53.312644   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:50:54.696141   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
E0606 19:50:55.429651   18132 portforward.go:346] error creating error stream for port 8080 -> 80: Timeout occurred
E0606 19:50:55.478472   18132 portforward.go:368] error creating forwarding stream for port 8080 -> 80: Timeout occurred
strophe.umd.js:5463 WebSocket connection to 'wss://localhost:8443/xmpp-websocket?room=secureandhigh123' failed:
_connect	@	strophe.umd.js:5463
connect	@	strophe.umd.js:2368
_interceptConnectArgs	@	strophe.stream-management.js:228
connect	@	XmppConnection.js:264
_connect	@	xmpp.js:536
connect	@	xmpp.js:638
id.connect	@	JitsiConnection.js:61
OH	@	connection.js:52
(anonymous)	@	connection.js:196
RH	@	connection.js:121
zH	@	connection.js:226
jte	@	conference.js:213
init	@	conference.js:815
Logger.js:154 2022-06-06T13:26:26.997Z [features/base/tracks] Failed to create local tracks (2) ['audio', 'video'] DOMException: Could not start video source
GET http://localhost:8080/sounds/reactions-thumbs-up.mp3 net::ERR_CONNECTION_RESET

Here is a link to the screenshots; there are some errors I couldn't capture, but they are similar:

https://drive.google.com/drive/folders/1RNActh9CmBnRqrmcZyQEGs9RUqM5wkYW?usp=sharing
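(The "Handling connection for 8080" and "error creating ... stream for port 8080 -> 80" lines above are kubectl port-forward output rather than browser output; a forward along these lines, with the web service name taken from later in the thread, would produce them:)

# Likely the forward behind the log above: local port 8080 to the web service's port 80
kubectl port-forward svc/myjitsi-jitsi-meet-web 8080:80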

WebSocket is not working in your setup, and Jicofo seems to be struggling to connect. Is everything hosted on the same server? Do you have anything else running on the same server besides the Jitsi components? What ports have you exposed?

There is no Jicofo. I used the default Helm chart; it is disabled in the values.yaml, so I never thought about enabling it. Here is my kubectl get svc output:

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                        AGE
kubernetes               ClusterIP      10.96.0.1        <none>          443/TCP                                        9h
myjitsi-jitsi-meet-jvb   LoadBalancer   10.110.53.214    192.168.49.20   30000:30000/UDP                                8h
myjitsi-jitsi-meet-web   ClusterIP      10.110.38.229    <none>          80/TCP                                         8h
myjitsi-prosody          ClusterIP      10.101.115.218   <none>          5280/TCP,5281/TCP,5347/TCP,5222/TCP,5269/TCP   8h

I was running on minikube, so to expose the LoadBalancer I used MetalLB.
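(For reference, a minimal MetalLB layer-2 pool on minikube could look like the sketch below; the ConfigMap format assumes a pre-0.13 MetalLB, and the address range is a guess based on the 192.168.49.20 EXTERNAL-IP shown above:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.20-192.168.49.40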

Sorry, it is not disabled, but it was not deployed.

I’m not familiar with Kubernetes, but you absolutely need Jicofo. So, you need to deploy it.

Sorry again, it did get deployed, but there is no service for it. Really sorry.

If Jicofo is not working, you won’t be able to host a Jitsi conference successfully. If you can find out why it’s not running and get it to start, you’ll be closer to solving your issue.
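(In a Kubernetes setup, "finding out why it's not running" usually means checking pod status, events, and logs; a hedged sketch, with the pod name left as a placeholder:)

kubectl get pods                            # is the Jicofo pod Running and Ready?
kubectl describe pod <jicofo-pod-name>      # scheduling and probe events
kubectl logs <jicofo-pod-name> --previous   # log of the last crashed container, often the real error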

Thanks, I was able to expose Jicofo as a service, but the issue is still not resolved.

What do you get when you run the following command:

sudo systemctl status jicofo

I am using Windows. The output of kubectl describe pod for the Jicofo pod is:

 Type     Reason          Age                     From               Message
  ----     ------          ----                    ----               -------
  Normal   Scheduled       9h                      default-scheduler  Successfully assigned default/myjitsi-jitsi-meet-jicofo-97566d96d-sssm2 to minikube
  Normal   Pulled          9h                      kubelet            Container image "jitsi/jicofo:stable-6865" already present on machine
  Normal   Created         9h                      kubelet            Created container jitsi-meet
  Normal   Started         9h                      kubelet            Started container jitsi-meet
  Warning  Unhealthy       9h (x4 over 9h)         kubelet            Readiness probe failed: dial tcp 172.17.0.5:8888: connect: connection refused
  Warning  Unhealthy       9h                      kubelet            Liveness probe failed: dial tcp 172.17.0.5:8888: connect: connection refused
  Normal   SandboxChanged  3h27m                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Killing         3h26m                   kubelet            Container jitsi-meet failed liveness probe, will be restarted
  Normal   Pulled          3h26m (x2 over 3h27m)   kubelet            Container image "jitsi/jicofo:stable-6865" already present on machine
  Normal   Created         3h26m (x2 over 3h27m)   kubelet            Created container jitsi-meet
  Normal   Started         3h26m (x2 over 3h27m)   kubelet            Started container jitsi-meet
  Warning  Unhealthy       3h26m (x12 over 3h27m)  kubelet            Readiness probe failed: dial tcp 172.17.0.8:8888: connect: connection refused
  Warning  Unhealthy       3h26m (x5 over 3h27m)   kubelet            Liveness probe failed: dial tcp 172.17.0.8:8888: connect: connection refused
  Normal   SandboxChanged  94m                     kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Killing         93m                     kubelet            Container jitsi-meet failed liveness probe, will be restarted
  Normal   Pulled          93m (x2 over 93m)       kubelet            Container image "jitsi/jicofo:stable-6865" already present on machine
  Normal   Created         93m (x2 over 93m)       kubelet            Created container jitsi-meet
  Normal   Started         93m (x2 over 93m)       kubelet            Started container jitsi-meet
  Warning  Unhealthy       93m (x5 over 93m)       kubelet            Liveness probe failed: dial tcp 172.17.0.7:8888: connect: connection refused
  Warning  Unhealthy       93m (x12 over 93m)      kubelet            Readiness probe failed: dial tcp 172.17.0.7:8888: connect: connection refused
  Warning  BackOff         89m (x13 over 91m)      kubelet            Back-off restarting failed container
  Normal   SandboxChanged  53m                     kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Killing         52m                     kubelet            Container jitsi-meet failed liveness probe, will be restarted
  Normal   Created         52m (x2 over 53m)       kubelet            Created container jitsi-meet
  Normal   Pulled          52m (x2 over 53m)       kubelet            Container image "jitsi/jicofo:stable-6865" already present on machine
  Normal   Started         52m (x2 over 53m)       kubelet            Started container jitsi-meet
  Warning  Unhealthy       52m (x10 over 53m)      kubelet            Readiness probe failed: dial tcp 172.17.0.8:8888: connect: connection refused
  Warning  Unhealthy       52m (x5 over 53m)       kubelet            Liveness probe failed: dial tcp 172.17.0.8:8888: connect: connection refused
  Normal   SandboxChanged  2m16s                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          2m11s                   kubelet            Container image "jitsi/jicofo:stable-6865" already present on machine
  Normal   Created         2m10s                   kubelet            Created container jitsi-meet
  Normal   Started         2m9s                    kubelet            Started container jitsi-meet
  Warning  Unhealthy       112s (x5 over 2m7s)     kubelet            Readiness probe failed: dial tcp 172.17.0.7:8888: connect: connection refused
  Warning  Unhealthy       112s (x2 over 2m2s)     kubelet            Liveness probe failed: dial tcp 172.17.0.7:8888: connect: connection refused

My values.yaml is:

jicofo:
  replicaCount: 1
  image:
    repository: jitsi/jicofo

  xmpp:
    user: focus
    password:
    componentSecret:

  livenessProbe:
    tcpSocket:
      port: 8888
  readinessProbe:
    tcpSocket:
      port: 8888

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}
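(Two things stand out here: the probes hit port 8888, which is Jicofo's REST health endpoint, so the probe failures above mean Jicofo never gets far enough to open it; and xmpp.password / componentSecret are empty, so unless the chart autogenerates them, Jicofo may be failing to authenticate against Prosody. A hedged fix would be to set them explicitly; the --set paths mirror the values.yaml keys above, and the chart reference repeats the earlier assumption:)

helm upgrade myjitsi jitsi/jitsi-meet \
  --set jicofo.xmpp.password=<choose-a-secret> \
  --set jicofo.xmpp.componentSecret=<choose-a-secret>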

What do you mean you're using Windows? You mean you're deploying Jitsi in a Windows environment?

Yes, with a Docker-based Kubernetes environment.

OK, so the Docker pods are running a Linux environment. What is the problem with running the command I gave earlier?

Isn't that the same as kubectl describe pod <pod-name>? I am also new to Kubernetes, so currently I cannot use systemctl inside the pod.
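(For what it's worth, kubectl describe pod shows scheduling and probe events, but the closer equivalent of systemctl status and the journal is the container log, since there is no systemd inside the container; with the pod name from the describe output above:)

# Container log, the analogue of the systemd journal
kubectl logs myjitsi-jitsi-meet-jicofo-97566d96d-sssm2
# After a liveness-probe restart, the previous container's log usually holds the actual error
kubectl logs myjitsi-jitsi-meet-jicofo-97566d96d-sssm2 --previous
# A shell inside the container, if needed
kubectl exec -it myjitsi-jitsi-meet-jicofo-97566d96d-sssm2 -- /bin/bash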