Users can't connect until Jicofo is restarted

Hey all,

We have an Ubuntu machine running Jitsi Meet with a persistent issue: after some uptime, guests can no longer join a room. The creator of the meeting has no problems, but other users are dropped immediately after joining.

Jicofo.log is showing:

Jicofo 2019-11-17 21:19:22.707 SEVERE: [37] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Can not invite participant -- no bridge available.
Jicofo 2019-11-17 21:19:46.170 SEVERE: [37] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Can not invite participant -- no bridge available.
Jicofo 2019-11-17 21:19:46.170 SEVERE: [37] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Can not invite participant -- no bridge available.
Jicofo 2019-11-17 21:20:54.309 SEVERE: [17] org.jitsi.protocol.xmpp.AbstractOperationSetJingle.wasInviteAccepted().243 Timeout waiting for RESULT response to 'session-initiate' request from [redacted by techy]

Restarting Jicofo fixes the problem, but it recurs consistently, often several times a day. There is nothing obvious in jvb.log.

Any help is appreciated.

Check the jicofo logs; there is a log line when a jvb is removed from the bucket of jvb instances. Are jicofo and jvb on the same machine? How much RAM does that machine have?

Jvb and Jicofo are on the same machine, which has 16 GB of RAM used almost exclusively for Jitsi Meet (we have barely used 2 GB at any given point).

I’m not sure what you mean by jvb being removed from the bucket of jvb instances. Below is a clearer sequence of errors leading up to the bridge failure:

[details="Summary"]
> Jicofo 2019-11-17 21:21:03.560 INFO: [37] org.jitsi.protocol.xmpp.AbstractOperationSetJingle.sendAddSourceIQ().478 Notify add SSRC room@conference.redactedDomain. com/9c90ddc7 SID: e1vt7ohojt0vg Sources{ audio: [ssrc=206601869 ] video: [ssrc=891512181 ssrc=1788256200 ssrc=1828398085 ssrc=332724783 ssrc=3169747847 ssrc=3295119062 ] }@67868084 source_Groups{ video:[ SourceGroup(FID)[ ssrc=891512181 ssrc=1788256200 ]SourceGroup(FID)[ ssrc=1828398085 ssrc=3169747847 ]SourceGroup(FID)[ ssrc=332724783 ssrc=3295119062 ]SourceGroup(SIM)[ ssrc=891512181 ssrc=1828398085 ssrc=332724783 ] ] }@1088718214

> Jicofo 2019-11-17 21:21:03.949 INFO: [37] org.jitsi.jicofo.LipSyncHack.log() Not merging A/V streams from room@conference.redactedDomain. com/9c90ddc7 to room@conference.redactedDomain. com/a3eec5c4

> Jicofo 2019-11-17 21:21:03.950 INFO: [37] org.jitsi.protocol.xmpp.AbstractOperationSetJingle.sendAddSourceIQ().478 Notify add SSRC room@conference.redactedDomain. com/a3eec5c4 SID: 6b5pvu5a42294 Sources{ video: [ssrc=2932741380 ] audio: [ssrc=3139376683 ] }@418219459 source_Groups{ }@981640993

> Jicofo 2019-11-18 01:19:00.879 WARNING: [81] org.jitsi.jicofo.JvbDoctor.log() Health check failed on: jitsi-videobridge.redactedDomain. com error: <error type='cancel'><internal-server-error xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/><text xmlns='urn:ietf:params:xml:ns:xmpp-stanzas' xml:lang='en'>java.io.IOException: Failed to bind even a single host candidate for component:Component id=1 parent stream=stream

> no local candidates.

> no remote candidates. preferredPort=16707 minPort=16707 maxPort=16807 foundAtLeastOneUsableInterface=false foundAtLeastOneUsableAddress=false</text></error>

> Jicofo 2019-11-18 01:19:00.880 INFO: [39] org.jitsi.jicofo.BridgeSelector.log() Removing JVB: jitsi-videobridge.redactedDomain. com
> Jicofo 2019-11-18 01:19:00.881 SEVERE: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() One of our bridges failed: jitsi-videobridge.redactedDomain. com

> Jicofo 2019-11-18 01:19:00.882 INFO: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=ff57de octo_enabled= false: [[null, null]]

> Jicofo 2019-11-18 01:19:00.882 INFO: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Expiring channels for: room@conference.redactedDomain. com/9c90ddc7 on: Bridge[jid=jitsi-videobridge.redactedDomain. com, relayId=null, region=null]

> Jicofo 2019-11-18 01:19:00.883 INFO: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=ff57de octo_enabled= false: [[null]]

> Jicofo 2019-11-18 01:19:00.883 INFO: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Expiring channels for: room@conference.redactedDomain. com/a3eec5c4 on: Bridge[jid=jitsi-videobridge.redactedDomain. com, relayId=null, region=null]

> Jicofo 2019-11-18 01:19:00.884 SEVERE: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Can not invite participant -- no bridge available.

> Jicofo 2019-11-18 01:19:00.885 SEVERE: [39] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Can not invite participant -- no bridge available.

> Jicofo 2019-11-18 01:19:00.885 INFO: [39] org.jitsi.jicofo.JvbDoctor.log() Stopping health-check task for: jitsi-videobridge.redactedDomain. com

> Jicofo 2019-11-18 01:19:00.886 WARNING: [39] org.jitsi.jicofo.BridgeSelector.log() Unable to handle bridge event for: jitsi-videobridge.redactedDomain. com

> Jicofo 2019-11-18 01:19:00.886 WARNING: [39] org.jitsi.jicofo.BridgeSelector.log() Unable to handle bridge event for: jitsi-videobridge.redactedDomain. com
[/details]

This previous error appears representative of what we see around the time problems occur.

The bridge cannot bind to UDP port 10000.
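A quick way to check whether that port is actually bindable on the host is a small socket test. This is just an illustrative sketch, not part of Jitsi; the function name is made up:

```python
import socket

def can_bind_udp(port, host="0.0.0.0"):
    """Return True if a UDP socket can be bound to host:port right now."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        sock.close()
        return True
    except OSError:
        # e.g. EADDRINUSE (something else holds the port) or EADDRNOTAVAIL
        return False

if __name__ == "__main__":
    # 10000/udp is the videobridge's default media port
    print("UDP 10000 bindable:", can_bind_udp(10000))
```

If this returns False while the bridge is stopped, something else on the machine is holding the port (or the interface/address configuration changed, which would match the "foundAtLeastOneUsableInterface=false" in the log above).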

Here you can see jicofo removing the bridge from the bucket of healthy bridges.

Is there any specific solution now?

I.frank, we have had two servers with this kind of issue.

The first initiated this post; as a workaround we set up a crontab entry to restart Jicofo at regular intervals of 24 hours. That appeared to work around the issue in that case, but when we had a second Jitsi installation it did not suffice.
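For reference, the nightly-restart workaround was just a root crontab entry along these lines (the time and restart command are from our setup, not anything Jitsi-specific; adjust to yours):

```
# restart Jicofo every day at 04:00
0 4 * * * /bin/systemctl restart jicofo
```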

The second installation, which receives significantly more traffic, seemed to stop having this issue after we stopped health checks, since other similar problems on the forum were addressed with that fix. To clarify, our second install runs both the cron-job restart and disabled health checks, which seems to have 'addressed' the bug, while the first seemed to need only the cron job.
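Stopping the health checks was a one-line change in /etc/jitsi/jicofo/sip-communicator.properties, followed by a Jicofo restart. This is the property name as we used it on our Jicofo version; verify it against your own install:

```
# Interval of Jicofo's bridge health checks in ms; -1 disables them
org.jitsi.jicofo.HEALTH_CHECK_INTERVAL=-1
```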

We will be reinstalling the first in its entirety soon and may find some new information, but we cannot give specific reasons for why the two installations behaved so differently.

Thank you for your help.
I will try it. :slight_smile: