I’m running Jitsi on an Ubuntu 18.04 server, a Proxmox VM with 24 CPU cores available.
Since the latest update, Jicofo uses 100% of one CPU core, even with no conferences running.
This looks like a bug, but there are no errors in any of the logs.
Can anyone confirm this?
These are the package versions after the update this morning.
Here is an extract from the bottom of the Jicofo log after a restart:
Jicofo 2020-05-01 07:01:18.355 INFO:  org.jitsi.xmpp.component.ComponentBase.log() ping timeout: 5000 ms
Jicofo 2020-05-01 07:01:18.355 INFO:  org.jitsi.xmpp.component.ComponentBase.log() ping threshold: 3
Jicofo 2020-05-01 07:01:18.410 INFO:  org.jitsi.jicofo.health.Health.log() org.jitsi.jicofo.health.ENABLE_HEALTH_CHECKS is not set - the health checks will auto enable on the first health REST request
Jicofo 2020-05-01 07:01:18.412 INFO:  org.jitsi.jicofo.health.Health.log() Started with interval=10000, timeout=PT30S, maxDuration=PT20S, stickyFailures=false.
Jicofo 2020-05-01 07:01:23.455 INFO:  org.jitsi.jicofo.xmpp.BaseBrewery.processInstanceStatusChanged().330 Added brewery instance: email@example.com/fe3507b5-e67a-4a34-b069-3344c31059ea
Jicofo 2020-05-01 07:01:23.455 WARNING:  org.jitsi.jicofo.bridge.BridgeSelector.log() No pub-sub node mapped for firstname.lastname@example.org/fe3507b5-e67a-4a34-b069-3344c31059ea
Jicofo 2020-05-01 07:01:23.457 INFO:  org.jitsi.jicofo.bridge.Bridge.log() Setting max total packet rate of 50800.0
Jicofo 2020-05-01 07:01:23.457 INFO:  org.jitsi.jicofo.bridge.Bridge.log() Setting average participant packet rate of 500
Jicofo 2020-05-01 07:01:23.463 INFO:  org.jitsi.jicofo.bridge.BridgeSelector.log() Added new videobridge: Bridge[email@example.com/fe3507b5-e67a-4a34-b069-3344c31059ea, relayId=null, region=null, stress=0.00]
Jicofo 2020-05-01 07:01:23.465 INFO:  org.jitsi.jicofo.bridge.JvbDoctor.log() Scheduled health-check task for: firstname.lastname@example.org/fe3507b5-e67a-4a34-b069-3344c31059ea
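For anyone who wants to confirm which part of the JVM is busy, per-thread CPU usage can be listed with `ps`. This is a hypothetical check: the `pgrep` pattern assumes the process command line contains "jicofo", and the snippet falls back to the current shell's PID purely so it runs anywhere:

```shell
# List the top CPU-consuming threads of the Jicofo JVM.
# Assumption: the process command line matches "jicofo";
# fall back to the current shell's PID for demonstration.
PID=$(pgrep -f jicofo | head -n1)
PID=${PID:-$$}
ps -L -p "$PID" -o tid,pcpu,comm --sort=-pcpu | head -n 6
```

A spinning thread shows up with a pcpu value near 100; its TID (converted to hex) can then be matched against the `nid` fields of a `jstack` thread dump of the same process.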
I can confirm this on an Ubuntu 19.10 install, likewise after an update to Jicofo 1.0-566-1.
OK, this must be a bug.
The question is whether it will have any consequences for running conferences.
I can confirm this on a bare-metal install (no VM) on Ubuntu 18.04. Filtering jicofo.log for lines containing SEVERE since the update time, I get the following:
Jicofo 2020-05-01 08:28:41.006 SEVERE:  org.jitsi.impl.protocol.xmpp.XmppProtocolProvider.connectionClosedOnError().647 XMPP connection closed on error: system-shutdown You can read more about the meaning of this stream error at http://xmpp.org/rfcs/rfc6120.html#streams-error-conditions
Jicofo 2020-05-01 08:28:41.025 SEVERE:  org.jitsi.impl.protocol.xmpp.XmppProtocolProvider.sendStanza().732 No connection - unable to send packet: <presence email@example.com/focus' id='RPk2A-293349'><x xmlns='http://jabber.org/protocol/muc'></x><etherpad xmlns='http://jitsi.org/jitmeet/etherpad'>a6a706bfe2cb4e73a68b36610c30128f</etherpad><versions xmlns='http://jitsi.org/jitmeet'><component xmlns='http://jitsi.org/jitmeet' name='xmpp'>Prosody(0.10.0,Linux)</component><component xmlns='http://jitsi.org/jitmeet' name='focus'>JiCoFo(1.0.549,Linux)</component></versions><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org' ver='n+eoWkt+V9Kbk4H9z2I7uDWU+68='/><conference-properties xmlns='http://jitsi.org/protocol/focus'><property xmlns='http://jitsi.org/protocol/focus' key='created-ms' value='1588314511520'/><property xmlns='http://jitsi.org/protocol/focus' key='octo-enabled' value='false'/><property xmlns='http://jitsi.org/protocol/focus' key='bridge-count' value='0'/></conference-properties></presence>
Jicofo 2020-05-01 08:28:41.511 SEVERE:  org.jitsi.jicofo.health.Health.log() No MUC service found on XMPP domain or Jicofo has not finished initial components discovery yet
Jicofo 2020-05-01 08:28:41.512 SEVERE:  org.jitsi.jicofo.health.Health.log() Health check failed in PT0.001S:
Jicofo 2020-05-01 08:28:42.190 SEVERE:  org.jivesoftware.whack.ExternalComponentManager.error()
Jicofo 2020-05-01 08:28:47.243 SEVERE:  org.jitsi.xmpp.component.ComponentBase.log() Ping timeout for ID: RPk2A-293386
Jicofo 2020-05-01 08:28:54.777 SEVERE:  org.jitsi.jicofo.xmpp.BaseBrewery.start().191 Failed to create room: JvbBrewery@internal.auth.floridsdorf.mittenin.at
Jicofo 2020-05-01 08:28:54.840 SEVERE:  org.jitsi.impl.protocol.xmpp.OpSetSimpleCapsImpl.getFeatures().144 Failed to discover features for jitsi-videobridge.floridsdorf.mittenin.at: XMPP error reply received from jitsi-videobridge.floridsdorf.mittenin.at: XMPPError: service-unavailable - wait
Jicofo 2020-05-01 08:28:54.844 SEVERE:  org.jitsi.impl.protocol.xmpp.OpSetSimpleCapsImpl.getFeatures().144 Failed to discover features for focus.floridsdorf.mittenin.at: XMPP error reply received from focus.floridsdorf.mittenin.at: XMPPError: service-unavailable - wait
Jicofo 2020-05-01 08:51:45.998 SEVERE:  org.jitsi.jicofo.xmpp.BaseBrewery.start().191 Failed to create room: JvbBrewery@internal.auth.floridsdorf.mittenin.at
Jicofo 2020-05-01 08:51:46.060 SEVERE:  org.jitsi.impl.protocol.xmpp.OpSetSimpleCapsImpl.getFeatures().144 Failed to discover features for jitsi-videobridge.floridsdorf.mittenin.at: XMPP error reply received from jitsi-videobridge.floridsdorf.mittenin.at: XMPPError: service-unavailable - wait
Jicofo 2020-05-01 08:51:46.063 SEVERE:  org.jitsi.impl.protocol.xmpp.OpSetSimpleCapsImpl.getFeatures().144 Failed to discover features for focus.floridsdorf.mittenin.at: XMPP error reply received from focus.floridsdorf.mittenin.at: XMPPError: service-unavailable - wait
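For reference, that kind of filtering can be done with a one-liner on the log's level and timestamp fields. The sample log below is fabricated for illustration, and the 08:00 cut-off stands in for the actual update time:

```shell
# Build a tiny sample log (illustrative lines only, not real entries).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jicofo 2020-05-01 07:01:18.355 INFO: startup message
Jicofo 2020-05-01 08:28:41.006 SEVERE: XMPP connection closed on error
Jicofo 2020-05-01 08:28:47.243 SEVERE: Ping timeout for ID: XXXX
EOF
# Keep only SEVERE entries logged at or after 08:00
# (field 3 is the timestamp, field 4 the log level).
awk '$4 == "SEVERE:" && $3 >= "08:00"' "$LOG"
rm -f "$LOG"
```

On the sample data this prints only the two SEVERE lines; pointing it at the real /var/log/jitsi/jicofo.log gives the filtered view quoted above.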
A single conference runs fine; video, audio and screen sharing work as usual.
Same here: one conference with three participants seems to run OK.
BUT I don’t get any SEVERE entries in jicofo.log, only normal INFO ones.
Still, the developers should look into this, I guess.
And one more report of the same issue. Running on Ubuntu 18.04 here.
Downgrading jicofo from 1.0-566 to 1.0-549 fixes the loop issue.
Thanks, this hint helped a lot, although I had to downgrade all my Jitsi Debian packages:
apt-get install jicofo=1.0-549-1 jitsi-meet=2.0.4468-1 jitsi-videobridge2=2.1-183-gdbddd169-1 jitsi-meet-web=1.0.4025-1 jitsi-meet-web-config=1.0.4025-1 jitsi-meet-prosody=1.0.4025-1 jitsi-meet-turnserver=1.0.4025-1
Downloading “jicofo_1.0-549-1_all.deb” and running “dpkg -i jicofo_1.0-549-1_all.deb” does the trick without downgrading the other packages, at least on my installation.
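To keep a routine `apt upgrade` from pulling jicofo back up before a fixed build is out, a version pin can hold the downgraded package in place. This is standard `apt_preferences` syntax; the file name is my own choice, and the pin should be deleted once the fix ships:

```
# /etc/apt/preferences.d/pin-jicofo  (hypothetical file name)
Package: jicofo
Pin: version 1.0-549-1
Pin-Priority: 1001
```

A priority above 1000 tells apt to keep (or even downgrade to) that exact version. Alternatively, `apt-mark hold jicofo` freezes whatever version is currently installed.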
Right, so forcing the downgrade of just the jicofo package to 1.0-549-1 worked for you.
But there must have been a reason for that package to be upgraded, one would think?
And “mistersixt” suggests above that a loop issue causes the problem.
It would be better to fix the problem, if possible. And since it doesn’t seem to break anything, I’m not quite sure what to choose here: downgrade, or leave it as it is?
Of course a fix is needed; hopefully the developers will find and fix the bug (and publish an updated package) soon. In the meantime it is up to each user whether to stick with the “new” version, downgrade to the previous release, or downgrade just the “jicofo” package.
Thanks for the report. We have reproduced the issue, and it will be fixed in the next build, most likely today.
Is there a discussion here for alpha/beta testing (and reporting issues like this)? I would like to test before upgrading the stable version on the production environment, and submit a bug report if there is one.
I can report that the Jicofo issue is resolved on my system.
Thanks to the developers.
I am experiencing exactly the same issue now. What is the command I can use to downgrade? I am not that familiar with Linux. Thank you.
How did you fix the issue? Install the Jitsi Meet again?
Never mind, the issue is fixed after updating Jicofo to 1.0-567.
The issue has been fixed in the meantime; just upgrade to the latest version available.
Yes, that’s what I meant by thanking the developers.
I am running Jitsi on a VPS whose provider forbids 100% CPU load for longer than an hour. That was the reason for me to downgrade.
Anyway, the recent version fixes the issue, thanks!