Performance issues with Jitsi on CLIENTS (escalated by customers :/ )

As we are under considerable pressure from our large-scale enterprise customers, we would kindly ask if anyone could provide a solution or pointers regarding CLIENT-side performance improvements.

Changing video formats alone doesn't cut it: there are still significant issues with CPU consumption on mobile phones and desktop clients (even with just 3 participants), as well as scaling issues with larger groups (even if you force-deactivate most videos).

Are there ways to reduce client-side overhead, or is that overhead simply unavoidable? Some customers are pushing us to switch to Zoom or other providers, and we would rather not make such a change.


There are some tips at https://jitsi-club.gitlab.io/jitsi-self-hosting/en/, but no self-diagnostic routine for clients. It's usually quite hard to teach users how to diagnose a problem, because most just say "it doesn't work" and have no idea about network congestion, CPU consumption, browser configuration and so on.

Often it's enough to remind users of some common-sense solutions, like reducing the number of video feeds or their quality (don't share a 4K screen when other users are joining from a phone over GSM data! – true story), using a PC instead of a phone, etc.

Similar question here from a client, who measured 4 Mbit/s with Jitsi compared to 1 Mbit/s with Zoom or Skype. We are considering launching all conferences in audio-only mode.
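
For the audio-only idea, config.js already has a switch that comes close; a minimal sketch, assuming a Jitsi Meet version where the startAudioOnly option is available:

    // config.js snippet: start every conference in audio-only mode.
    // Participants can still enable their video manually unless that is
    // restricted elsewhere.
    var config = {
        startAudioOnly: true
    };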

But these things should be handled top-down - we can't expect users to do that sort of thing themselves; this doesn't scale to 5k+ companies.

Felix Häusler via Jitsi Community Forum - developers & users, 27/03/20 16:17:

But these things should be handled top-down - we can't expect users to do that sort of thing themselves; this doesn't scale to 5k+ companies.

Well, maybe, but everyone has different needs. A school may be happy to have 30 participants with only 1 or 2 at a time having video enabled, while a company meeting may actually need more people with audio and video enabled at the same time. And so on.

Federico

Of course - that's why we are currently checking whether we can configure this accordingly for our different customers. The main problem is that we are missing settings to force-optimise the clients (based on participant count, etc.).

What could help us in config.js:

Default:

  • Set everyone to medium quality for upload
  • Set everyone to medium quality for download
  • Block HD
  • 720p max format

If 1-on-1

  • go for HD
  • set everyone to HQ

If > 3

  • only show videos of the last 3 people
  • start the call muted
  • start the call without video

If > 20

  • set everyone to low quality and don't allow anything higher
  • set the max format to 480p, ideally 240p
  • only show the video of the person talking

Conditionals based on participant count, plus limitations enforced on the client side, would allow everyone to scale this – the manual client options will not be used by 98% of users and should be completely managed by us, imho…
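
Some of the "Default" and "> 3" items in the wishlist above can already be approximated with existing config.js options; a rough sketch, using option names as they appear in the config.js shipped with Jitsi Meet around this time (the per-participant-count conditionals themselves do not exist as built-in settings, and exact names may differ between versions):

    // Approximating parts of the wishlist with standard config.js options.
    var config = {
        // "720p max format": cap the resolution clients try to capture and send
        resolution: 720,
        constraints: {
            video: {
                height: { ideal: 480, max: 720, min: 240 }
            }
        },

        // "only show videos of the last 3 people": channelLastN makes the
        // bridge forward video from at most N (most recently active) senders
        channelLastN: 3,

        // "start the call muted" / "start the call without video"
        startWithAudioMuted: true,
        startWithVideoMuted: true
    };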

Adding some info – I work with Felix on the same customer.

  • we tested with our app (Electron-based), Firefox and Google Chrome, but focused on Google Chrome because it should have the best compatibility (e.g. simulcast)
  • the clients were using VP8; we confirmed this in chrome://webrtc-internals
  • all instances (Mac, Linux, Firefox) were using libvpx to decode, which is apparently software-only and not GPU-accelerated
  • it seems like encoding the user's own video takes a lot of CPU power, because of simulcast (3 video streams?)
  • sending a screencast takes much less CPU than camera video (easier to encode? no simulcast?)
  • moving the Google Chrome window from an external screen to the notebook screen reduced CPU load from 80-100% to 40-50% in a call with 5 videos at 480p

configuration options we tried:

  • capping video at 480p - that helped reduce the CPU load
  • preferH264 - our Android client didn't show video
  • enableLayerSuspension - seems to have no effect?
  • enableFirefoxSimulcast - crashes Jitsi Meet in the version we used, but that's not critical because getting Google Chrome to use less CPU would be good enough
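
For reference, this is roughly how those options look in config.js; a sketch based on the version we tested, since newer Jitsi Meet releases may have renamed or removed some of these flags:

    // config.js settings corresponding to the list above.
    var config = {
        constraints: {
            video: {
                height: { ideal: 480, max: 480, min: 240 }   // cap video at 480p
            }
        },
        preferH264: true,               // prefer H.264 over VP8 (broke video on our Android client)
        enableLayerSuspension: true,    // suspend unused simulcast layers (no visible effect for us)
        // enableFirefoxSimulcast: true // crashed Jitsi Meet in the version we used
    };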

It still seems crazy to me that it's up to the clients whether they broadcast HD to the others, even if that breaks the call for every participant. There must be a way to protect the call, especially with a technology that relies on connections between all participants.

The bridge (and the client itself) manage both sides of the link: a sender’s uplink and downlink.

If there’s loss on the uplink (client -> bridge), the browser will send lower quality. On the downlink side, the bridge is constantly estimating how much bandwidth a user has and it will not send more data than we think the link can handle. This is of course hard, so not always perfect, but it is constantly happening.

Some scenarios make this more difficult; for example we don’t do simulcast on Firefox, so the bridge has no choice but to forward the only available stream when it wants to send video.

We also recently discussed changing the ‘quality slider’ to not only control the quality of media being received, but also what’s being sent, so that may land sometime soon.

This is not due to simulcast, it’s just the nature of video encoding. The libvpx simulcast encoder has minimal overhead.

Boris

As most of our customers' devices (companies don't necessarily provide the best ones) are failing with Jitsi right now (overheating, 100% CPU usage, etc.), I understand from that response that we would have to fake packet loss on the server to fix the issue, because we cannot preset the clients' send & receive behaviour.

If Firefox can destroy the performance of video calls, why can't we preset Firefox clients to a low quality or low resolution and prevent them from sending anything else to the bridge? I understand the philosophy of the SFU, but then I need an option to adapt the clients according to their ability to handle X users in a call.

We also recently discussed changing the ‘quality slider’ to not only control the quality of media being received, but also what’s being sent, so that may land sometime soon.

This would still assume that people micromanage their video experience, when most VoIP users just expect their call to magically work (thanks to Zoom, etc.) – they won't touch a slider every call, they'll just report bugs like "can't use Jitsi" or complain to their boss. Why not have a client-side setting that lets us preset send and receive behaviour based on the user's device? Then we could optimise even for an old-school phone.
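
For what it's worth, something in that direction can already be approximated when Jitsi Meet is embedded through its external (iframe) API, which lets the embedding application pass per-client config.js overrides via configOverwrite. A minimal sketch, assuming a hypothetical isLowEndDevice() helper on our side and standard config.js option names:

    // Per-device presets via the Jitsi Meet external (iframe) API.
    // Requires external_api.js from the deployment to be loaded on the page.
    // isLowEndDevice() is a hypothetical helper our own app would provide.
    const lowEnd = isLowEndDevice();

    const api = new JitsiMeetExternalAPI('meet.example.com', {
        roomName: 'customer-room',
        parentNode: document.querySelector('#meet'),
        configOverwrite: {
            resolution: lowEnd ? 240 : 720,
            constraints: {
                video: { height: { ideal: lowEnd ? 240 : 480, max: lowEnd ? 240 : 720 } }
            },
            channelLastN: lowEnd ? 1 : 3,   // limit how many remote videos are received
            startWithVideoMuted: lowEnd     // weak devices join without sending video
        }
    });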