High CPU usage by JVB in Docker server

I have a server with 6 cores at 2.2 GHz and 16 GB RAM.
There are other containers running on it too, but they are not using much. I was testing from local Chrome tabs, and the CPU usage in the screenshot below (60-65%) was with only 4 users in 1 conference! :expressionless: When 6-7 people join, it climbs to roughly 90-150% and starts crashing! :slight_smile:

I am using the config below, with simulcast and layer suppression enabled:

    resolution: 360,
    startBitrate: "350",
    constraints: {
        video: {
            aspectRatio: 16 / 9,
            height: {
                ideal: 360,
                max: 540,
                min: 270
            },
            width: {
                ideal: 640,
                max: 960,
                min: 480
            }
        }
    },

CPU usage screenshot:

My heart skipped a few beats when I saw this while inspecting the crashes… I have read some posts, will try those methods, and will report back. I am not running the latest docker version, could that be causing this, or is there something fishy I am missing? :disappointed_relieved: Please help! :broken_heart:
@saghul @damencho @xranby


I did the things below, but the behaviour is the same :slight_smile:

  1. added "disableAudioLevels=true" in config.js
  2. added "Disable_Video_Background=true" in interface_config.js
  3. added the lines below in jvb/logging.properties

     java.util.logging.FileHandler.level = OFF
     .level=WARNING
     org.jitsi.impl.neomedia.MediaStreamImpl.level=WARNING
     org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
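
(From what I have read, these options live in different files and each file uses its own syntax; the sketch below is just my understanding of where they normally go in the docker setup, so correct me if I have put them in the wrong place:)

     // config.js
     disableAudioLevels: true,

     // interface_config.js
     DISABLE_VIDEO_BACKGROUND: true,

     # jvb/sip-communicator.properties (a bridge property rather than a logging one)
     org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true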

I understand these are meant to improve performance, but I am still seeing the same behaviour as before, which suggests the problem is somewhere else.
The JVB container's CPU usage spikes every time a new user joins (sometimes beyond 100%), and by the time there are 6-7 users it starts crashing at around 100% CPU usage (spiking constantly).


I am at a loss now. Can you tell me where I should look?
Was this an issue in a previous jitsi docker version that has since been fixed? I just tried the latest docker version locally and saw the same thing: the JVB container's CPU usage goes beyond the limit within a short time (my PC has 16 GB RAM and a Core i7 at 4.2 GHz). @saghul please help!
:sob:
Thanks in advance :heart: @xranby

Can you try putting this line into jvb/sip-communicator.properties? I think this is how I fixed a similar issue in the past. Not sure if that's still applicable to the latest docker release, but it may be worth a try.

@plokta Thanks for your suggestion… but I saw that this value is already set to true in jvb/sip-communicator.properties :slight_smile:
So what am I missing, given that I am also facing the same issue with the updated docker version locally? Is there any way I can fix this?
With a server with 6 cores at 2.2 GHz and 16 GB RAM, I should at least be able to run more than one concurrent conference with more than 10 people, yet I am stuck with 1 conference of 5-6 people! This is hilarious :sob:

Also look at the network consumption: 1 GB+ in and 2.5 GB+ out for only 5-6 people, even though I have simulcast and layer suppression enabled! Can you tell me anything in this regard, @saghul, please? (Edit: I have since learned this is the total amount transferred; I would just like to know a method for seeing the realtime in/out rate.)
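
To clarify what I mean: the figures above are cumulative totals, whereas I am looking for a live rate, something like the sketch below (generic Linux tools, not Jitsi-specific; the container name and interface are just placeholders from my setup):

     # cumulative totals per container (what I quoted above)
     docker stats jitsi_jvb_1

     # live in/out rate on the host interface (what I am actually after)
     nload eth0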

Please help if you have any suggestions, or if you have already faced and solved this kind of issue. Thanks in advance :heart:

@Fuji Check whether the host system is, for some reason, excessively swapping memory to and from disk, as that may cause unnecessary CPU use.
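
For example (plain Linux tools, nothing Jitsi-specific), watch the si/so columns; anything consistently non-zero means the host is swapping:

     # memory/swap activity, sampled every second for 5 samples
     vmstat 1 5

     # overall memory and swap usage
     free -h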

It is unclear what is causing the high CPU usage on your system.
If you can produce a flame graph, we may get visual clues.
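
A rough sketch of the generic perf + FlameGraph route, run on the host against the JVB's java process (<jvb-java-pid> is a placeholder; JVM-specific profilers such as async-profiler usually give more readable Java stacks):

     # sample the JVB java process at 99 Hz for 60 seconds
     perf record -F 99 -g -p <jvb-java-pid> -- sleep 60
     # fold the stacks and render an SVG using Brendan Gregg's FlameGraph scripts
     perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > jvb-flamegraph.svg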

To lower network usage further, you may want to use the lastN configuration so that the bridge only forwards video from the last N active speakers.
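
A minimal sketch of that in the web container's config.js (channelLastN is the option name in the default config; adjust the number and check that it exists in your version):

     // forward video from at most the 4 most recent active speakers to each receiver
     channelLastN: 4,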

@xranby Thanks for your suggestion.
But I tried not only on our server but also on my local PC, and in both cases I saw high CPU usage with a low number of participants. That is why I am wondering whether the problem is on my side or in docker. Could you please deploy the docker server in your environment and give feedback?
I will read up on flame graphs and report back with mine.