How to check what causes a CPU spike?

During our crowd testing, which ran for 2 hours, we only used top and VisualVM to monitor the 100% CPU spike. Is there any other tool we can use to identify which process/function causes the CPU spike?

You could try ps aux in addition to top. Also, there is more help on the web.
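
To narrow it down to the specific thread inside the process, one common approach is to list per-thread CPU usage and then match the hottest thread against a stack dump. A minimal sketch, assuming <pid> is your JVB process ID and the JDK tools are on the path:

# Show per-thread CPU usage for the JVB process
top -H -p <pid>

# jstack reports native thread IDs (nid) in hex, so convert the hot TID
printf '%x\n' <tid>

# Dump all stacks and find the thread with the matching nid
jstack <pid> | grep -A 20 'nid=0x<hex-tid>'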

@damencho @bbaldino @gpolitis @emrah
We had another crowd test this morning and replicated the 100% CPU spike. The scenario was the same: all 28 participants were sharing their screens, changing layouts, and using full view, running for roughly 2 hours.

We upgraded our VM to Ubuntu 20.04 LTS with java-11-openjdk, used the latest stable versions of jitsi-meet and JVB, and allocated 4 GB of heap to JVB.
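
For reference, on the Debian/Ubuntu packages the heap limit is typically set through the VIDEOBRIDGE_MAX_MEMORY variable; the exact file and variable name may differ across jitsi-videobridge versions, so treat this as an assumption about your install:

# /etc/jitsi/videobridge/config (assumed location; check your package's init script)
VIDEOBRIDGE_MAX_MEMORY=4096m

You can verify the effective limit on the running process with jcmd <pid> VM.flags or jcmd <pid> GC.heap_info.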

At 3:28 in the JVB log we first encountered a ByteBufferPool warning; by that time JVB had almost/already consumed the allocated heap, which triggered GC. The ByteBufferPool warnings continued, and at around 3:38 GC was using 100% CPU. It didn't go down even after an hour, until we restarted the jitsi-videobridge2 process.
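
To see exactly what the collector is doing around that point, it may help to enable unified GC logging (Java 11 syntax) in JVB's JVM options; the log path here is just an example:

# Log GC events with timestamps, rotating across 5 x 10 MB files
-Xlog:gc*:file=/var/log/jitsi/jvb-gc.log:time,uptime,level,tags:filecount=5,filesize=10m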

We also noticed that the live/daemon thread count reached more than 600.
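
A quick way to watch the thread count over time, assuming <pid> is the JVB process ID:

# Kernel view: one entry per thread
ls /proc/<pid>/task | wc -l

# JVM view: count thread stacks in a jstack dump
jstack <pid> | grep -c 'java.lang.Thread.State'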

In case you need to check the complete stack dump:
jstack.8566.txt (742.7 KB)

How can we improve the GC behavior? Or is it even possible to improve it?

We tried fine-tuning the GC with the following parameters:

-XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=45

(I think -XX:MaxGCPauseMillis=200 and -XX:InitiatingHeapOccupancyPercent=45 are the same as the default values.)
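
You can confirm the defaults on your exact JVM with -XX:+PrintFlagsFinal:

# Print the effective values of the relevant G1 flags on this JDK
java -XX:+PrintFlagsFinal -version | grep -E 'MaxGCPauseMillis|InitiatingHeapOccupancyPercent|ConcGCThreads|ParallelGCThreads'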

but after running for just an hour we are now experiencing a CPU spike caused by the concurrent GC (ConcGC) threads.
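
To confirm it is the concurrent cycle, GC activity can be watched live with jstat (sampling every 1000 ms here); in top -H the G1 concurrent threads typically show up with names like "G1 Conc#0":

# E/O columns show eden/old-gen occupancy %; YGC/FGC show collection counts and times
jstat -gcutil <pid> 1000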


We noticed that even after the conference ended, the heap memory used by JVB did not drop. After several hours I tried clicking "Perform GC" in VisualVM, and somehow it freed around 1 GB of memory. (I think JVB's GC behavior should be improved.)
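
For what it's worth, the same full GC can be triggered from the command line, which is handy on a headless server:

# Request a System.gc() in the target JVM (same effect as VisualVM's "Perform GC")
jcmd <pid> GC.run

# Inspect heap usage before and after
jcmd <pid> GC.heap_info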

The third time I clicked "Perform GC" nothing was freed, even though there was no active conference at the time, meaning JVB was idle. It could be a memory leak.
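
If it is a leak, a heap dump taken while the bridge is idle should show what is still holding the memory; a sketch, with the output path just an example:

# Dump live (reachable) objects for offline analysis (e.g. in Eclipse MAT)
jcmd <pid> GC.heap_dump /tmp/jvb-idle.hprof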