JVB consumes all memory until OOM

Hi, we’ve been running a local Jitsi Meet installation with JVBv1 (1126-1) on Debian Buster with OpenJDK 8 (8u232-b09-1~deb9u1 from Stretch) for 20 days without any issues. Memory usage stayed quite stable:


We moved to JVBv2 (2.1-169-ga28eb88e-1) a couple of days ago and we experienced two OOM situations after using the videobridge in a couple of sessions with ~30 participants:

Both sessions were held at 20:00 CET and ran for around 1 h each. After they finished, the JVB kept using more and more RAM until it OOMed. The server setup is the same: Debian 10, with openjdk-8 from Stretch.
We’ve seen other posts related to this (Allocated memory in iddle server and JVB memory usage), but found no real answer in them.
Here’s one of the dumps generated after an OOM: paste.debian.net/1140063/
I’m happy to provide more info or do some testing.


Do you see the same behaviour with jvb2?

We’re looking into this, but we’re not sure what is going on yet. Can you try setting a lower -Xmx? You can put it in JAVA_SYS_PROPS in /etc/jitsi/videobridge/config
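For example, a line like this in that file (the 1024m value is only an illustration; if the file already sets JAVA_SYS_PROPS, append to the existing value rather than replacing it):

JAVA_SYS_PROPS="-Xmx1024m"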

Follow up: Are you using SCTP or web sockets? Can you enable the memory pool statistics and check them periodically?
To enable: curl -X POST http://localhost:8080/colibri/debug/enable/pool-stats
To query: curl http://localhost:8080/colibri/debug/stats/pool-stats

Boris

JVB1 behaved correctly. We’ve been experiencing this with JVB2 only.

Sure, I’ll try that and report back. Thanks!

The full debug output can also be useful if you get a chance to grab it with:

curl 'localhost:8080/colibri/debug?full=true'

I’m now dumping both /pool-stats and debug?full=true every 10 minutes.
Both videobridges are getting closer to the Xmx limit.
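For reference, the dump is just a plain shell loop along these lines (port 8080 and the output file names are whatever your setup uses):

while true; do
  ts=$(date +%Y%m%d-%H%M)
  curl -s http://localhost:8080/colibri/debug/stats/pool-stats > "pool-stats-$ts.txt"
  curl -s 'http://localhost:8080/colibri/debug?full=true' > "debug-full-$ts.txt"
  sleep 600   # 10 minutes
done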

Hi! I’m back with some stats. The OOM condition did NOT happen again; the lower Xmx did the trick! I guess the original 3072m was too close to our server’s 3544m of RAM, so while Java kept getting closer to its Xmx, the system got closer to running out of memory. Thanks for the tip.

Anyway, here’s some data on the memory behaviour, with the videobridge now running for over 24 h:


Yesterday at 14:00 there was a conference with 20+ participants (the spike in memory consumption). Since then, more conferences with 40+ participants have been held without any issues at all. \o/

I’m attaching (sorry for the file-extension trick) the hourly pool-stats and debug output. I have them at 10-minute resolution if you need it. Thanks again!
stats.zip.pcap (274.0 KB)


Please, can you share the exact values you set for JAVA_SYS_PROPS in /etc/jitsi/videobridge/config?

Sure. Just added:

VIDEOBRIDGE_MAX_MEMORY=2072m

to /etc/jitsi/videobridge/config
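After changing that, the bridge needs a restart to pick up the new limit, e.g.:

sudo systemctl restart jitsi-videobridge2

(as far as I can tell, the unit is named jitsi-videobridge on JVB1 installs and jitsi-videobridge2 on JVB2 ones).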

I see now that the VIDEOBRIDGE_MAX_MEMORY value from /etc/jitsi/videobridge/config is turned into the java -Xmx option by /usr/share/jitsi-videobridge/jvb.sh.
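In other words, the wrapper ends up doing something roughly like this (an illustrative simplification, not the literal script contents):

. /etc/jitsi/videobridge/config                    # may define VIDEOBRIDGE_MAX_MEMORY
JVB_XMX="${VIDEOBRIDGE_MAX_MEMORY:-3072m}"         # 3072m seems to be the packaged default
echo "JVB is started with: java -Xmx$JVB_XMX ..."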

I see some explanation here:


And usage details here:
https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html

So I understand that, since the default value for VIDEOBRIDGE_MAX_MEMORY is 3 GiB, a default setup needs 3 GiB of memory to be available to the (single?) Java process, which means a dedicated server may need around 3.5 GiB of RAM in this context.

As I understand from this forum thread, the Jitsi Videobridge will progressively use RAM up to that max-memory value, and if the system cannot give the Java process all of it, the JVB will break conference sessions.

With this logic, on a dedicated server with 2 GiB of memory I may need VIDEOBRIDGE_MAX_MEMORY=1024m,
and on a dedicated server with 5 GiB of memory I can set VIDEOBRIDGE_MAX_MEMORY=4500m.
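If it helps, a quick way to get a starting value on a dedicated box would be to take the total RAM and keep roughly half a GiB back for the OS and the JVM’s own off-heap overhead (my own rule of thumb, not an official recommendation):

awk '/MemTotal/ { printf "VIDEOBRIDGE_MAX_MEMORY=%dm\n", int($2/1024) - 512 }' /proc/meminfo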

What is the downside of setting Xmx too low? And what is the benefit, if any, of a really high Xmx?
