How to force JVB2 to use more than 4GB of memory

Hello all devs.

How can I force JVB2 to use more than 4GB of memory? I have Jitsi Meet installed on a VMware server and it works perfectly. My VM's resources are 4 vCPU cores, 16GB RAM, and a 120GB disk, and I monitor the server with the Webmin tool.
I see that JVB2 only uses less than 4GB of memory, and my server has 20-30 participants per room.

Do you see out-of-memory errors in JVB? We do not advise you to touch that. Adding more memory is not good in some cases, this one for example: it makes the GC run longer and use more CPU, which can cause bad performance at times. If you need room for more users, deploy a new JVB.
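If you want to confirm whether GC really is the bottleneck before touching the heap size, one option is the JVM's unified GC logging (OpenJDK 9+). The log path below is just an example, and how you pass JVM options depends on your install:

```shell
# Add to the JVB's JVM options, then watch pause times in the log:
-Xlog:gc*:file=/var/log/jitsi/jvb-gc.log:time,uptime,tags
```

Long or frequent pauses in that log would support the "bigger heap, longer GC" concern above.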

Do you mean making JVB load-balanced? Can I follow this tutorial?

@damencho
Before, we encountered “out of memory” in the jvb log, so we tried increasing the JVB heap memory from the default 3GB to 5GB, running on an n2-standard-4 (4 vCPUs, 16GB mem).
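For reference, a minimal sketch of how that heap bump is commonly done on Debian-style installs. The file path and variable name (`/etc/jitsi/videobridge/config`, `VIDEOBRIDGE_MAX_MEMORY`) are assumptions based on typical packaging, so verify them on your own system first:

```shell
# Assumed Debian-style packaging; check your install before applying.
sudo sed -i 's/^VIDEOBRIDGE_MAX_MEMORY=.*/VIDEOBRIDGE_MAX_MEMORY=5120m/' \
    /etc/jitsi/videobridge/config
sudo systemctl restart jitsi-videobridge2
```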

Using our application, we had 25-30 participants in a scenario of changing the layout and viewing a single participant full-screen (and back again), running for roughly an hour, but there was a CPU spike from 30% to 100%. Does GC cause this?

We are using JVB version 2.1-492-g5edaf7dd-1.

Which java version is this?

(screenshot attached)

We tried running the same scenario, but this time we used 4GB of memory for JVB. We encountered a CPU spike and out-of-memory errors.

Here’s the heap dump:
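For anyone following along, a heap dump like this can be captured from a running JVB with the JDK's `jmap` tool. The PID lookup below assumes JVB is the only Java process on the host; otherwise find the PID with `jps`:

```shell
# "live" dumps only reachable objects; drop it to include garbage too.
jmap -dump:live,format=b,file=/tmp/jvb-heap.hprof "$(pidof java)"
```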

We’ve seen a lot of ByteBufferPool warnings with the max size 2356.

@bbaldino Based on the heap dump, it’s not just byte[] that has a lot of instances… even ArrayCache$Container.

Yeah I think we’d seen ArrayCache#Container take up quite a bit as well. The send pipelines each have their own retransmission cache, so that can add up. How many participants are you testing with? You could tune that cache size to make it smaller, but that will mean a tradeoff for the success rate of retransmissions. It may just be that to do a call as large as you’re testing the JVB will need more memory.

That large size is surprising, though. If you turn on some of the bookkeeping just for some tests you can see where the large buffer was allocated and get a good idea of where it’s coming from.

We are doing crowd testing with 28-30 participants, all required to screen-share… every time we change the layout (2x2, 3x3, 5x4, or 7x6), our application turns the screen-sharing tracks off and on by removing and re-adding the track…

We’ve seen a lot of JVB warnings


About turning on the bookkeeping and checking this: we’re not really aware of how to do it.
Any suggestions, tools, etc.?

Thanks!

@damencho @bbaldino
FYI
During our crowd testing with 28-30 participants, we tried using the Jitsi Meet UI bundled with JVB 2.1-492-g5edaf7dd-1, and at some point, after around 30-45 minutes of continuously turning the screen sharing on and off, the JVB CPU reached 400% on an n2-standard-4 (4 vCPUs, 16GB mem).

@Bobi_Tena Having everyone screen-share in such a call can incur some load; let’s see how much. I don’t think we have simulcast for screen sharing, so the bridge needs to broadcast the full stream to everybody. We do have some limits on the screen-sharing bitrate on the client side; I'm not exactly sure what those are (/cc @damencho @Hristo_Terezov do you guys know?). Let’s assume 500kbps. One 30-peeps call will generate 15Mbps ingress and 435Mbps egress (assuming no last-n). The bridge should be able to handle that, but if you keep adding calls the situation will degrade.
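The arithmetic behind those numbers, under the same assumptions (30 senders, 500kbps per screen-share stream, no last-n), as a quick sketch:

```shell
participants=30
kbps_per_stream=500

# Ingress: the bridge receives one stream per sender.
ingress_mbps=$(( participants * kbps_per_stream / 1000 ))

# Egress: each participant receives the other 29 streams.
egress_mbps=$(( participants * (participants - 1) * kbps_per_stream / 1000 ))

echo "ingress: ${ingress_mbps} Mbps, egress: ${egress_mbps} Mbps"
# → ingress: 15 Mbps, egress: 435 Mbps
```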

You might want to try activating load management in the bridge, but the more appropriate solution would be to autoscale and use Octo, and only do load management while autoscaling or after the autoscaling limits have been reached. This is how we deal with load on meet.jit.si. If you’re in a fixed environment, you’re bound to reach the ceiling at some point, maybe during peak hours or due to growth.
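As a rough sketch, load management lives in the bridge's `jvb.conf`. The keys and threshold values below are assumptions from memory of the bridge's defaults, so check your JVB version's `reference.conf` for the authoritative names before using them:

```hocon
videobridge {
  load-management {
    # Assumed keys; verify against your version's reference.conf.
    reducer-enabled = true
    load-measurements {
      packet-rate {
        load-threshold = 50000      # packets/sec considered overloaded
        recovery-threshold = 40000  # packets/sec considered recovered
      }
    }
  }
}
```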

> Yeah I think we’d seen ArrayCache#Container take up quite a bit as well. The send pipelines each have their own retransmission cache, so that can add up. How many participants are you testing with? You could tune that cache size to make it smaller, but that will mean a tradeoff for the success rate of retransmissions. It may just be that to do a call as large as you’re testing the JVB will need more memory.

@bbaldino We tried increasing the memory to 12GB, but why, even after our crowd testing, are these byte[] and ArrayCache$Container instances not being released? The test scenario is the same: around 28-30 participants, all sharing their screen, changing layout, full-screen view, etc., running for 1.5 to 2 hours.

@gpolitis Thanks for the input!

Does anyone know the cause of the CPU spike when running our scenario for 2 hours? At first we thought the GC caused it, but this time we monitored the JVB using VisualVM, and at 8:42 there was a CPU spike with no GC activity.




Health checks, maybe? It’s hard to say what caused that particular spike in such a long-running scenario. Does VisualVM give you the option to select the region with the spike and limit the CPU sampling to it? If not, you may want to give YourKit a try, which has this functionality (it’s not free, though).

VisualVM doesn’t have that kind of feature… one thing I’ve noticed is that the live thread count around 8:42-8:45 is more than 500.
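As a cross-check outside VisualVM, the live threads (and what they are doing) can be captured with `jstack`. As before, the PID lookup assumes JVB is the only Java process on the host:

```shell
jstack "$(pidof java)" > /tmp/jvb-threads.txt
# Each thread entry in a jstack dump starts with a quoted thread name.
grep -c '^"' /tmp/jvb-threads.txt
```

If the count really climbs past 500 around the spike, the dump also shows which thread pools are growing.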

There should be a fixed size to that cache and it should get freed up when a participant leaves, so if that’s not happening I think there’s probably a leak. Are you able to see the strong references to the cache?

Is this what you mean by references?

How about the byte[]?

We’ve seen a lot of ByteBufferPool warnings and “sending large locally-generated” warnings.

Yeah, so what you want to look for there is what’s at the “top” of that. That’s showing the VP8FrameMap, which might be fine; the question is whether that frame map is “leaking” and sticking around longer than it should.