I’ve seen this issue before when the CPU cores are shared and don’t perform like real cores.
Thank you, but this is not that case. It starts recording and memory usage slowly increases until it runs out of free RAM and then it crashes.
So on that KVM host, you have 25 Jibri instances, with EACH Jibri on a VM with 8CPU/16GB?
I mean exactly that case: when the CPU performs poorly, ffmpeg starts using RAM more aggressively.
Yes, but only 5 jibris are running by default.
Reducing the recording resolution to 1280x720 seems to work around this issue for me; ffmpeg now uses about 1 GB of RAM.
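For reference, this is roughly what that change looks like in `jibri.conf` (a sketch only; the exact key names depend on your Jibri version, so check the reference config shipped with your install):

```hocon
// Hypothetical excerpt of /etc/jitsi/jibri/jibri.conf
jibri {
  ffmpeg {
    resolution = "1280x720"  // down from the default 1920x1080
    framerate = 30
  }
}
```

Restart the Jibri service after changing it so the new resolution takes effect.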
Check out this issue
@migo What happens with prosody and CPU usage, is there any change?
Hi @damencho, it seems that prosody CPU usage went down a bit, but we had fewer students online today.
The Prosody config should be OK and I don’t know what else we can try. Can the Jicofo config affect this prosody CPU usage too?
Thank you for the support!
I wonder whether the improvements we made on the jicofo side introduced this. Jicofo should now have better threading and no longer waits/blocks on some processing. So maybe those waits and blocks were giving prosody room to breathe, and now that jicofo is quicker you see quicker clients and more traffic from them.
If that is the case, maybe adding some limits may help … mod_limits – Prosody IM.
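A sketch of what enabling mod_limits might look like in `prosody.cfg.lua`, per the Prosody docs; the rate values below are placeholders to tune for your deployment, not recommendations:

```lua
-- prosody.cfg.lua (global section) -- example values only
modules_enabled = {
    -- ... your existing modules ...
    "limits";  -- enable per-connection bandwidth limiting
}

limits = {
    c2s = {
        rate = "10kb/s";  -- sustained rate per client connection
        burst = "2s";     -- window over which bursts above the rate are tolerated
    };
    s2sin = {
        rate = "30kb/s";
    };
}
```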
Or just adding a second shard …
How much is your bandwidth consumption for your current user base?
Have you tried decreasing / dropping Prosody logs and increasing the threads on saslauthd?
If so many students log in at the same time, saslauthd could be a bottleneck, and (without having checked mod_auth_ldap) maybe that causes prosody to get stuck on these auths.
Also, appending to big log files takes far more CPU than appending to smaller ones. Delivering prosody logs through syslog to another machine, or simply dropping them, could help as well.
I’ve seen webservers decrease their load by 40% by just making them rotate logs faster.
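As an illustration, faster rotation can be a small logrotate stanza (a sketch with placeholder paths; adapt the reload command to however your prosody reopens its log files):

```shell
# /etc/logrotate.d/prosody -- illustrative values only
/var/log/prosody/*.log {
    daily            # rotate every day so files stay small
    rotate 7         # keep one week of history
    compress
    missingok
    postrotate
        # make prosody reopen its log files; command varies by setup
        prosodyctl reload >/dev/null 2>&1 || true
    endscript
}
```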
Otherwise, yes, it might be Jitsi/Prosody related, but let’s try to rule some other variables out of the equation first.
By default, saslauthd runs only 5 threads. Surely that’s not enough for 900 logins in 120 seconds.
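On Debian/Ubuntu the worker count can typically be raised via the `-n` flag in `/etc/default/saslauthd` (a sketch; the file location and the value 20 are assumptions to adapt to your distro and load):

```shell
# /etc/default/saslauthd -- illustrative
# -c enables credential caching, -m sets the socket dir,
# -n sets the number of saslauthd worker processes (default is 5)
OPTIONS="-c -m /var/run/saslauthd -n 20"

# then restart the daemon:
# sudo systemctl restart saslauthd
```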
Moreover, if you could point LDAP to a load balancer that spreads the load, or to a less busy secondary LDAP server, that could also speed up your logins.
Restrict ldap_search_base, use simpler filters, ensure they’re indexed, and use an LB IP or define several LDAP servers in “ldap_servers”.
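In saslauthd’s LDAP config that could look roughly like this (hostnames, base DN, and filter below are placeholders, not values from this thread):

```shell
# /etc/saslauthd.conf -- illustrative values
ldap_servers: ldap://ldap1.example.org ldap://ldap2.example.org
ldap_search_base: ou=teachers,dc=example,dc=org
ldap_filter: (uid=%u)
```

The narrower search base and a simple, indexed filter keep each bind/search cheap on the directory side.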
In our case, login-time degrades on very busy ADs at peak hours.
But I don’t think all those 900 students are hitting the LDAP; it’s just the hosts that are hitting it, and the students just keep sending some IQs while waiting for the host to arrive … but this needs to be checked.
Hi, it was 1.2-1.5 Gb/s.
Hi @kpeiruza, thank you for your suggestions and your willingness to help. As @damencho stated later, only hosts are authenticated against the LDAP server; students are guests only. Students can’t create rooms, they wait for the teacher to create them. So we have 70 rooms, and that means only 70 LDAP connections.
I disabled the INFO log level on Prosody and the JVBs more than a year ago, when we went into production with Jitsi.
We have a few LDAP servers and aren’t encountering any performance problems with them.
Thank you, kind regards
Thanks, mate. So, just to get the exact numbers: what is the total bandwidth consumed on a monthly basis?
Hi, sorry I don’t know. We have unlimited bandwidth on 10Gb connection so I don’t have any data to share on this.