Fork: retry: Resource temporarily unavailable

Thanks for your reply and sorry that it took me so long.

I fixed the error regarding the SASLError. After restarting, I only got Info and some Warning log messages. After a few hours, though, I got the (repeating) OutOfMemoryError again.

free -h

root@lvps176-28-18-58:~# free -h
          total        used        free      shared  buff/cache   available
Mem:           2,0G        1,0G        695M         30M        293M        954M
Swap:            0B          0B          0B

JVB ps -o nlwp

root@server:~# ps -o nlwp 29867
NLWP
  45

Jconfo ps -o nlwp

root@server:~# ps -o nlwp 29951
NLWP
 268
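To see whether this thread count climbs over time rather than just snapshotting it once, one could sample it periodically. A minimal sketch, assuming a Linux `ps`; `PID` is a placeholder for the jicofo PID (e.g. from `pgrep -f jicofo`):

```shell
# Sample a process's thread count (NLWP) repeatedly; a steady climb toward
# the tasks limit would explain the fork failures.
PID=${PID:-$$}                # stand-in default: the current shell
for i in 1 2 3; do            # three samples just for illustration
    printf '%s NLWP=%s\n' "$(date +%H:%M:%S)" "$(ps -o nlwp= -p "$PID" | tr -d ' ')"
    sleep 1                   # use something like 60 in practice
done
```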

I’m not sure if this actually means anything, but the task IDs shown in htop go up to 31000.
As described by other users above, I am using a V-Server as well.

Do you have any ideas on this?
Thanks in advance!

so if I understand you correctly, it works normally for a few hours, then you get the exact same error message in the jicofo log? And the limits displayed by systemctl status are higher than the number you get?

BTW, I see you have only 2 GB of RAM. You know that 4 GB are recommended, because Jicofo and the JVB are configured by default with limits appropriate for that amount of memory, right?

exactly. I was able to join a conference with 3 devices and experienced no problems at all. The next day, meetings were interrupted (probably due to the OutOfMemoryError). I connected via SSH, and at that time I already often got a Resource temporarily unavailable error when trying to run some bash commands. This persisted even though no one was trying to create or join a meeting. I was only able to resolve it (temporarily) by restarting the server.

The outputs I posted were taken each time before I restarted.

Interesting. Since the Self-Hosting Guide mentions that the default values are enough for up to 20 participants, and we use fewer than 20, I didn’t look into this at first. Now I noticed that the actual value is very low.

root@server:~# systemctl show --property DefaultTasksMax
DefaultTasksMax=450

/etc/systemd/system.conf

[Manager]
# ...
DefaultTasksMax=90%

All other values are commented out.
Do you know if there could be any good reasons for this, and whether it’s safe to change the value?

yes, there are probably good reasons, such as avoiding fork bombs. The problem with jicofo is that the Debian packaging includes no systemd configuration, so the system defaults are used (for the videobridge there is an explicit systemd configuration that sets the TasksMax parameter to a high value).

90% implies that the absolute maximum number of tasks for a process on your system would be 500. That seems absurdly low. On an inactive system, jicofo alone takes about 300 tasks. Raise it to 2000.
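For reference, instead of raising the global default, systemd also lets you raise the limit for the jicofo unit alone with a drop-in. A sketch, assuming the unit name from the Debian packages and the value suggested above:

```shell
# Override TasksMax for the jicofo unit only, leaving DefaultTasksMax alone.
mkdir -p /etc/systemd/system/jicofo.service.d
cat > /etc/systemd/system/jicofo.service.d/tasks.conf <<'EOF'
[Service]
TasksMax=2000
EOF
systemctl daemon-reload
systemctl restart jicofo
```

Either way, the new limit only applies once the manager (for the global default) or the unit (for the drop-in) has been restarted.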

How can I do that?
I know I can edit /etc/systemd/system.conf, but 500 is not enough either. Or is this a limitation of the V-server provider?

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1540589
max locked memory       (kbytes, -l) 65536
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62987
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
# cat /proc/sys/kernel/pid_max
32768
# cat /proc/sys/kernel/threads-max
3081178

well, I said raise it to 2000, not 500 :slight_smile:

Maybe I misunderstood something, but if the current value of 90% corresponds to an absolute maximum of 500 (100%), would setting it to 2000 have any effect?

of course not, unless you reboot.

OK, so I changed it to 2000 and rebooted.

# systemctl show --property DefaultTasksMax
DefaultTasksMax=2000

However, the problem remains the same :frowning:.
Any additional ideas?

I also noticed that after I stop all services with:

service jicofo stop
service jitsi-videobridge2 stop
service prosody stop

Some tasks of the jicofo user are still running (they show up in top, and htop shows many tasks).
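To check for stragglers like this, one can list the processes still owned by the service user. A sketch, assuming the Debian units run jicofo as its own `jicofo` user; `leftover` is just a made-up helper name:

```shell
# List every process still owned by a given user; after stopping the
# services, the list for user "jicofo" should ideally be empty.
leftover() {
    pgrep -u "$1" -a || echo "no processes left for user $1"
}
# usage:
#   leftover jicofo
#   pkill -u jicofo    # force-clean the stragglers before restarting, if needed
```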

I uploaded the entire log files below. I just replaced my domain with domain.com and my IP with xxx.xxx.xxx.xxx

jicofo.log (241.5 KB)
jvb.log (195.8 KB)

Screenshot after I stopped all services. (There are many more entries for jicofo.)

At least 8 GB is recommended for a typical Jitsi installation. If you don’t have enough RAM, you should decrease the RAM reserved for jicofo and the jvb.

Add the following line to /etc/jitsi/jicofo/config

JICOFO_MAX_MEMORY=1024m

and the following to /etc/jitsi/videobridge/config

VIDEOBRIDGE_MAX_MEMORY=1024m

then restart the system.
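After the restart it is worth confirming that the cap actually reached the Java command line. A sketch: `extract_xmx` is a hypothetical helper, and the assumption is that the wrapper scripts turn these `*_MAX_MEMORY` variables into a `-Xmx` flag:

```shell
# Pull the -Xmx flag out of a Java command line; the jicofo/jvb wrapper
# scripts are expected to turn JICOFO_MAX_MEMORY / VIDEOBRIDGE_MAX_MEMORY
# into exactly such a flag.
extract_xmx() {
    printf '%s\n' "$1" | grep -o -- '-Xmx[0-9]*[mMgG]'
}
# live check against a running jicofo (PID lookup is an assumption):
#   extract_xmx "$(tr '\0' ' ' < /proc/$(pgrep -f jicofo | head -n1)/cmdline)"
extract_xmx "java -Xms256m -Xmx1024m -jar jicofo.jar"   # prints -Xmx1024m
```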

what does
systemctl show jicofo --property TasksCurrent,TasksMax
say?

After jitsi-meet is running for a few hours (when the error came up again):

# systemctl show jicofo --property TasksCurrent,TasksMax
TasksCurrent=273
TasksMax=2000

There was no meeting running between the reboot and the first Error.

Done. No effect. (The error occurred after I joined a meeting with two devices.)

Should it be VIDEOBRIDGE_MAX_MEMORY or JVB_MAX_MEMORY?

is this VPS a VM or a container? Is there a
/proc/user_beancounters entry, and if yes, what’s the value for numproc?

First of all thanks a lot for the help and sorry for my long response time.

As I understand it, it’s a VM, but I’m not 100% sure about that.

# cat /proc/user_beancounters | grep numproc
            numproc                       186                  186                  500                  500                    6

From what I can read on the Internet, it’s an OpenVZ container, and it is indeed limiting the process count. It’s managed by your hoster, so changing the config at your level will not change anything.
The only possibility while keeping the same VPS would be to recompile the Jicofo code to ask for fewer threads, since it’s not a configurable parameter in Jitsi Meet.
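To keep an eye on how close the container is to that cap, the relevant row of the beancounters file can be parsed directly. A sketch; `numproc_status` is a made-up helper, and the column layout is the one visible in the output above (held, maxheld, barrier, limit, failcnt):

```shell
# Print the held count, limit and fail counter for numproc from a
# beancounters file; failcnt going up means the hoster's cap was hit again.
numproc_status() {
    awk '$1 == "numproc" { printf "held=%s limit=%s failcnt=%s\n", $2, $5, $6 }' "$1"
}
# on the container itself:
#   numproc_status /proc/user_beancounters
```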

Thank you very much! I’ll look into this and maybe upgrade the server; however, I first need to find out how the process count changes.

Just out of curiosity: before I upgraded Jitsi, everything worked fine for many months with 10+ participants. Does the update increase the number of threads, or is there another reason for this?

not a recent change, but it seems so, yes:

 git show e8d68b021
commit e8d68b021ea84ec18ebe09770d9f518b1e1ed4ec (tag: jitsi-meet_4488, tag: jitsi-meet_4487, tag: jitsi-meet_4486, tag: jitsi-meet_4485, tag: jitsi-meet_4484, tag: jitsi-meet_4483, tag: jitsi-meet_4482, tag: jitsi-meet_4481, tag: 552)
Author: Paweł Domas <pawel.domas@jitsi.org>
Date:   Mon Apr 20 21:56:14 2020 -0500

(...)
-     * The number of threads available in the thread pool shared through OSGi.
+     * The number of threads available in the scheduled executor pool shared
+     * through OSGi.
      */
-    private static final int SHARED_POOL_SIZE = 20;
+    private static final int SHARED_SCHEDULED_POOL_SIZE = 200;
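For anyone who wants to see the figure that actually counts against numproc, counting a process's lightweight processes gives the same number as the htop task list. A sketch; `thread_count` is a made-up helper:

```shell
# Count the lightweight processes (threads) of a PID; on OpenVZ each of
# these is charged against the numproc beancounter.
thread_count() {
    ps -L -p "$1" -o lwp= | wc -l
}
# usage, assuming jicofo is running:
#   thread_count "$(pgrep -f jicofo | head -n1)"
```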

Hey guys,
are there any news on this issue?
I ran into exactly the same problem yesterday, on a fresh installation of jitsi-meet!
It cannot be a memory problem, as there was always plenty of free memory available while the problem occurred. The system load was unremarkable as well.

I also have a V-Server from Strato!

It might be more related to the hoster than to the actual application, as I had the same issues when I tried to install GitLab.

Does your file have the same content?

# cat /proc/user_beancounters | grep numproc
            numproc                       517                  517                  700                  700                    0

My limit for processes is 700. I assume my failcnt was reset because I had to restart the server.