Proposal: Fix connection issues with "unable to create new native thread" in jicofo.log

Hi everyone

First things first: Thank you for your great work, jitsi team!

I just had an issue with jicofo and systemd limits and wanted to share the solution I found:

When you have connection issues and your jicofo.log file reports

java.lang.OutOfMemoryError: unable to create new native thread

your jicofo might be unable to start enough tasks due to systemd's task limit.
On my Ubuntu 18.04 setup, jicofo was initially allowed only 60 tasks,
which was too low for it to even start up properly.
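
Before changing anything, you can confirm the limit really is that low by asking systemd directly (a quick check; the unit name matches the Debian/Ubuntu jicofo package):

systemctl show jicofo.service --property=TasksMax

If that prints something like TasksMax=60, the OutOfMemoryError above is almost certainly this limit, not real memory pressure.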
I don’t know why jvb’s systemd limits are not effective for jicofo,
but I found a proper workaround that does not require changing the global systemd config (as proposed in the documentation):

Instead of setting

DefaultTasksMax=65000

in

/etc/systemd/system.conf

globally, which punches a hole in any system’s security setup,
you can add a drop-in override just for the jicofo service:

sudo systemctl edit jicofo.service

Add the following lines:

[Service]
# raise the task (thread + process) limit for this unit
TasksMax=65000
# raise the process limit for this unit
LimitNPROC=65000
# raise the open-file limit for this unit
LimitNOFILE=65000

Then save and close the editor.
systemctl edit reloads the systemd configuration automatically, but it does not restart the service, so the new limits only take effect after you restart jicofo yourself.
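
For reference, systemctl edit writes those lines into a drop-in file, and a restart applies them (standard systemd behaviour, shown here for completeness):

# the override lands in a drop-in file next to the unit:
cat /etc/systemd/system/jicofo.service.d/override.conf

# apply the new limits:
sudo systemctl restart jicofo.service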

To check that it worked, look at jicofo’s systemd status:

sudo systemctl status jicofo.service

The “Tasks” line should read

Tasks: [somenumber] (limit: 65000)

I added the nproc and nofile limits as well,
just to avoid similar issues with these limits in the future.
This sets the same limits for jicofo as for the regular jitsi videobridge service.
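
If you want to double-check that the nproc and nofile limits reached the running process, you can read them straight from procfs (a quick sketch; --value needs a reasonably recent systemctl, which Ubuntu 18.04 ships):

# resolve jicofo’s main PID, then dump its effective resource limits
PID=$(systemctl show --property=MainPID --value jicofo.service)
grep -E 'Max (processes|open files)' /proc/$PID/limits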
My server worked like a charm afterwards.

Hope that helps


@Hobbes921 Thanks for sharing your workaround.
Can you please explain more about this part:

“which punches a hole in any system’s security setup”

This helped me a lot!!
Thanks.
Fokko

Honestly, reading my post again, I find “punching a hole” sounds a bit dramatic…

Anyway, here is my explanation:
Unlimited or very high task and process limits
can make a system vulnerable to fork bombs.

When you set these limits globally to such a high value,
any service without explicit values in its unit file
can fork that many processes and eventually cause the system to
stop responding or even crash.

This happens, for example, when software you run as a service forks new processes recursively due to a bug,
or when a malicious user starts an actual fork bomb as a user-level systemd service on your server.

As stated in the article linked above, it’s recommended to set proper limits for each service individually instead of setting them to 65000 for every service globally.
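
To illustrate the per-service approach, the same drop-in trick works with a deliberately modest cap (someservice.service and the value 512 are hypothetical placeholders, not a recommendation for any particular service):

sudo systemctl edit someservice.service

[Service]
# cap this unit at a modest task count;
# a runaway fork loop then fails with EAGAIN instead of exhausting the host
TasksMax=512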

I must admit, though,
that finding limits that are both safe and high enough for your services to work at all is not that simple…
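
One practical approach is to measure what the service actually uses under production load and add generous headroom (a sketch using standard procfs paths):

PID=$(systemctl show --property=MainPID --value jicofo.service)

# threads currently running in the process
ls /proc/$PID/task | wc -l

# file descriptors currently open (needs root for a root-owned process)
sudo ls /proc/$PID/fd | wc -l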
