JVB fails on load test

Hi.
I have a Jitsi setup in AWS which works without errors when tested with 2-10 test users. Now I tried to load test it, spawning ~50 bot users, which leads to disconnections/"something went wrong" errors in the browser and logs like these:
meet-instance log:

Jicofo 2021-10-15 15:26:25.706 SEVERE: [1445] [room=loadtestroom6@conference.example.com] JitsiMeetConferenceImpl.selectBridge#673: Can not invite participant, no bridge available: c0618a1b
Jicofo 2021-10-15 15:26:25.706 SEVERE: [1445] [room=loadtestroom6@conference.example.com] JitsiMeetConferenceImpl.inviteParticipant#743: Failed to select a bridge for Participant[loadtestroom6@conference.example.com/c0618a1b]@2114576951
Jicofo 2021-10-15 15:26:25.706 SEVERE: [1479] [room=loadtestroom4@conference.example.com] AbstractChannelAllocator.allocateChannels#299: jvbbrewery@internal.auth.example.com/e6ed0069-62be-5fd8-8699-a0f7c9627dd3 - failed to allocate channels, will consider the bridge faulty: Timed out waiting for a response.
org.jitsi.protocol.xmpp.colibri.exception.TimeoutException: Timed out waiting for a response.
	at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.maybeThrowOperationFailed(ColibriConferenceImpl.java:312)
	at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.createColibriChannels(ColibriConferenceImpl.java:252)
	at org.jitsi.protocol.xmpp.colibri.ColibriConference.createColibriChannels(ColibriConference.java:97)
	at org.jitsi.jicofo.ParticipantChannelAllocator.doAllocateChannels(ParticipantChannelAllocator.java:100)
	at org.jitsi.jicofo.AbstractChannelAllocator.allocateChannels(AbstractChannelAllocator.java:253)
	at org.jitsi.jicofo.AbstractChannelAllocator.doRun(AbstractChannelAllocator.java:172)
	at org.jitsi.jicofo.AbstractChannelAllocator.run(AbstractChannelAllocator.java:133)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Jicofo 2021-10-15 15:26:25.707 SEVERE: [1474] [room=loadtestroom4@conference.example.com] AbstractChannelAllocator.allocateChannels#299: jvbbrewery@internal.auth.example.com/e6ed0069-62be-5fd8-8699-a0f7c9627dd3 - failed to allocate channels, will consider the bridge faulty: Timed out waiting for a response.
org.jitsi.protocol.xmpp.colibri.exception.TimeoutException: Timed out waiting for a response.

jvb instance log:

JVB 2021-10-15 11:43:14.754 SEVERE: [233] [confId=d9803336f177642e epId=a762822b gid=80911318 stats_id=Sterling-4m5 conf_name=loadtestroom4@conference.example] Endpoint$acceptSctpConnection$1.run#588: Timed out waiting for SCTP connection from remote side

I tested this with a t3.small instance for meet (2 vCPU/2 GB RAM) and 4 JVB instances, t3a.medium (2 vCPU/4 GB RAM), which should be more than enough in my opinion for 50 users. What is also interesting: the disconnections happen in the first 3-5 minutes of the load test, then somehow resolve, and everything looks stable for the rest of the run. But with 100 users on the same hardware this doesn't work at all - those errors appear in the log the whole time, and I believe 4 JVBs should be able to handle 100 users.
I also use org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=IntraRegionBridgeSelectionStrategy, so I believe users are distributed between the JVB instances, and I do see log entries appearing on all 4 of them.
So I'm wondering what is wrong in my case, and how many JVBs (and with which specs) I would need to successfully test my Jitsi setup with ~1000 simultaneous users (10 users in each of 100 rooms).
Any help is much appreciated.


JMS needs at least 8 GB RAM by default if there is a JVB on it.

Additional JVB needs at least 4 GB RAM by default.

The official manual recommends using even more memory.

Yes, it's true. But right now I'm load testing my env again from my local PC and already have 50 tabs/users in the browser across 3 different rooms - no errors at all, and all this with only two JVBs (t3a.medium, 2 vCPU/4 GB RAM).
I opened the connections one after another, slowly, not 50 connections at once like in my load test. Maybe that's related, I don't know.

Check the Prosody CPU usage too - it can only use one core.

As far as I can see, the meet instance load is fine and its CPU usage is 15-20% max.

After adding a delay between bot user starts, I was able to test the env successfully with 100 users.
So I increased the number of JVBs to 30, launched a more powerful meet instance, and started a load test with 1000 users.
I immediately got these errors on the meet instance:

Jicofo 2021-10-18 14:28:44.091 WARNING: [291] FocusManager.conferenceRequest#244: Exception while trying to start the conference
org.jivesoftware.smack.SmackException$NoResponseException: No response received within reply timeout. Timeout was 15000ms (~15s). Waited for response using: AndFilter: (StanzaTypeFilter: Presence, OrFilter: (AndFilter: (FromMatchesFilter (ignoreResourcepart): loadtestroom15@conference.example.com, MUCUserStatusCodeFilter: status=110), AndFilter: (FromMatchesFilter (full): loadtestroom15@conference.example.com/focus, StanzaIdFilter: id=BmAa1-14388, PresenceTypeFilter: type=error))).
	at org.jivesoftware.smack.SmackException$NoResponseException.newWith(SmackException.java:111)
	at org.jivesoftware.smack.SmackException$NoResponseException.newWith(SmackException.java:98)
	at org.jivesoftware.smack.StanzaCollector.nextResultOrThrow(StanzaCollector.java:260)
	at org.jivesoftware.smackx.muc.MultiUserChat.enter(MultiUserChat.java:355)
	at org.jivesoftware.smackx.muc.MultiUserChat.createOrJoin(MultiUserChat.java:498)
	at org.jivesoftware.smackx.muc.MultiUserChat.createOrJoin(MultiUserChat.java:444)
	at org.jitsi.impl.protocol.xmpp.ChatRoomImpl.joinAs(ChatRoomImpl.java:234)
	at org.jitsi.impl.protocol.xmpp.ChatRoomImpl.join(ChatRoomImpl.java:215)
	at org.jitsi.jicofo.JitsiMeetConferenceImpl.joinTheRoom(JitsiMeetConferenceImpl.java:466)
	at org.jitsi.jicofo.JitsiMeetConferenceImpl.start(JitsiMeetConferenceImpl.java:316)
	at org.jitsi.jicofo.FocusManager.conferenceRequest(FocusManager.java:239)
	at org.jitsi.jicofo.FocusManager.conferenceRequest(FocusManager.java:193)
	at org.jitsi.jicofo.FocusManager.conferenceRequest(FocusManager.java:172)
	at org.jitsi.jicofo.xmpp.ConferenceIqHandler.handleConferenceIq(ConferenceIqHandler.kt:65)
	at org.jitsi.jicofo.xmpp.ConferenceIqHandler.access$handleConferenceIq(ConferenceIqHandler.kt:36)
	at org.jitsi.jicofo.xmpp.ConferenceIqHandler$handleIQRequest$2.run(ConferenceIqHandler.kt:153)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Jicofo 2021-10-18 14:28:44.171 WARNING: [39] JvbDoctor$HealthCheckTask.doHealthCheck#258: Health check timed out for: jvbbrewery@internal.auth.example.com/f33555f2-8fe1-5773-a850-35cefafc1730
Jicofo 2021-10-18 14:28:44.464 SEVERE: [308] [room=loadtestroom43@conference.example.com] AbstractChannelAllocator.allocateChannels#299: jvbbrewery@internal.auth.example.com/b2215fe3-6d79-5337-8164-a222f572e164 - failed to allocate channels, will consider the bridge faulty: Timed out waiting for a response.
org.jitsi.protocol.xmpp.colibri.exception.TimeoutException: Timed out waiting for a response.
	at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.maybeThrowOperationFailed(ColibriConferenceImpl.java:312)
	at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.createColibriChannels(ColibriConferenceImpl.java:252)
	at org.jitsi.protocol.xmpp.colibri.ColibriConference.createColibriChannels(ColibriConference.java:97)
	at org.jitsi.jicofo.ParticipantChannelAllocator.doAllocateChannels(ParticipantChannelAllocator.java:100)
	at org.jitsi.jicofo.AbstractChannelAllocator.allocateChannels(AbstractChannelAllocator.java:253)
	at org.jitsi.jicofo.AbstractChannelAllocator.doRun(AbstractChannelAllocator.java:172)
	at org.jitsi.jicofo.AbstractChannelAllocator.run(AbstractChannelAllocator.java:133)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Not sure how this can be resolved… Can a single meet instance handle 1000 users?

How many rooms did you spin up?

100 rooms, 10 users per room.

You should check the CPU usage of your Prosody process specifically, not just that of the entire Jitsi stack. Prosody is single-threaded and usually the bottleneck in large meetings. You might need to apply some tweaks.

But even without tweaks, shouldn't a default Prosody be able to handle at least 100-200 users on a single meet instance? Because I see that I can't get a stable 100 users in 10 rooms without errors, using a default setup without Docker on default AWS hardware. So maybe I missed something obvious…

It should, but it needs tweaking of settings, using websockets and such, and a powerful instance to accommodate the CPU usage of the single Prosody process, as at the beginning of a call a lot of messages are exchanged.
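
For example, two of the most common Prosody tweaks look roughly like this (a sketch only; the override path assumes Prosody runs under systemd):

    -- /etc/prosody/prosody.cfg.lua
    network_backend = "epoll"   -- the epoll backend scales much better than the default "select"

    # /etc/systemd/system/prosody.service.d/override.conf
    [Service]
    LimitNOFILE=65536           # raise the file-descriptor limit for many concurrent connections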

There is an option where you can separate Prosody instances and use one for client signaling and one for the jvb, jibri and jigasi instances.

You have client and service options for that under xmpp in jicofo.conf.


Websockets - you mean this one?

Any other articles you can recommend about tweaking settings?

Thanks. And in general, is ~1000 users on a single meet instance realistic, or do I need a more complex architecture - e.g. multiple meet instances behind HAProxy or something like that?

Yep, I have seen up to 5k-6k in a shard on a machine like m5.xlarge in AWS -
with, of course, 30-40 bridges on different machines for that shard.
But that is extreme; keeping it lower, around 4k, is recommended to leave breathing room for peaks.

Nope, that is the websocket for the connection to the bridge. There is no guide for this at the moment, but there are a lot of posts on the subject in the forum. By default, clients use BOSH to connect, which makes a new connection every 60 seconds; if you switch clients to websockets instead of BOSH, they create a connection once.
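
If it helps, the switch looks roughly like this (a sketch only; the domain and paths are placeholders, and the option names should be checked against your versions):

    // /etc/jitsi/meet/example.com-config.js - point clients at the websocket endpoint
    websocket: 'wss://example.com/xmpp-websocket',

    -- /etc/prosody/conf.avail/example.com.cfg.lua - enable websockets in Prosody
    cross_domain_websocket = true;
    consider_websocket_secure = true;
    VirtualHost "example.com"
        modules_enabled = {
            "websocket";  -- XMPP over WebSocket
            "smacks";     -- stream management, recommended alongside websockets
            -- ...your existing modules...
        }

    # nginx - proxy the websocket path through to Prosody on port 5280
    location = /xmpp-websocket {
        proxy_pass http://127.0.0.1:5280/xmpp-websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        tcp_nodelay on;
    }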

Thanks. I've added websockets as described here.
I also added network_backend = "epoll" to the Prosody config and raised the file limits. The instances were also changed to c5a.xlarge - 1 meet instance and 4 JVBs. And still the same: launching a load test with 100 users, each with mic enabled and webcam streaming in 720p, leads to multiple rooms crashing somewhere between 60-80 users. The bridges just stop answering health checks:

Jicofo 2021-10-19 09:47:25.300 WARNING: [81] JvbDoctor$HealthCheckTask.doHealthCheck#233: jvbbrewery@internal.auth.example.com/64724e0c-3360-5b2e-bb24-e04db804c6d8 health-check timed out, but will give it another try after: 5000

and then it's considered faulty:

om/3be9ce9e-1340-51d2-b4a2-363d6dbda021 - failed to allocate channels, will consider the bridge faulty: Timed out waiting for a response.
org.jitsi.protocol.xmpp.colibri.exception.TimeoutException: Timed out waiting for a response.
        at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.maybeThrowOperationFailed(ColibriConferenceImpl.java:312)
        at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.createColibriChannels(ColibriConferenceImpl.java:252)
        at org.jitsi.protocol.xmpp.colibri.ColibriConference.createColibriChannels(ColibriConference.java:97)
        at org.jitsi.jicofo.ParticipantChannelAllocator.doAllocateChannels(ParticipantChannelAllocator.java:100)
        at org.jitsi.jicofo.AbstractChannelAllocator.allocateChannels(AbstractChannelAllocator.java:253)
        at org.jitsi.jicofo.AbstractChannelAllocator.doRun(AbstractChannelAllocator.java:172)
        at org.jitsi.jicofo.AbstractChannelAllocator.run(AbstractChannelAllocator.java:133)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Jicofo 2021-10-19 09:47:29.339 SEVERE: [250] [room=loadtestroom4@conference.example.com] AbstractChannelAllocator.allocateChannels#299: jvbbrewery@internal.auth.example.com/3be9ce9e-1340-51d2-b4a2-363d6dbda021 - failed to allocate channels, will consider the bridge faulty: Creator thread has failed to allocate channels: Timed out waiting for a response.
org.jitsi.protocol.xmpp.colibri.exception.TimeoutException: Creator thread has failed to allocate channels: Timed out waiting for a response.
        at org.jitsi.protocol.xmpp.colibri.exception.TimeoutException.clone(TimeoutException.java:39)
        at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl$ConferenceCreationSemaphore.acquire(ColibriConferenceImpl.java:877)
        at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.acquireCreateConferenceSemaphore(ColibriConferenceImpl.java:384)
        at org.jitsi.impl.protocol.xmpp.colibri.ColibriConferenceImpl.createColibriChannels(ColibriConferenceImpl.java:221)
        at org.jitsi.protocol.xmpp.colibri.ColibriConference.createColibriChannels(ColibriConference.java:97)
        at org.jitsi.jicofo.ParticipantChannelAllocator.doAllocateChannels(ParticipantChannelAllocator.java:100)
        at org.jitsi.jicofo.AbstractChannelAllocator.allocateChannels(AbstractChannelAllocator.java:253)
        at org.jitsi.jicofo.AbstractChannelAllocator.doRun(AbstractChannelAllocator.java:172)
        at org.jitsi.jicofo.AbstractChannelAllocator.run(AbstractChannelAllocator.java:133)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

It actually looks like this issue, but we have 4 bridges with 4 vCPU/8 GB RAM each, and the load on them never exceeds 30-40%.

About Prosody - I checked the process CPU consumption and it was something like 2-4% CPU maximum.
I've actually run out of guesses… Could this be related to Octo somehow?

Were you watching the Prosody CPU during those 5 seconds when it timed out?

Use something to monitor it so you can look back. You can go with something as simple as running top every second, extracting the value for the lua process, and then plotting it in a spreadsheet.
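
Something as simple as this would do (a sketch; it assumes the Prosody process shows up as lua5.2 and appends to prosody-cpu.csv in the current directory):

    # Log the %CPU of the Prosody (lua) process once per second, for plotting later
    pid=$(pgrep -o lua5.2)
    while true; do
        cpu=$(top -b -n 1 -p "$pid" | awk 'END { print $9 }')  # %CPU is column 9 in top's output
        echo "$(date +%H:%M:%S),$cpu" >> prosody-cpu.csv
        sleep 1
    done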

The error you see means that messages time out between Prosody and the JVB, so it's either Prosody being hit by too many messages in a short period, or the JVB having trouble replying.

What is your test, and what is the join rate of the users? Are you using the latest stable versions?

I was using version 1.0.4985-1 and updated everything to the latest stable, but it didn't help. I watched Prosody with
pidstat -h -r -u -v -p 1566 1
which showed a maximum of 17% CPU at the moment the timeouts started:

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:14      111      1566    9.00    1.00    0.00    0.00   10.00     1      0.00      0.00   98068   40200   1.01       1     105  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:15      111      1566    6.00    0.00    0.00    0.00    6.00     0      0.00      0.00   98068   40200   1.01       1      93  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:16      111      1566    9.00    1.00    0.00    0.00   10.00     0      0.00      0.00   98064   40196   1.01       1      88  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:17      111      1566    8.00    0.00    0.00    0.00    8.00     0      0.00      0.00   98064   40196   1.01       1      81  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:18      111      1566   15.00    2.00    0.00    1.00   17.00     0      0.00      0.00   98064   40196   1.01       1      66  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:19      111      1566    7.00    0.00    0.00    0.00    7.00     0      0.00      0.00   98064   40196   1.01       1      45  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:20      111      1566    2.00    1.00    0.00    0.00    3.00     1      0.00      0.00   98064   40196   1.01       1      33  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:21      111      1566    2.00    0.00    0.00    0.00    2.00     0      0.00      0.00   98064   40196   1.01       1      25  lua5.2

# Time        UID       PID    %usr %system  %guest   %wait    %CPU   CPU  minflt/s  majflt/s     VSZ     RSS   %MEM threads   fd-nr  Command
13:58:22      111      1566    0.00    0.00    0.00    0.00    0.00     0      0.00      0.00   98064   40196   1.01       1      23  lua5.2

Users join at a rate of 10 every 30 seconds, each batch to its own room. The test is basically a loop which launches Chromium with audio/video files as a fake mic/webcam:

    chromium-browser --autoplay-policy=no-user-gesture-required \
        --use-fake-ui-for-media-stream --allow-file-access \
        --use-fake-device-for-media-stream \
        --use-file-for-fake-audio-capture=$audiofile $video_arg \
        --user-data-dir=/tmp/chrome"$(date +%s%N)" \
        --headless --disable-gpu --mute-audio --window-size=1024,768 \
        --remote-debugging-port=$debug_port \
        "${endpoint}&name=${debug_port}" 3>&1 1>"log/rtcbee_${debug_port}.log" 2>&1 &
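
The surrounding loop is roughly this (a sketch, not the exact script; launch_bot is a hypothetical wrapper around the chromium-browser command above):

    # Sketch: start 10 bots in one room, wait 30 seconds, move on to the next room
    base_url="https://example.com"
    for room in $(seq 1 100); do
        for user in $(seq 1 10); do
            debug_port=$((9222 + room * 10 + user))
            launch_bot "${base_url}/loadtestroom${room}" "$debug_port"   # hypothetical helper
        done
        sleep 30   # the stagger that made the 100-user test pass earlier
    done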

I'm now trying to use 2 XMPP connections, as you suggested earlier, but I get this error in the Jicofo logs after enabling it:

Jicofo 2021-10-20 14:04:48.875 SEVERE: [14] [xmpp_connection=service] XmppProviderImpl.doConnect#225: Failed to connect/login: The following addresses failed: 'RFC 6120 A/AAAA Endpoint + [example.com:6222] (example.com/172.31.7.156:6222)' failed because: java.net.ConnectException: Connection refused (Connection refused)
org.jivesoftware.smack.SmackException$EndpointConnectionException: The following addresses failed: 'RFC 6120 A/AAAA Endpoint + [example.com:6222] (example.com/172.31.7.156:6222)' failed because: java.net.ConnectException: Connection refused (Connection refused)
	at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:663)
	at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:846)
	at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:529)
	at org.jitsi.impl.protocol.xmpp.XmppProviderImpl.doConnect(XmppProviderImpl.java:205)
	at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Any idea what could be wrong? I've attached all the Jicofo configs; the actual domain was replaced with example.com.
Thanks again for answering and trying to help.

root@jitsi-meet-dev:~# ls -lh /etc/jitsi/jicofo/
total 16K
-rw-r--r-- 1 jicofo jitsi 1.4K Oct 20 14:04 config
-rw-r--r-- 1 jicofo jitsi 1.5K Oct 20 13:16 jicofo.conf
-rw-r--r-- 1 jicofo jitsi 1.8K Jan 22  2021 logging.properties
-rw------- 1 jicofo jitsi  450 Oct 20 12:43 sip-communicator.properties

root@jitsi-meet-dev:~# cat /etc/jitsi/jicofo/config 
# Jitsi Conference Focus settings
# sets the host name of the XMPP server
JICOFO_HOST=localhost

# sets the XMPP domain (default: none)
JICOFO_HOSTNAME=example.com

# sets the secret used to authenticate as an XMPP component
JICOFO_SECRET=X.........X

# sets the port to use for the XMPP component connection
JICOFO_PORT=5347

# sets the XMPP domain name to use for XMPP user logins
JICOFO_AUTH_DOMAIN=auth.example.com

# sets the username to use for XMPP user logins
JICOFO_AUTH_USER=focus

# sets the password to use for XMPP user logins
JICOFO_AUTH_PASSWORD=N........q

# extra options to pass to the jicofo daemon
JICOFO_OPTS=""

# adds java system props that are passed to jicofo (default are for home and logging config file)
#JAVA_SYS_PROPS="-Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/etc/jitsi -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=jicofo -Dnet.java.sip.communicator.SC_LOG_DIR_LOCATION=/var/log/jitsi -Djava.util.logging.config.file=/etc/jitsi/jicofo/logging.properties"
JAVA_SYS_PROPS="-Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/etc/jitsi -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=jicofo -Dnet.java.sip.communicator.SC_LOG_DIR_LOCATION=/var/log/jitsi -Djava.util.logging.config.file=/etc/jitsi/jicofo/logging.properties -Dconfig.file=/etc/jitsi/jicofo/jicofo.conf"

root@jitsi-meet-dev:~# cat /etc/jitsi/jicofo/jicofo.conf
jicofo {
  octo: {
    enabled: true
    id: 1234
  }
  xmpp: {
    client: {
      client-proxy: focus.example.com
      enabled: true
      hostname: example.com
      port: 5222
      #domain:
      username: "focus"
      #password:

      // How long to wait for a response to a stanza before giving up.
      reply-timeout: 15 seconds

      // The JID/domain of the MUC service used for conferencing.
      # conference-muc-jid = conference.example.com

      // A flag to suppress the TLS certificate verification.
      disable-certificate-verification: false

      // The JID of the mod_client_proxy component if used. It will be trusted to encode the JID of the original
      // sender in the resource part of the JID.
      #client-proxy = focus.example.com

      // Use TLS between Jicofo and the XMPP server
      // Only disable this if your xmpp connection is on loopback!
      use-tls: true
    }
    service {
      enabled: true
      hostname: example.com
      port: 6222
      #domain =
      #username =
      #password =

      // How long to wait for a response to a stanza before giving up.
      reply-timeout: 15 seconds

      // A flag to suppress the TLS certificate verification.
      disable-certificate-verification: false

      // Use TLS between Jicofo and the XMPP server
      // Only disable this if your xmpp connection is on loopback!
      use-tls: true
    }
  }
}

root@jitsi-meet-dev:~# cat /etc/jitsi/jicofo/sip-communicator.properties
org.jitsi.jicofo.BRIDGE_MUC=JvbBrewery@internal.auth.example.com
org.jitsi.jicofo.jibri.BREWERY=JibriBrewery@internal.auth.example.com
org.jitsi.jicofo.jibri.PENDING_TIMEOUT=90
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=IntraRegionBridgeSelectionStrategy
org.jitsi.jicofo.DISABLE_AUTO_OWNER=true
org.jitsi.jicofo.SHORT_ID=1234
org.jitsi.jicofo.BRIDGE_MUC_XMPP_USER_DOMAIN=example.com

Have you configured a second Prosody on the machine, listening on port 6222?
This is not an easy task: you need to configure a second Prosody on the machine manually, switch Jicofo to use it, and point the bridges at it. But if you don't see the Prosody process hitting the sky, I have no answer for why nginx times out the connection between the two… and offloading Prosody may not be worth it, as it is not clear where the problem is.
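
If you do try it, the second instance can run from its own config, roughly like this (a sketch only; the paths, data directory and MUC component are assumptions to illustrate the idea):

    -- /etc/prosody-jvb/prosody.cfg.lua - a second Prosody just for the service connections
    pidfile = "/var/run/prosody/prosody-jvb.pid"
    data_path = "/var/lib/prosody-jvb"   -- keep its storage separate from the main instance
    c2s_ports = { 6222 }                 -- the port Jicofo's xmpp.service block points at

    VirtualHost "example.com"

    Component "internal.auth.example.com" "muc"   -- brewery MUC for the bridges

and start it alongside the main one with:

    prosody --config /etc/prosody-jvb/prosody.cfg.lua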

Hmm, there are many optimisations we landed after this version; you'd better update to the latest anyway, as you will need it because of Unified Plan.