Hitting a hard limit around 600 participants, then constant participant drops. Suggestions?

Greetings.

TL;DR: Up to 600 participants everything is clear, smooth, and stable, with no dropped participants. At around 650+ participants, users drop constantly, no matter the hardware increases.

The configuration, version numbers, and test scenarios details are here:
https://www2.techtalkhawke.com/news/standalone-all-in-one-jitsi-server-max-capacity-600-users-no-matter-how-much-hardware-at-least-in-aws

More complete version:
Love Jitsi! I've been using it for 4+? years, now use it everywhere at multiple for-profit and non-profit organizations, and encourage friends and family to use it too! I am continuing to ramp up my learning about this great group of open-source products, and hope to increasingly give back to this great community where I can be of help.

At the moment I have been tasked to perform a large number of load tests for determining capacity planning and costing.

Unfortunately, I seem to have hit a new hard barrier at around 600-650 concurrent participants, no matter how much “hardware” is thrown at it.

In summary: at around 600-700 participants, in a variety of combinations (from just a few per room to 25 per room) with only 1 video sender per room, participants (and the speaker) increasingly get dropped from the room, ranging from 10 to 50 drops in a 5-minute test.

This was first noticed on an AWS c5a.xl (4 cpu, 8 GB ram) instance, with a 56% peak cpu load during 5 minute tests.

Increasing to a c5a.2xl (8 CPU, 16 GB RAM) cleared up the video completely, and though CPU load was still moderately high at around 50%, participants unfortunately kept dropping every time I tried to go above ~600 total.

Increased to a c5.4xl (16 CPU, 32 GB RAM): only 21% peak CPU load, but still a total of 25-50 drops (participant disconnects and reconnect attempts) during a 5-minute test.

Out of curiosity, and to gather the data I need for future planning, I incrementally increased to 8xl, 9xl, 12xl, 16xl, and finally 24xl (96 CPU and 192 GB RAM, 6.5% peak CPU load). This continued to provide crystal-clear, smooth video and faster rejoins after dropping, but the drop counts actually became worse (50-100 in a 5-minute period), probably because users could rejoin faster!

Later I will be scaling this out horizontally, but for now I need to get larger participant numbers working on this type of default all-in-one setup, as baseline data.

Are there any configuration tweaks I can make that might address this issue?

Is this a bug?

Appreciate any suggestions!


You need to monitor the prosody process CPU usage, which is normally the weakest point. Prosody is single-threaded, so on a multi-core machine you will not clearly notice when it hits the ceiling.
Also, make sure you use the latest jitsi-meet version, as we have made several improvements in the signaling to minimize the load, and we continue to work on that.
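Since the instance-wide CPU% graph averages across all cores, a single pegged core can show as only ~12% on an 8-vCPU box. A minimal sketch (assuming a Linux host; pass the prosody PID yourself) for sampling one process's CPU time straight from /proc:

```shell
#!/bin/sh
# Sketch: sample a single process's CPU time from /proc, since the
# instance-wide CPU% graph hides one saturated core (prosody is
# single-threaded). Fields 14+15 of /proc/PID/stat are utime+stime in
# clock ticks (usually 100/s; check with `getconf CLK_TCK`). Field
# counting assumes no spaces in the process's comm name.
cpu_ticks() {
    awk '{print $14 + $15}' "/proc/$1/stat"
}

pid=${1:-$$}   # pass the prosody PID as the first argument
before=$(cpu_ticks "$pid")
sleep 2
after=$(cpu_ticks "$pid")
# A delta near 200 over 2 seconds means one core is fully busy.
echo "ticks used in 2s: $((after - before))"
```

If the sysstat package is installed, `pidstat -p <pid> 5` gives the same per-process view continuously.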

Check your open file limits for nginx and prosody.

Are you using BOSH or websockets? BOSH brings more load to the system, as every client makes a new connection every 60 seconds, whereas websockets establish one connection and keep it.

You can watch Saul’s FOSDEM video on the subject.

Another thing: are you using epoll or select for prosody?
https://prosody.im/doc/network_backend

Thank you very kindly for the helpful response. I might need a little guidance on gathering some of this information the first time around if you wouldn’t mind.

This is a default install with no special modifications on Ubuntu 20.04, all version numbers are listed in this link:
https://www2.techtalkhawke.com/news/standalone-all-in-one-jitsi-server-max-capacity-600-users-no-matter-how-much-hardware-at-least-in-aws

But here they are copy/pasted:

  • jitsi-meet/stable,now 2.0.5870-1 - WebRTC JavaScript video conferences
  • jitsi-meet-prosody/stable,now 1.0.4985-1 - Prosody configuration for Jitsi Meet
  • prosody/focal,now 0.11.4-1 amd64
  • jitsi-meet-turnserver/stable,now 1.0.4985-1 - Configures coturn to be used with Jitsi Meet
  • jitsi-meet-web/stable,now 1.0.4985-1 - WebRTC JavaScript video conferences
  • jitsi-videobridge2/stable,now 2.1-492-g5edaf7dd-1 - WebRTC compatible Selective Forwarding Unit (SFU)
  • jigasi/stable,now 1.1-178-g3c53cf6-1 - Jitsi Gateway for SIP

Here is the information I have so far in relation to your response:
RE: Prosody:
I was aware that Prosody being single-threaded was a concern, but didn’t know it kicked in below 1,000 users now.
It looks like the default install doesn’t specify a backend, so I assume it is using the default network_backend = "select". I do see a suggestion in prosody.cfg.lua to use_libevent.
To clarify you are recommending the newer

network_backend = "epoll"

to put into that config file, correct?
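For my own reference, my understanding of the full change (a sketch, assuming the stock Ubuntu package layout; network_backend requires Prosody 0.11+, while 0.10 used use_libevent = true instead):

```lua
-- /etc/prosody/prosody.cfg.lua, global section (above any VirtualHost)
network_backend = "epoll"
```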

Regarding breaking out the graphing details by task: right now I’m only using the default AWS CloudWatch graphs (since this was primarily to figure out AWS costs), but I have a Grafana server I could attach, unless you have an alternative open-source solution you would recommend?

There are a lot of 1-5 year-old threads on monitoring (so many great improvements in a year!), but they are all for much older versions.

Do you have an up to date link/resource you would particularly recommend I follow for trying to gather the Prosody-specific information, and/or performance tuning?

I was in the recent Jitsi-hackathon and excited about what is in the pipe. When you say latest version, which version do you consider recent enough for this issue?

Is what is listed above new enough? Or do I need to pull something less stable? I am supposed to try performing my analysis on stable for the capacity planning if at all possible. Configuration modifications are okay, but I am expected to avoid non-stable branches if possible, and absolutely not to use custom-compiled versions for this baseline data. For production down the road we can consider such options, but for this baseline testing I may not.

Some details from the testing Jitsi server…
File limits configs:
cat /proc/sys/fs/file-max: 9223372036854775807
ulimit -a

  • core file size (blocks, -c) 0
  • data seg size (kbytes, -d) unlimited
  • scheduling priority (-e) 0
  • file size (blocks, -f) unlimited
  • pending signals (-i) 127000
  • max locked memory (kbytes, -l) 65536
  • max memory size (kbytes, -m) unlimited
  • open files (-n) 1024
  • pipe size (512 bytes, -p) 8
  • POSIX message queues (bytes, -q) 819200
  • real-time priority (-r) 0
  • stack size (kbytes, -s) 8192
  • cpu time (seconds, -t) unlimited
  • max user processes (-u) 127000
  • virtual memory (kbytes, -v) unlimited
  • file locks (-x) unlimited

cat /proc/$(cat /var/run/jitsi-videobridge/jitsi-videobridge.pid)/limits

  • Limit Soft Limit Hard Limit Units
  • Max cpu time unlimited unlimited seconds
  • Max file size unlimited unlimited bytes
  • Max data size unlimited unlimited bytes
  • Max stack size 8388608 unlimited bytes
  • Max core file size 0 unlimited bytes
  • Max resident set unlimited unlimited bytes
  • Max processes 65000 65000 processes
  • Max open files 65000 65000 files
  • Max locked memory 65536 65536 bytes
  • Max address space unlimited unlimited bytes
  • Max file locks unlimited unlimited locks
  • Max pending signals 127000 127000 signals
  • Max msgqueue size 819200 819200 bytes
  • Max nice priority 0 0
  • Max realtime priority 0 0
  • Max realtime timeout unlimited unlimited us

prlimit

  • RESOURCE DESCRIPTION SOFT HARD UNITS
  • AS address space limit unlimited unlimited bytes
  • CORE max core file size 0 unlimited bytes
  • CPU CPU time unlimited unlimited seconds
  • DATA max data size unlimited unlimited bytes
  • FSIZE max file size unlimited unlimited bytes
  • LOCKS max number of file locks held unlimited unlimited locks
  • MEMLOCK max locked-in-memory address space 67108864 67108864 bytes
  • MSGQUEUE max bytes in POSIX mqueues 819200 819200 bytes
  • NICE max nice prio allowed to raise 0 0
  • NOFILE max number of open files 1024 1048576 files
  • NPROC max number of processes 127000 127000 processes
  • RSS max resident set size unlimited unlimited bytes
  • RTPRIO max real-time priority 0 0
  • RTTIME timeout for real-time tasks unlimited unlimited microsecs
  • SIGPENDING max number of pending signals 127000 127000 signals
  • STACK max stack size 8388608 unlimited bytes

I have watched that video before, thank you for the refresh. While it is helpful for high-level design and planning of the clustered high-capacity setups we’re planning soon, unfortunately it does not have the specific details needed to make the tweaks that increase the nginx and prosody capacities; it references the tweaks and that they improved things, but gives no specifics.

It appears to be using the default BOSH. Is this the best resource you would recommend for switching to websockets?

Given the stats posted above, and with the goal being 1,000 users (currently stuck at 600), do you have suggestions on appropriate ballpark numbers to try as a starting point?

Thanks kindly!

Yes.

Grafana is what I see people use a lot; whatever you have works.

Not really, other than the things mentioned in the FOSDEM talk.

Latest stable. There is, for example, one small fix in unstable if you use the allowners module, but that’s it, and we will probably push it to stable soon.

Yes, switch to websockets; you will lower the number of new sockets being opened by a factor of two.
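A minimal sketch of the client side of that switch (assuming a stock Debian-style install; verify that the nginx /xmpp-websocket location and prosody's websocket support are also enabled, as covered in the handbook):

```javascript
// /etc/jitsi/meet/lmtgt1.dev2dev.net-config.js
var config = {
    // point the client at the websocket endpoint instead of BOSH
    websocket: 'wss://lmtgt1.dev2dev.net/xmpp-websocket',
    // bosh: '//lmtgt1.dev2dev.net/http-bind',  // the old default
    // ...
};
```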

We are getting 4k-5k on an m5.xlarge on meet.jit.si. We had been discussing with a community member a problem with secure domain where they cannot go over 900; it’s because all those clients hit the server in a very short period, but that is a separate limitation of how secure domain works. Maybe in that case mod_limits can help by limiting participant connections, but this is a total guess. Or someone could contribute a PR changing the logic so that after the host joins, participants wait a random delay before joining.


Is that bare-metal, dedicated, shared, spot, a reserved VM, your own cloud, or a third-party service provider? You gave an AWS instance type; if that is AWS, is it shared, dedicated, or other? That is exactly what I need to reach for these tests. Any chance you could share the tweaks and config files that achieve exactly that? Thanks kindly for your help as I work through this.

AWS shared.
The OS tweaks are what I shared above; whatever else makes sense, we are trying to add to the default installation.


So, back to here, and trying to get 950 users working.
Completed prosody upgrades as per this thread: [Solved] How to upgrade Prosody for Jitsi.
After the upgrade it works fine at 600 users, as before.
Now trying with 950 users, but it is still dropping users under load, and when dropped they either take a very long time to rejoin or cannot rejoin at all during the load test (once the test is complete they can join again).
That being said, it is not as bad as before. While the previous 5-minute test at 950 users on this level of hardware dropped a participant 25 times, now it is mostly just my 2 laptops and not the simulated users, and about 5-10 drops during the test. But still not right.
I tried cranking up /etc/prosody/prosody.cfg.lua:
limits = {
    c2s = {
        rate = "512kb/s";
    };
    s2sin = {
        rate = "30kb/s";
    };
}

to 768 and even 1024 to see if that would help…
Still 5 to 10 drops in the 5-minute test, and video is blurry and choppy (though text is still readable).
No noticeable improvement between 512, 768, and 1024.

Additional tweak suggestions? This is still the vanilla install from the prosody upgrade thread; I’m trying to be very cautious about making any mods that aren’t clearly called for as we go through this troubleshooting process, to keep the confounding variables minimal.

The logfile I captured during this is a bit long, so I uploaded it as a text file rather than pasting it here. I also have a much larger text file of the logs for the entire 5-minute load test, but it is about 30MB; if you want me to zip and attach it for more info, let me know. Hopefully there is enough information in here to indicate what the issue may be.
Thanks!

950-fail-20210525i.txt (227.4 KB)

Thank you for any other suggestions I can try.

Should I now try the other suggestions you made for the epoll, websockets, and other tweaks? Something else?
My config files are exactly the same as the last ones in the upgrade prosody thread we just did.
But I can repost them here if that would help.

Yep, go with it. If I remember correctly, the select method had a limit of 1024 file descriptors.

Make sure the file limits of prosody user and nginx are high enough.
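For example (a sketch, assuming prosody and nginx run under systemd on Ubuntu 20.04): the prlimit output earlier shows a soft NOFILE of 1024, which roughly 950 connections would exhaust. An override such as:

```
# systemctl edit prosody
# (writes /etc/systemd/system/prosody.service.d/override.conf)
[Service]
LimitNOFILE=65000
```

then `systemctl daemon-reload && systemctl restart prosody`; the same override works for nginx.service. Verify with `prlimit --pid "$(systemctl show -p MainPID --value prosody)"`.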

And the nginx workers are also bumped.
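A sketch of the relevant nginx knobs (the values are starting points under the assumption of ~1000 proxied websocket clients, not tuned recommendations):

```
# /etc/nginx/nginx.conf
worker_processes auto;          # one worker per core
worker_rlimit_nofile 65000;     # nginx's own fd ceiling; each proxied
                                #  websocket holds 2 fds (client + upstream)
events {
    worker_connections 10000;   # Ubuntu's default 768 caps concurrent clients
}
```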

You mentioned blurry video? Are you using one bridge for all those participants? How many participants have video on, and what is the conference distribution?

For this initial baseline: AWS m5.4xl, all components all-in-one on a single instance.
Trying to get to at least a 950 simulated-user load test (ultimately trying to get as close to 5k as possible, if AWS raises our current spot instance quota from 1k to 5k as per our request).
Using terraform, selenium, chromedriver, AWS ECS Fargate, and Malleus Jitsificus to simulate all users (except for the 2 I am connecting as to 1 room):

  • 10 participants per room.
  • 1 participant per room (of those 10), is sending video (only, no audio for this baseline).
  • I am connected with 2 laptops (one Windows with Chrome, the other Linux with Chrome), both sending audio and video to each other, while I also watch the other single sender’s video quality; this has been helpful for gauging how things are behaving overall.
    Up to 600 users it is smooth as can be. No dropping, and video is crystal clear with no lag, glitches, etc.

I am now fiddling with the other settings to see if anything improves the behavior. So far, raising the prosody limit from the 512 you suggested to 768 and 1024 made no difference, but that was without the other settings changes. I am now going through the list you suggested (epoll, etc.) to see if any combination makes it better, and will post anything notable here.
Any other information you would find handy for me to share, just ask and I will gladly provide.
Cheers!

1 Like

Okay, so far with epoll turned on, websockets configured, proxy buffers all increased to 512k, prosody to 512k, and smacks enabled (but not sharding yet), what I am seeing is that nobody is dropping, but simulated users join only slowly, so that in a 5-minute test I am only up to 7 participants per room (goal is 10) by the time the load test completes. Also, I never see the test video-sending user appear, so I can’t judge stream quality.

I increased the test to 10 minutes to see if that helps, but making users wait minutes before joining is definitely not workable in the real world. It was weirdly worse: only 2-4 “users” joined. The test took longer and had fewer join. ???

What does this result indicate? What can be done to hold on to the users without dropping them, while also not delaying them many minutes before they can join a room?

Unless I missed something, the only change I didn’t make from that websockets page is the sharding. When I turned on the sharding headers, I couldn’t even keep my two laptops connected at the same time for more than a minute or so:


==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:14:44.758 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/3237c960, jid: null]@2126203043
Jicofo 2021-05-26 00:14:45.480 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/3237c960, jid: null]@2126203043

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:14:46.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:14:47.236 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab, jid: cwvuaqbfldxozbch@lmtgt1.dev2dev.net/A4vloZWA]@383623868
Jicofo 2021-05-26 00:14:50.485 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab, jid: cwvuaqbfldxozbch@lmtgt1.dev2dev.net/A4vloZWA]@383623868

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:14:56.246 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-26 00:15:06.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false
JVB 2021-05-26 00:15:16.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000006S. Sticky failure: false
JVB 2021-05-26 00:15:26.229 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-26 00:15:26.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false
JVB 2021-05-26 00:15:36.246 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:15:39.377 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event Left member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab, jid: cwvuaqbfldxozbch@lmtgt1.dev2dev.net/A4vloZWA]@383623868
Jicofo 2021-05-26 00:15:39.377 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#150: Owner has left the room !
Jicofo 2021-05-26 00:15:39.379 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.electNewOwner#224: Granted owner to 3237c960
Jicofo 2021-05-26 00:15:39.379 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onMemberLeft#1093: Member left:bf35a4ab
Jicofo 2021-05-26 00:15:39.380 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.terminateParticipant#1130: Terminating bf35a4ab, reason: gone, send session-terminate: false
Jicofo 2021-05-26 00:15:39.380 INFO: [36] AbstractOperationSetJingle.terminateSession#509: Terminate session: loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab, reason: gone, send terminate: false
Jicofo 2021-05-26 00:15:39.383 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.removeSources#1835: Removing sources from loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab: Sources{ audio: [ssrc=250592815 ] video: [ssrc=72352347 ssrc=405876295 ssrc=1572488122 ssrc=3532505715 ssrc=3667587789 ssrc=3790443175 ] }@560601214
Jicofo 2021-05-26 00:15:39.383 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.terminateParticipant#1155: Removed participant bf35a4ab removed=true
Jicofo 2021-05-26 00:15:39.384 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl$BridgeSession.terminate#2675: Expiring channels for: loadtest0@conference.lmtgt1.dev2dev.net/bf35a4ab on: Bridge[jid=jvbbrewery@internal.auth.lmtgt1.dev2dev.net/f40e5aa1-1094-4e41-b913-b473039e6de7, relayId=null, region=null, stress=0.00]
Jicofo 2021-05-26 00:15:39.386 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.rescheduleSingleParticipantTimeout#2418: Scheduled single person timeout.

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:15:39.386 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] AbstractEndpoint.expire#271: Expiring.

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:15:39.386 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/3237c960, jid: ilqagllzcjquaiu_@lmtgt1.dev2dev.net/aS6WoCID]@2126203043

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:15:39.387 INFO: [80] [confId=422c79cfc9656cf7 gid=12581 conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Conference.dominantSpeakerChanged#422: ds_change ds_id=3237c960
JVB 2021-05-26 00:15:39.387 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Endpoint.expire#996: Spent 0 seconds oversending
JVB 2021-05-26 00:15:39.387 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Transceiver.teardown#324: Tearing down
JVB 2021-05-26 00:15:39.387 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] RtpReceiverImpl.tearDown#339: Tearing down
JVB 2021-05-26 00:15:39.390 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] RtpSenderImpl.tearDown#311: Tearing down
JVB 2021-05-26 00:15:39.391 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] DtlsTransport.stop#186: Stopping
JVB 2021-05-26 00:15:39.391 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] IceTransport.stop#237: Stopping
JVB 2021-05-26 00:15:39.392 INFO: [79] [confId=422c79cfc9656cf7 gid=12581 stats_id=Clifford-AuR componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7kvpn1f6j0b7ab name=stream-bf35a4ab epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab] MergingDatagramSocket$SocketContainer.runInReaderThread#770: Failed to receive: java.net.SocketException: Socket closed
JVB 2021-05-26 00:15:39.392 WARNING: [79] [confId=422c79cfc9656cf7 gid=12581 stats_id=Clifford-AuR componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7kvpn1f6j0b7ab name=stream-bf35a4ab epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab] MergingDatagramSocket.doRemove#349: Removing the active socket. Won't be able to send until a new one is elected.
JVB 2021-05-26 00:15:39.393 INFO: [64] [confId=422c79cfc9656cf7 gid=12581 stats_id=Clifford-AuR componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7kvpn1f6j0b7ab name=stream-bf35a4ab epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab] MergingDatagramSocket.close#142: Closing.
JVB 2021-05-26 00:15:39.393 INFO: [63] [confId=422c79cfc9656cf7 epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] IceTransport.startReadingData#203: Socket closed, stopping reader
JVB 2021-05-26 00:15:39.394 INFO: [63] [confId=422c79cfc9656cf7 epId=bf35a4ab local_ufrag=7kvpn1f6j0b7ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] IceTransport.startReadingData#215: No longer running, stopped reading packets
JVB 2021-05-26 00:15:39.394 INFO: [64] [confId=422c79cfc9656cf7 epId=bf35a4ab gid=12581 stats_id=Clifford-AuR conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Endpoint.expire#1014: Expired.
JVB 2021-05-26 00:15:39.625 INFO: [64] [confId=422c79cfc9656cf7 gid=12581 conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Conference.dominantSpeakerChanged#422: ds_change ds_id=3237c960
JVB 2021-05-26 00:15:46.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false
JVB 2021-05-26 00:15:56.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000004S. Sticky failure: false

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:15:59.386 INFO: [50] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl$SinglePersonTimeout.run#2896: Timing out single participant: 3237c960
Jicofo 2021-05-26 00:15:59.386 INFO: [50] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.terminateParticipant#1130: Terminating 3237c960, reason: expired, send session-terminate: true
Jicofo 2021-05-26 00:15:59.387 INFO: [50] AbstractOperationSetJingle.terminateSession#509: Terminate session: loadtest0@conference.lmtgt1.dev2dev.net/3237c960, reason: expired, send terminate: true
Jicofo 2021-05-26 00:15:59.389 INFO: [50] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.removeSources#1835: Removing sources from loadtest0@conference.lmtgt1.dev2dev.net/3237c960: Sources{ audio: [ssrc=3308431027 ] video: [ssrc=10207771 ssrc=1070606006 ssrc=1110736298 ssrc=2504944182 ssrc=2978667942 ssrc=3202633524 ] }@497047123
Jicofo 2021-05-26 00:15:59.389 INFO: [50] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.terminateParticipant#1155: Removed participant 3237c960 removed=true
Jicofo 2021-05-26 00:15:59.389 INFO: [50] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl$BridgeSession.terminate#2675: Expiring channels for: loadtest0@conference.lmtgt1.dev2dev.net/3237c960 on: Bridge[jid=jvbbrewery@internal.auth.lmtgt1.dev2dev.net/f40e5aa1-1094-4e41-b913-b473039e6de7, relayId=null, region=null, stress=0.01]

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:15:59.434 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] AbstractEndpoint.expire#271: Expiring.
JVB 2021-05-26 00:15:59.434 INFO: [63] [confId=422c79cfc9656cf7 gid=12581 conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Conference.dominantSpeakerChanged#422: ds_change ds_id=null
JVB 2021-05-26 00:15:59.435 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Endpoint.expire#996: Spent 0 seconds oversending
JVB 2021-05-26 00:15:59.435 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Transceiver.teardown#324: Tearing down
JVB 2021-05-26 00:15:59.435 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] RtpReceiverImpl.tearDown#339: Tearing down
JVB 2021-05-26 00:15:59.436 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] RtpSenderImpl.tearDown#311: Tearing down
JVB 2021-05-26 00:15:59.437 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] DtlsTransport.stop#186: Stopping
JVB 2021-05-26 00:15:59.437 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] IceTransport.stop#237: Stopping
JVB 2021-05-26 00:15:59.437 INFO: [59] [confId=422c79cfc9656cf7 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] IceTransport.startReadingData#215: No longer running, stopped reading packets
JVB 2021-05-26 00:15:59.438 INFO: [68] [confId=422c79cfc9656cf7 gid=12581 stats_id=Willy-zkk componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7u4tp1f6j0b7vc name=stream-3237c960 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc] MergingDatagramSocket$SocketContainer.runInReaderThread#770: Failed to receive: java.net.SocketException: Socket closed
JVB 2021-05-26 00:15:59.439 WARNING: [68] [confId=422c79cfc9656cf7 gid=12581 stats_id=Willy-zkk componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7u4tp1f6j0b7vc name=stream-3237c960 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc] MergingDatagramSocket.doRemove#349: Removing the active socket. Won't be able to send until a new one is elected.
JVB 2021-05-26 00:15:59.439 INFO: [64] [confId=422c79cfc9656cf7 gid=12581 stats_id=Willy-zkk componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7u4tp1f6j0b7vc name=stream-3237c960 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc] MergingDatagramSocket.close#142: Closing.
JVB 2021-05-26 00:15:59.439 INFO: [64] [confId=422c79cfc9656cf7 epId=3237c960 gid=12581 stats_id=Willy-zkk conf_name=loadtest0@conference.lmtgt1.dev2dev.net] Endpoint.expire#1014: Expired.
JVB 2021-05-26 00:15:59.439 INFO: [71] [confId=422c79cfc9656cf7 gid=12581 stats_id=Willy-zkk componentId=1 conf_name=loadtest0@conference.lmtgt1.dev2dev.net ufrag=7u4tp1f6j0b7vc name=stream-3237c960 epId=3237c960 local_ufrag=7u4tp1f6j0b7vc] MergingDatagramSocket$SocketContainer.runInReaderThread#770: Failed to receive: java.net.SocketException: Socket closed
JVB 2021-05-26 00:16:06.245 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-26 00:16:10.918 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event Left member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/3237c960, jid: ilqagllzcjquaiu_@lmtgt1.dev2dev.net/aS6WoCID]@2126203043
Jicofo 2021-05-26 00:16:10.918 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#150: Owner has left the room !
Jicofo 2021-05-26 00:16:10.919 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onMemberLeft#1093: Member left:3237c960
Jicofo 2021-05-26 00:16:10.919 WARNING: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onMemberLeft#1108: Participant not found for 3237c960. Terminated already or never started?
Jicofo 2021-05-26 00:16:10.920 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.stop#428: Stopped.
Jicofo 2021-05-26 00:16:11.656 INFO: [62] ConferenceIqHandler.handleConferenceIq#56: Focus request for room: loadtest0@conference.lmtgt1.dev2dev.net
Jicofo 2021-05-26 00:16:11.656 INFO: [62] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.<init>#268: Created new conference, roomJid=loadtest0@conference.lmtgt1.dev2dev.net
Jicofo 2021-05-26 00:16:11.657 INFO: [62] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.joinTheRoom#451: Joining loadtest0@conference.lmtgt1.dev2dev.net
Jicofo 2021-05-26 00:16:11.763 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event Joined member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/dad77acb, jid: null]@644116578
Jicofo 2021-05-26 00:16:11.763 WARNING: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.electNewOwner#177: Focus role unknown
Jicofo 2021-05-26 00:16:11.763 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.electNewOwner#181: Obtained focus role: OWNER
Jicofo 2021-05-26 00:16:11.765 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.electNewOwner#224: Granted owner to dad77acb
Jicofo 2021-05-26 00:16:11.766 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onMemberJoined#561: Member joined:dad77acb
Jicofo 2021-05-26 00:16:11.766 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/dad77acb, jid: lwprxdqggwvp3sor@lmtgt1.dev2dev.net/ipB5-VLr]@644116578
Jicofo 2021-05-26 00:16:11.766 INFO: [36] [room=loadtest0@conference.lmtgt1.dev2dev.net] ChatRoomRoleAndPresence.memberPresenceChanged#130: Chat room event PresenceUpdated member=ChatMember[loadtest0@conference.lmtgt1.dev2dev.net/dad77acb, jid: lwprxdqggwvp3sor@lmtgt1.dev2dev.net/ipB5-VLr]@644116578

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-26 00:16:16.246 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000005S. Sticky failure: false
^C

Here is the section of my nginx config for the sharding (which I don't understand why I would be using with a single server; isn't this just for when you are clustering/load-balancing multiple servers?):

    # xmpp websockets
    location = /xmpp-websocket {
        proxy_pass http://127.0.0.1:5280/xmpp-websocket?prefix=$prefix&$args; # might want to try with only /xmpp-websocket after the url?
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        tcp_nodelay on;
        # following added by Hawke as per damencho suggestions 20210525L
        # #shard & region that matches config.deploymentInfo.shard/region -  See [note 1] below
        add_header 'x-jitsi-shard' 'shard';
        add_header 'x-jitsi-region' 'us-east-2c';
        add_header 'Access-Control-Expose-Headers' 'X-Jitsi-Shard, X-Jitsi-Region';
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_buffer_size 512k; # was 128k, trying 512k. Should I match this to the 512kb/s limit in Prosody? Will start here, then increase.
        proxy_buffers 4 512k;   # was 4 256k, trying 4 512k. Only 4 automated clients join at a time; does the 4 here matter? Also tried 10 512k: neither setting dropped users, but it took a long time before more were allowed to join. That is throttling, which is not helpful for a load test, since the test now has to run longer to reach peak.
        proxy_busy_buffers_size  512k; # was 256k, trying 512k
    }

/etc/jitsi/meet/lmtgt1.dev2dev.net-config.js
    // Information about the jitsi-meet instance we are connecting to, including
    // the user region as seen by the server.
    deploymentInfo: {
        shard: "shard1",
        region: "us-east-2c",
        userRegion: "northamerica"
    },
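One detail worth double-checking before giving up on the headers: the value nginx returns has to match config.deploymentInfo exactly, and the websocket block above sends 'shard' while config.js declares shard: "shard1". If the client compares the two (which, as I understand damencho's suggestion, is the point of exposing the headers), the mismatch could itself look like a shard change and provoke reconnects. A corrected sketch, assuming the names from my own config above:

```nginx
# Inside the /xmpp-websocket location: header values must match
# config.deploymentInfo in lmtgt1.dev2dev.net-config.js exactly.
add_header 'x-jitsi-shard' 'shard1';       # was 'shard', but config.js says "shard1"
add_header 'x-jitsi-region' 'us-east-2c';
add_header 'Access-Control-Expose-Headers' 'X-Jitsi-Shard, X-Jitsi-Region';
```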


So I commented out the sharding headers, since they completely broke things.

And while users no longer drop, I can't actually get more than a few to connect at a time, so I never get near the 950 mark.

Here are the logs and config files for those attempts.

The logs look like this:
nference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic] Agent.gatherCandidates#622: Gathering candidates for component stream-94062499.RTP.
JVB 2021-05-25 23:55:05.678 INFO: [6936] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint.setTransportInfo#662: Ignoring empty DtlsFingerprint extension: <transport xmlns='urn:xmpp:jingle:transports:ice-udp:1'><fingerprint xmlns='urn:xmpp:jingle:apps:dtls:0' required='false'/></transport>

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-25 23:55:05.680 INFO: [464] [room=loadtest15@conference.lmtgt1.dev2dev.net participant=94062499] ParticipantChannelAllocator.doInviteOrReinvite#218: Sending session-initiate to: loadtest15@conference.lmtgt1.dev2dev.net/94062499

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-25 23:55:06.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false

==> /var/log/jitsi/jicofo.log <==
Jicofo 2021-05-25 23:55:07.176 INFO: [36] [room=loadtest15@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onSessionAccept#1277: Receive session-accept from loadtest15@conference.lmtgt1.dev2dev.net/94062499
Jicofo 2021-05-25 23:55:07.176 INFO: [36] [room=loadtest15@conference.lmtgt1.dev2dev.net] JitsiMeetConferenceImpl.onSessionAcceptInternal#1693: Received session-accept from 94062499 with accepted sources:Sources{ }@1440664679

==> /var/log/jitsi/jvb.log <==
JVB 2021-05-25 23:55:07.178 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] DtlsTransport.setSetupAttribute#120: The remote side is acting as DTLS client, we'll act as server
JVB 2021-05-25 23:55:07.178 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 local_ufrag=28j981f6iv83ic gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] IceTransport.startConnectivityEstablishment#184: Starting the Agent without remote candidates.
JVB 2021-05-25 23:55:07.178 INFO: [6956] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.startConnectivityEstablishment#713: Start ICE connectivity establishment.
JVB 2021-05-25 23:55:07.178 INFO: [6956] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.initCheckLists#949: Init checklist for stream stream-94062499
JVB 2021-05-25 23:55:07.178 INFO: [6956] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.setState#923: ICE state changed from Waiting to Running.
JVB 2021-05-25 23:55:07.179 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 local_ufrag=28j981f6iv83ic gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] IceTransport.iceStateChanged#323: ICE state changed old=Waiting new=Running
JVB 2021-05-25 23:55:07.179 INFO: [6956] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.startChecks#142: Start connectivity checks.
JVB 2021-05-25 23:55:07.271 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.triggerCheck#1714: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7926369428998979583):
	LocalCandidate=candidate:1 1 udp 2130706431 172.31.249.111 10000 typ host
	RemoteCandidate=candidate:10000 1 udp 1845501695 18.218.150.174 42332 typ prflx
JVB 2021-05-25 23:55:07.280 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#630: Pair succeeded: 172.31.249.111:10000/udp/host -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.280 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW componentId=1 conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic name=stream-94062499 epId=94062499 local_ufrag=28j981f6iv83ic] ComponentSocket.addAuthorizedAddress#99: Adding allowed address: 18.218.150.174:42332/udp
JVB 2021-05-25 23:55:07.281 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#639: Pair validated: 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.281 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] DefaultNominator.strategyNominateFirstValid#142: Nominate (first valid): 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.281 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.nominate#1787: verify if nominated pair answer again
JVB 2021-05-25 23:55:07.281 WARNING: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW componentId=1 conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic name=stream-94062499 epId=94062499 local_ufrag=28j981f6iv83ic] MergingDatagramSocket.initializeActive#599: Active socket already initialized.
JVB 2021-05-25 23:55:07.281 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#708: IsControlling: true USE-CANDIDATE:false.
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#630: Pair succeeded: 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#639: Pair validated: 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#708: IsControlling: true USE-CANDIDATE:true.
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] ConnectivityCheckClient.processSuccessResponse#723: Nomination confirmed for pair: 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP).
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic name=stream-94062499 epId=94062499 local_ufrag=28j981f6iv83ic] CheckList.handleNominationConfirmed#406: Selected pair for stream stream-94062499.RTP: 3.18.144.164:10000/udp/srflx -> 18.218.150.174:42332/udp/prflx (stream-94062499.RTP)
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.checkListStatesUpdated#1878: CheckList of stream stream-94062499 is COMPLETED
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.setState#923: ICE state changed from Running to Completed.
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed epId=94062499 local_ufrag=28j981f6iv83ic gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] IceTransport.iceStateChanged#323: ICE state changed old=Running new=Completed
JVB 2021-05-25 23:55:07.292 INFO: [5519] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$setupIceTransport$2.connected#303: ICE connected
JVB 2021-05-25 23:55:07.293 INFO: [6974] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] DtlsTransport.startDtlsHandshake#102: Starting DTLS handshake
JVB 2021-05-25 23:55:07.293 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.logCandTypes#1986: Harvester used for selected pair for stream-94062499.RTP: srflx
JVB 2021-05-25 23:55:07.293 INFO: [6974] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] TlsServerImpl.notifyClientVersion#191: Negotiated DTLS version DTLS 1.2
JVB 2021-05-25 23:55:07.295 INFO: [6974] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$setupDtlsTransport$3.handshakeComplete#341: DTLS handshake complete
JVB 2021-05-25 23:55:07.296 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$acceptSctpConnection$1.run#578: Attempting to establish SCTP socket connection
Got sctp association state update: 1
sctp is now up.  was ready? false
JVB 2021-05-25 23:55:07.341 SEVERE: [28] [confId=1b79e0175bb4f2b3 epId=31cd55fa gid=20384 stats_id=Jamel-O6B conf_name=loadtest10@conference.lmtgt1.dev2dev.net] Endpoint$scheduleEndpointMessageTransportTimeout$1.run#605: EndpointMessageTransport still not connected.
JVB 2021-05-25 23:55:07.396 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$createSctpConnection$3.onReady#526: SCTP connection is ready, creating the Data channel stack
JVB 2021-05-25 23:55:07.396 INFO: [6956] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$createSctpConnection$3.onReady#550: Will wait for the remote side to open the data channel.
JVB 2021-05-25 23:55:07.426 INFO: [5519] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW componentId=1 conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic name=stream-94062499 epId=94062499 local_ufrag=28j981f6iv83ic] ComponentSocket.addAuthorizedAddress#99: Adding allowed address: 3.18.144.164:57118/udp
JVB 2021-05-25 23:55:09.489 SEVERE: [28] [confId=ceceda7989d027db epId=3f0a1f1a gid=13389 stats_id=Delia-BvD conf_name=loadtest62@conference.lmtgt1.dev2dev.net] Endpoint$scheduleEndpointMessageTransportTimeout$1.run#605: EndpointMessageTransport still not connected.
JVB 2021-05-25 23:55:10.293 INFO: [64] [confId=705f6c51cdfd21ed gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net ufrag=28j981f6iv83ic epId=94062499 local_ufrag=28j981f6iv83ic] Agent.setState#923: ICE state changed from Completed to Terminated.
JVB 2021-05-25 23:55:10.293 INFO: [64] [confId=705f6c51cdfd21ed epId=94062499 local_ufrag=28j981f6iv83ic gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] IceTransport.iceStateChanged#323: ICE state changed old=Completed new=Terminated
JVB 2021-05-25 23:55:15.735 SEVERE: [28] [confId=cc82a5a5b487c485 epId=bbd0b298 gid=2941 stats_id=Paris-LM4 conf_name=loadtest30@conference.lmtgt1.dev2dev.net] Endpoint$scheduleEndpointMessageTransportTimeout$1.run#605: EndpointMessageTransport still not connected.
JVB 2021-05-25 23:55:16.439 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000011S. Sticky failure: false
JVB 2021-05-25 23:55:26.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:55:36.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:55:37.296 SEVERE: [28] [confId=705f6c51cdfd21ed epId=94062499 gid=23846 stats_id=Edward-AyW conf_name=loadtest15@conference.lmtgt1.dev2dev.net] Endpoint$scheduleEndpointMessageTransportTimeout$1.run#605: EndpointMessageTransport still not connected.
JVB 2021-05-25 23:55:46.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:55:56.416 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-25 23:55:56.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:56:06.432 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:56:16.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:56:26.432 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000006S. Sticky failure: false
JVB 2021-05-25 23:56:36.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:56:46.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:56:56.416 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-25 23:56:56.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:57:06.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000006S. Sticky failure: false
JVB 2021-05-25 23:57:16.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000009S. Sticky failure: false
JVB 2021-05-25 23:57:26.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:57:36.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:57:46.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:57:56.416 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-25 23:57:56.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000009S. Sticky failure: false
JVB 2021-05-25 23:58:06.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000009S. Sticky failure: false
JVB 2021-05-25 23:58:16.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:58:26.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:58:36.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000018S. Sticky failure: false
JVB 2021-05-25 23:58:46.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:58:56.416 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-25 23:58:56.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:59:06.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
JVB 2021-05-25 23:59:16.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000009S. Sticky failure: false
JVB 2021-05-25 23:59:26.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000008S. Sticky failure: false
JVB 2021-05-25 23:59:36.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000009S. Sticky failure: false
JVB 2021-05-25 23:59:46.443 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000006S. Sticky failure: false
JVB 2021-05-25 23:59:56.416 INFO: [32] VideobridgeExpireThread.expire#140: Running expire()
JVB 2021-05-25 23:59:56.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
tail: /var/log/jitsi/jvb.log: file truncated
JVB 2021-05-26 00:00:06.431 INFO: [33] HealthChecker.run#171: Performed a successful health check in PT0.000007S. Sticky failure: false
^C

Here is what config files look like now:

 cat /etc/nginx/sites-available/lmtgt1.dev2dev.net.conf
server_names_hash_bucket_size 64;

types {
# nginx's default mime.types doesn't include a mapping for wasm
    application/wasm     wasm;
}
server {
    listen 80;
    listen [::]:80;
    server_name lmtgt1.dev2dev.net;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root         /usr/share/jitsi-meet;
    }
    location = /.well-known/acme-challenge/ {
        return 404;
    }
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name lmtgt1.dev2dev.net;

    # Mozilla Guideline v5.4, nginx 1.17.7, OpenSSL 1.1.1d, intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;  # about 40000 sessions
    ssl_session_tickets off;

    add_header Strict-Transport-Security "max-age=63072000" always;

    ssl_certificate /etc/letsencrypt/live/lmtgt1.dev2dev.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lmtgt1.dev2dev.net/privkey.pem;

    root /usr/share/jitsi-meet;

    # ssi on with javascript for multidomain variables in config.js
    ssi on;
    ssi_types application/x-javascript application/javascript;

    index index.html index.htm;
    error_page 404 /static/404.html;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json image/x-icon application/octet-stream application/wasm;
    gzip_vary on;
    gzip_proxied no-cache no-store private expired auth;
    gzip_min_length 512;

    location = /config.js {
        alias /etc/jitsi/meet/lmtgt1.dev2dev.net-config.js;
    }

    location = /external_api.js {
        alias /usr/share/jitsi-meet/libs/external_api.min.js;
    }

    # ensure all static content can always be found first
    location ~ ^/(libs|css|static|images|fonts|lang|sounds|connection_optimization|.well-known)/(.*)$
    {
        add_header 'Access-Control-Allow-Origin' '*';
        alias /usr/share/jitsi-meet/$1/$2;

        # cache all versioned files
        if ($arg_v) {
            expires 1y;
        }
    }

    # BOSH
    location = /http-bind {
        proxy_pass       http://localhost:5280/http-bind;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
    }

    # xmpp websockets
    location = /xmpp-websocket {
        proxy_pass http://127.0.0.1:5280/xmpp-websocket?prefix=$prefix&$args; # might want to try with only /xmpp-websocket after the url?
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        tcp_nodelay on;
	# following added by Hawke as per damencho suggestions 20210525L
	# #shard & region that matches config.deploymentInfo.shard/region -  See [note 1] below
        #add_header 'x-jitsi-shard' 'shard';
        #add_header 'x-jitsi-region' 'us-east-2a';
        #add_header 'Access-Control-Expose-Headers' 'X-Jitsi-Shard, X-Jitsi-Region';
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_buffer_size 512k; # was 128k, trying 512k. Should I match this to the 512kb/s limit in Prosody? Will start here, then increase.
        proxy_buffers 4 512k;   # was 4 256k, trying 4 512k. Only 4 automated clients join at a time; does the 4 here matter? Also tried 10 512k: neither setting dropped users, but it took a long time before more were allowed to join. That is throttling, which is not helpful for a load test, since the test now has to run longer to reach peak.
        proxy_busy_buffers_size  512k; # was 256k, trying 512k
    }

    # colibri (JVB) websockets for jvb1
    location ~ ^/colibri-ws/default-id/(.*) {
        proxy_pass http://127.0.0.1:9090/colibri-ws/default-id/$1$is_args$args;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        tcp_nodelay on;
    }

    # load test minimal client, uncomment when used
    #location ~ ^/_load-test/([^/?&:'"]+)$ {
    #    rewrite ^/_load-test/(.*)$ /load-test/index.html break;
    #}
    #location ~ ^/_load-test/libs/(.*)$ {
    #    add_header 'Access-Control-Allow-Origin' '*';
    #    alias /usr/share/jitsi-meet/load-test/libs/$1;
    #}

    location ~ ^/([^/?&:'"]+)$ {
        try_files $uri @root_path;
    }

    location @root_path {
        rewrite ^/(.*)$ / break;
    }

    location ~ ^/([^/?&:'"]+)/config.js$
    {
        set $subdomain "$1.";
        set $subdir "$1/";

        alias /etc/jitsi/meet/lmtgt1.dev2dev.net-config.js;
    }

    # Anything that didn't match above, and isn't a real file, assume it's a room name and redirect to /
    location ~ ^/([^/?&:'"]+)/(.*)$ {
        set $subdomain "$1.";
        set $subdir "$1/";
        rewrite ^/([^/?&:'"]+)/(.*)$ /$2;
    }

    # BOSH for subdomains
    location ~ ^/([^/?&:'"]+)/http-bind {
        set $subdomain "$1.";
        set $subdir "$1/";
        set $prefix "$1";

        rewrite ^/(.*)$ /http-bind;
    }

    # websockets for subdomains
    location ~ ^/([^/?&:'"]+)/xmpp-websocket {
        set $subdomain "$1.";
        set $subdir "$1/";
        set $prefix "$1";

        rewrite ^/(.*)$ /xmpp-websocket;
    }
}
 cat /etc/prosody/prosody.cfg.lua 
-- Prosody XMPP Server Configuration
--
-- Information on configuring Prosody can be found on our
-- website at https://prosody.im/doc/configure
--
-- Tip: You can check that the syntax of this file is correct
-- when you have finished by running this command:
--     prosodyctl check config
-- If there are any errors, it will let you know what and where
-- they are, otherwise it will keep quiet.
--
-- Good luck, and happy Jabbering!


---------- Server-wide settings ----------
-- Settings in this section apply to the whole server and are the default settings
-- for any virtual hosts

-- This is a (by default, empty) list of accounts that are admins
-- for the server. Note that you must create the accounts separately
-- (see https://prosody.im/doc/creating_accounts for info)
-- Example: admins = { "user1@example.com", "user2@example.net" }
admins = { }

-- Enable use of libevent for better performance under high load
-- For more information see: https://prosody.im/doc/libevent
--use_libevent = true


-- epoll added by Hawke as per jitsi dev damencho
network_backend = "epoll"

-- Prosody will always look in its source directory for modules, but
-- this option allows you to specify additional locations where Prosody
-- will look for modules first. For community modules, see https://modules.prosody.im/
--plugin_paths = {}

-- This is the list of modules Prosody will load on startup.
-- It looks for mod_modulename.lua in the plugins folder, so make sure that exists too.
-- Documentation for bundled modules can be found at: https://prosody.im/doc/modules
modules_enabled = {

	-- Generally required
		"roster"; -- Allow users to have a roster. Recommended ;)
		"saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
		"tls"; -- Add support for secure TLS on c2s/s2s connections
		"dialback"; -- s2s dialback support
		"disco"; -- Service discovery

	-- Not essential, but recommended
		"carbons"; -- Keep multiple clients in sync
		"pep"; -- Enables users to publish their avatar, mood, activity, playing music and more
		"private"; -- Private XML storage (for room bookmarks, etc.)
		"blocklist"; -- Allow users to block communications with other users
		"vcard4"; -- User profiles (stored in PEP)
		"vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
		"limits"; -- Enable bandwidth limiting for XMPP connections

	-- Nice to have
		"version"; -- Replies to server version requests
		"uptime"; -- Report how long server has been running
		"time"; -- Let others know the time here on this server
		"ping"; -- Replies to XMPP pings with pongs
		"register"; -- Allow users to register on this server using a client and change passwords
		--"mam"; -- Store messages in an archive and allow users to access it
		--"csi_simple"; -- Simple Mobile optimizations

	-- Admin interfaces
		"admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
		--"admin_telnet"; -- Opens telnet console interface on localhost port 5582

	-- HTTP modules
		--"bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
		--"websocket"; -- XMPP over WebSockets
		--"http_files"; -- Serve static files from a directory over HTTP

	-- Other specific functionality
		--"groups"; -- Shared roster support
		--"server_contact_info"; -- Publish contact information for this service
		--"announce"; -- Send announcement to all online users
		--"welcome"; -- Welcome users who register accounts
		--"watchregistrations"; -- Alert admins of registrations
		--"motd"; -- Send a message to users when they log in
		--"legacyauth"; -- Legacy authentication. Only used by some old clients and bots.
		--"proxy65"; -- Enables a file transfer proxy service which clients behind NAT can use
}

-- These modules are auto-loaded, but should you want
-- to disable them then uncomment them here:
modules_disabled = {
	-- "offline"; -- Store offline messages
	-- "c2s"; -- Handle client connections
	-- "s2s"; -- Handle server-to-server connections
	-- "posix"; -- POSIX functionality, sends server to background, enables syslog, etc.
}

-- Disable account creation by default, for security
-- For more information see https://prosody.im/doc/creating_accounts
allow_registration = false

-- Force clients to use encrypted connections? This option will
-- prevent clients from authenticating unless they are using encryption.

c2s_require_encryption = true

-- Force servers to use encrypted connections? This option will
-- prevent servers from authenticating unless they are using encryption.

s2s_require_encryption = true

-- Force certificate authentication for server-to-server connections?

s2s_secure_auth = false

-- Some servers have invalid or self-signed certificates. You can list
-- remote domains here that will not be required to authenticate using
-- certificates. They will be authenticated using DNS instead, even
-- when s2s_secure_auth is enabled.

--s2s_insecure_domains = { "insecure.example" }

-- Even if you disable s2s_secure_auth, you can still require valid
-- certificates for some domains by specifying a list here.

--s2s_secure_domains = { "jabber.org" }

-- Enable rate limits for incoming client and server connections

limits = {
  c2s = {
    rate = "512kb/s"; -- as per damencho's recommendation, Hawke changed this from 10kb/s to 512kb/s 20210525a
  };
  s2sin = {
    rate = "30kb/s";
  };
}
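Related to the c2s rate above: Jicofo and the JVB also log in as XMPP clients, so the 512kb/s cap applies to them too, and throttling the signalling path can stall joins for everyone. Newer Prosody builds let you exempt specific JIDs from mod_limits; a sketch, assuming the stock jitsi-meet service-account names on this deployment's auth domain (the JIDs are my guess — check your actual accounts, and whether your Prosody version honors the option):

```lua
-- Exempt internal service accounts from the c2s rate limit so that
-- Jicofo/JVB signalling is never throttled. The JID names below are
-- the stock jitsi-meet ones and are an assumption; adjust to your setup.
unlimited_jids = {
    "focus@auth.lmtgt1.dev2dev.net";
    "jvb@auth.lmtgt1.dev2dev.net";
}
```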

-- Required for init scripts and prosodyctl
pidfile = "/var/run/prosody/prosody.pid"

-- Select the authentication backend to use. The 'internal' providers
-- use Prosody's configured data storage to store the authentication data.

authentication = "internal_hashed"

-- Select the storage backend to use. By default Prosody uses flat files
-- in its configured data directory, but it also supports more backends
-- through modules. An "sql" backend is included by default, but requires
-- additional dependencies. See https://prosody.im/doc/storage for more info.

--storage = "sql" -- Default is "internal"

-- For the "sql" backend, you can uncomment *one* of the below to configure:
--sql = { driver = "SQLite3", database = "prosody.sqlite" } -- Default. 'database' is the filename.
--sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" }
--sql = { driver = "PostgreSQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" }


-- Archiving configuration
-- If mod_mam is enabled, Prosody will store a copy of every message. This
-- is used to synchronize conversations between multiple clients, even if
-- they are offline. This setting controls how long Prosody will keep
-- messages in the archive before removing them.

archive_expires_after = "1w" -- Remove archived messages after 1 week

-- You can also configure messages to be stored in-memory only. For more
-- archiving options, see https://prosody.im/doc/modules/mod_mam

-- Logging configuration
-- For advanced logging see https://prosody.im/doc/logging
log = {
	info = "/var/log/prosody/prosody.log"; -- Change 'info' to 'debug' for verbose logging
	error = "/var/log/prosody/prosody.err";
	-- "*syslog"; -- Uncomment this for logging to syslog
	-- "*console"; -- Log to the console, useful for debugging with daemonize=false
}

-- Uncomment to enable statistics
-- For more info see https://prosody.im/doc/statistics
-- statistics = "internal"

-- Certificates
-- Every virtual host and component needs a certificate so that clients and
-- servers can securely verify its identity. Prosody will automatically load
-- certificates/keys from the directory specified here.
-- For more information, including how to use 'prosodyctl' to auto-import certificates
-- (from e.g. Let's Encrypt) see https://prosody.im/doc/certificates

-- Location of directory to find certificates in (relative to main config file):
certificates = "certs"

-- HTTPS currently only supports a single certificate, specify it here:
--https_certificate = "/etc/prosody/certs/localhost.crt"

----------- Virtual hosts -----------
-- You need to add a VirtualHost entry for each domain you wish Prosody to serve.
-- Settings under each VirtualHost entry apply *only* to that host.

VirtualHost "localhost"

--VirtualHost "example.com"
--	certificate = "/path/to/example.crt"

------ Components ------
-- You can specify components to add hosts that provide special services,
-- like multi-user conferences, and transports.
-- For more information on components, see https://prosody.im/doc/components

---Set up a MUC (multi-user chat) room server on conference.example.com:
--Component "conference.example.com" "muc"
--- Store MUC messages in an archive and allow users to access it
--modules_enabled = { "muc_mam" }

---Set up an external component (default component port is 5347)
--
-- External components allow adding various services, such as gateways/
-- transports to other networks like ICQ, MSN and Yahoo. For more info
-- see: https://prosody.im/doc/components#adding_an_external_component
--
--Component "gateway.example.com"
--	component_secret = "password"

Include "conf.d/*.cfg.lua"
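One thing worth flagging about the `limits` block above before the per-site config below: jicofo (as `focus@auth...`) and the videobridge (as `jvb@auth...`) log in over ordinary c2s connections, so signalling for every conference on the deployment flows through connections governed by the same per-connection cap as a single browser. A rough, illustrative estimate of the steady-state load on such a connection (the stanza rate and average stanza size here are assumptions for the sake of the arithmetic, not measurements):

```javascript
// Back-of-envelope load on a single c2s connection that carries signalling
// for all conferences (e.g. jicofo's focus connection). The per-participant
// stanza rate and average stanza size are illustrative assumptions.
function steadyStateBytesPerSecond(participants, stanzasPerParticipantPerMinute, avgStanzaBytes) {
    return (participants * stanzasPerParticipantPerMinute * avgStanzaBytes) / 60;
}

// 600 participants, ~4 stanzas/min each, ~800 bytes per stanza:
const load = steadyStateBytesPerSecond(600, 4, 800); // 32000 B/s, ~31 KB/s
```

Steady state looks comfortable against the 512 KB/s cap, but a burst of dozens of simultaneous rejoins multiplies the stanza rate; if the cap is ever hit, the throttled connection could drop further participants and feed the reconnect storm.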

cat /etc/prosody/conf.avail/lmtgt1.dev2dev.net.cfg.lua
plugin_paths = { "/usr/share/jitsi-meet/prosody-plugins/" }

-- domain mapper options, must at least have domain base set to use the mapper
muc_mapper_domain_base = "lmtgt1.dev2dev.net";

external_service_secret = "io5eXkbITYPCuRHe";
external_services = {
    { type = "stun", host = "lmtgt1.dev2dev.net", port = 3478 },
    { type = "turn", host = "lmtgt1.dev2dev.net", port = 3478, transport = "udp", secret = true, ttl = 86400, algorithm = "turn" },
    { type = "turns", host = "lmtgt1.dev2dev.net", port = 5349, transport = "tcp", secret = true, ttl = 86400, algorithm = "turn" }
};

cross_domain_bosh = false;
cross_domain_websocket = true; -- added by Hawke as per damencho suggestion 20210525L
consider_bosh_secure = true;
consider_websocket_secure = true; -- added by Hawke as per damencho suggestion 20210525L

-- https_ports = { }; -- Remove this line to prevent listening on port 5284

-- Mozilla SSL Configuration Generator
ssl = {
    protocol = "tlsv1_2+";
    ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
}

VirtualHost "lmtgt1.dev2dev.net"
    -- enabled = false -- Remove this line to enable this host
    authentication = "anonymous"
    -- Properties below are modified by jitsi-meet-tokens package config
    -- and authentication above is switched to "token"
    --app_id="example_app_id"
    --app_secret="example_app_secret"
    -- Assign this host a certificate for TLS, otherwise it would use the one
    -- set in the global section (if any).
    -- Note that old-style SSL on port 5223 only supports one certificate, and will always
    -- use the global one.
    ssl = {
        key = "/etc/prosody/certs/lmtgt1.dev2dev.net.key";
        certificate = "/etc/prosody/certs/lmtgt1.dev2dev.net.crt";
    }
    speakerstats_component = "speakerstats.lmtgt1.dev2dev.net"
    conference_duration_component = "conferenceduration.lmtgt1.dev2dev.net"
    -- we need bosh
    modules_enabled = {
        "bosh";
        "websocket"; -- added by Hawke as per damencho suggestion 20210525L
        "smacks"; -- added by Hawke as per damencho suggestion 20210525L; holding off on the smacks settings below until the websocket parts are tested on their own first
        "pubsub";
        "ping"; -- Enable mod_ping
        "speakerstats";
        "external_services";
        "conference_duration";
        "muc_lobby_rooms";
    }
    -- smacks section added by Hawke as per damencho suggestion 20210525L
    smacks_max_unacked_stanzas = 5;
    smacks_hibernation_time = 60;
    smacks_max_hibernated_sessions = 1;
    smacks_max_old_sessions = 1;

    c2s_require_encryption = false
    lobby_muc = "lobby.lmtgt1.dev2dev.net"
    main_muc = "conference.lmtgt1.dev2dev.net"
    -- muc_lobby_whitelist = { "recorder.lmtgt1.dev2dev.net" } -- Here we can whitelist jibri to enter lobby enabled rooms

Component "conference.lmtgt1.dev2dev.net" "muc"
    storage = "memory"
    modules_enabled = {
        "muc_meeting_id";
        "muc_domain_mapper";
        --"token_verification";
    }
    admins = { "focus@auth.lmtgt1.dev2dev.net" }
    muc_room_locking = false
    muc_room_default_public_jids = true

-- internal muc component
Component "internal.auth.lmtgt1.dev2dev.net" "muc"
    storage = "memory"
    modules_enabled = {
        "ping";
    }
    admins = { "focus@auth.lmtgt1.dev2dev.net", "jvb@auth.lmtgt1.dev2dev.net" }
    muc_room_locking = false
    muc_room_default_public_jids = true

VirtualHost "auth.lmtgt1.dev2dev.net"
    ssl = {
        key = "/etc/prosody/certs/auth.lmtgt1.dev2dev.net.key";
        certificate = "/etc/prosody/certs/auth.lmtgt1.dev2dev.net.crt";
    }
    authentication = "internal_hashed"

-- Proxy to jicofo's user JID, so that it doesn't have to register as a component.
Component "focus.lmtgt1.dev2dev.net" "client_proxy"
    target_address = "focus@auth.lmtgt1.dev2dev.net"

Component "speakerstats.lmtgt1.dev2dev.net" "speakerstats_component"
    muc_component = "conference.lmtgt1.dev2dev.net"

Component "conferenceduration.lmtgt1.dev2dev.net" "conference_duration_component"
    muc_component = "conference.lmtgt1.dev2dev.net"

Component "lobby.lmtgt1.dev2dev.net" "muc"
    storage = "memory"
    restrict_room_creation = true
    muc_room_locking = false
    muc_room_default_public_jids = true

cat /etc/jitsi/meet/lmtgt1.dev2dev.net-config.js 
/* eslint-disable no-unused-vars, no-var */

var config = {
    // Connection
    //

    hosts: {
        // XMPP domain.
        domain: 'lmtgt1.dev2dev.net',

        // When using authentication, domain for guest users.
        // anonymousdomain: 'guest.example.com',

        // Domain for authenticated users. Defaults to <domain>.
        // authdomain: 'lmtgt1.dev2dev.net',

        // Focus component domain. Defaults to focus.<domain>.
        // focus: 'focus.lmtgt1.dev2dev.net',

        // XMPP MUC domain. FIXME: use XEP-0030 to discover it.
        muc: 'conference.<!--# echo var="subdomain" default="" -->lmtgt1.dev2dev.net'
    },

    // BOSH URL. FIXME: use XEP-0156 to discover it.
    bosh: '//lmtgt1.dev2dev.net/http-bind',

    // Websocket URL -- uncommented by Hawke as per damencho suggestion 20210525L
    websocket: 'wss://lmtgt1.dev2dev.net/xmpp-websocket',

    // The name of client node advertised in XEP-0115 'c' stanza
    clientNode: 'http://jitsi.org/jitsimeet',

    // The real JID of focus participant - can be overridden here
    // Do not change username - FIXME: Make focus username configurable
    // https://github.com/jitsi/jitsi-meet/issues/7376
    // focusUserJid: 'focus@auth.lmtgt1.dev2dev.net',


    // Testing / experimental features.
    //

    testing: {
        // Disables the End to End Encryption feature. Useful for debugging
        // issues related to insertable streams.
        // disableE2EE: false,

        // P2P test mode disables automatic switching to P2P when there are 2
        // participants in the conference.
        p2pTestMode: false

        // Enables the test specific features consumed by jitsi-meet-torture
        // testMode: false

        // Disables the auto-play behavior of *all* newly created video elements.
        // This is useful when the client runs on a host with limited resources.
        // noAutoPlayVideo: false

        // Enable / disable 500 Kbps bitrate cap on desktop tracks. When enabled,
        // simulcast is turned off for the desktop share. If presenter is turned
        // on while screensharing is in progress, the max bitrate is automatically
        // adjusted to 2.5 Mbps. This takes a value between 0 and 1 which determines
        // the probability for this to be enabled. This setting has been deprecated.
        // desktopSharingFrameRate.max now determines whether simulcast will be enabled
        // or disabled for the screenshare.
        // capScreenshareBitrate: 1 // 0 to disable - deprecated.

        // Enable callstats only for a percentage of users.
        // This takes a value between 0 and 100 which determines the probability for
        // the callstats to be enabled.
        // callStatsThreshold: 5 // enable callstats for 5% of the users.
    },

    // Disables ICE/UDP by filtering out local and remote UDP candidates in
    // signalling.
    // webrtcIceUdpDisable: false,

    // Disables ICE/TCP by filtering out local and remote TCP candidates in
    // signalling.
    // webrtcIceTcpDisable: false,


    // Media
    //

    // Audio

    // Disable measuring of audio levels.
    // disableAudioLevels: false,
    // audioLevelsInterval: 200,

    // Enabling this will run the lib-jitsi-meet no audio detection module which
    // will notify the user if the current selected microphone has no audio
    // input and will suggest another valid device if one is present.
    enableNoAudioDetection: true,

    // Enabling this will show a "Save Logs" link in the GSM popover that can be
    // used to collect debug information (XMPP IQs, SDP offer/answer cycles)
    // about the call.
    // enableSaveLogs: false,

    // Enabling this will run the lib-jitsi-meet noise detection module which will
    // notify the user if there is noise, other than voice, coming from the current
    // selected microphone. The purpose is to let the user know that the input could
    // be potentially unpleasant for other meeting participants.
    enableNoisyMicDetection: true,

    // Start the conference in audio only mode (no video is being received nor
    // sent).
    // startAudioOnly: false,

    // Every participant after the Nth will start audio muted.
    // startAudioMuted: 10,

    // Start calls with audio muted. Unlike the option above, this one is only
    // applied locally. FIXME: having these 2 options is confusing.
    // startWithAudioMuted: false,

    // Enabling it (with #params) will disable local audio output of remote
    // participants; a reload is needed to enable it again.
    // startSilent: false

    // Enables support for opus-red (redundancy for Opus).
    // enableOpusRed: false,

    // Specify audio quality stereo and opusMaxAverageBitrate values in order to enable HD audio.
    // Beware, by doing so, you are disabling echo cancellation, noise suppression and AGC.
    // audioQuality: {
    //     stereo: false,
    //     opusMaxAverageBitrate: null // Value to fit the 6000 to 510000 range.
    // },

    // Video

    // Sets the preferred resolution (height) for local video. Defaults to 720.
    // resolution: 720,

    // How many participants can be in tile view before the receiving video quality is reduced from HD to SD.
    // Use -1 to disable.
    // maxFullResolutionParticipants: 2,

    // w3c spec-compliant video constraints to use for video capture. Currently
    // used by browsers that return true from lib-jitsi-meet's
    // util#browser#usesNewGumFlow. The constraints are independent from
    // this config's resolution value. Defaults to requesting an ideal
    // resolution of 720p.
    // constraints: {
    //     video: {
    //         height: {
    //             ideal: 720,
    //             max: 720,
    //             min: 240
    //         }
    //     }
    // },

    // Enable / disable simulcast support.
    // disableSimulcast: false,

    // Enable / disable layer suspension.  If enabled, endpoints whose HD
    // layers are not in use will be suspended (no longer sent) until they
    // are requested again.
    // enableLayerSuspension: false,

    // Every participant after the Nth will start video muted.
    // startVideoMuted: 10,

    // Start calls with video muted. Unlike the option above, this one is only
    // applied locally. FIXME: having these 2 options is confusing.
    // startWithVideoMuted: false,

    // If set to true, prefer to use the H.264 video codec (if supported).
    // Note that it's not recommended to do this because simulcast is not
    // supported when using H.264. For 1-to-1 calls this setting is enabled by
    // default and can be toggled in the p2p section.
    // This option has been deprecated, use preferredCodec under videoQuality section instead.
    // preferH264: true,

    // If set to true, disable H.264 video codec by stripping it out of the
    // SDP.
    // disableH264: false,

    // Desktop sharing

    // Optional desktop sharing frame rate options. Default value: min:5, max:5.
    // desktopSharingFrameRate: {
    //     min: 5,
    //     max: 5
    // },

    // Try to start calls with screen-sharing instead of camera video.
    // startScreenSharing: false,

    // Recording

    // Whether to enable file recording or not.
    // fileRecordingsEnabled: false,
    // Enable the dropbox integration.
    // dropbox: {
    //     appKey: '<APP_KEY>' // Specify your app key here.
    //     // A URL to redirect the user to, after authenticating
    //     // by default uses:
    //     // 'https://lmtgt1.dev2dev.net/static/oauth.html'
    //     redirectURI:
    //          'https://lmtgt1.dev2dev.net/subfolder/static/oauth.html'
    // },
    // When integrations like dropbox are enabled only that will be shown,
    // by enabling fileRecordingsServiceEnabled, we show both the integrations
    // and the generic recording service (its configuration and storage type
    // depends on jibri configuration)
    // fileRecordingsServiceEnabled: false,
    // Whether to show the possibility to share file recording with other people
    // (e.g. meeting participants), based on the actual implementation
    // on the backend.
    // fileRecordingsServiceSharingEnabled: false,

    // Whether to enable live streaming or not.
    // liveStreamingEnabled: false,

    // Transcription (in interface_config,
    // subtitles and buttons can be configured)
    // transcribingEnabled: false,

    // Enables automatic turning on captions when recording is started
    // autoCaptionOnRecord: false,

    // Misc

    // Default value for the channel "last N" attribute. -1 for unlimited.
    channelLastN: -1,

    // Provides a way for the lastN value to be controlled through the UI.
    // When startLastN is present, conference starts with a last-n value of startLastN and channelLastN
    // value will be used when the quality level is selected using "Manage Video Quality" slider.
    // startLastN: 1,

    // Provides a way to use different "last N" values based on the number of participants in the conference.
    // The keys in an Object represent number of participants and the values are "last N" to be used when number of
    // participants gets to or above the number.
    //
    // For the given example mapping, "last N" will be set to 20 as long as there are at least 5, but fewer than
    // 30 participants in the call, and it will be lowered to 15 when the 30th participant joins. The 'channelLastN'
    // will be used as default until the first threshold is reached.
    //
    // lastNLimits: {
    //     5: 20,
    //     30: 15,
    //     50: 10,
    //     70: 5,
    //     90: 2
    // },

    // Provides a way to translate the legacy bridge signaling messages, 'LastNChangedEvent',
    // 'SelectedEndpointsChangedEvent' and 'ReceiverVideoConstraint' into the new 'ReceiverVideoConstraints' message
    // that invokes the new bandwidth allocation algorithm in the bridge which is described here
    // - https://github.com/jitsi/jitsi-videobridge/blob/master/doc/allocation.md.
    // useNewBandwidthAllocationStrategy: false,

    // Specify the settings for video quality optimizations on the client.
    // videoQuality: {
    //    // Provides a way to prevent a video codec from being negotiated on the JVB connection. The codec specified
    //    // here will be removed from the list of codecs present in the SDP answer generated by the client. If the
    //    // same codec is specified for both the disabled and preferred option, the disable settings will prevail.
    //    // Note that 'VP8' cannot be disabled since it's a mandatory codec, the setting will be ignored in this case.
    //    disabledCodec: 'H264',
    //
    //    // Provides a way to set a preferred video codec for the JVB connection. If 'H264' is specified here,
    //    // simulcast will be automatically disabled since JVB doesn't support H264 simulcast yet. This will only
    //    // rearrange the preference order of the codecs in the SDP answer generated by the browser only if the
    //    // preferred codec specified here is present. Please ensure that the JVB offers the specified codec for this
    //    // to take effect.
    //    preferredCodec: 'VP8',
    //
    //    // Provides a way to enforce the preferred codec for the conference even when the conference has endpoints
    //    // that do not support the preferred codec. For example, older versions of Safari do not support VP9 yet.
    //    // This will result in Safari not being able to decode video from endpoints sending VP9 video.
    //    // When set to false, the conference falls back to VP8 whenever there is an endpoint that doesn't support the
    //    // preferred codec and goes back to the preferred codec when that endpoint leaves.
    //    // enforcePreferredCodec: false,
    //
    //    // Provides a way to configure the maximum bitrates that will be enforced on the simulcast streams for
    //    // video tracks. The keys in the object represent the type of the stream (LD, SD or HD) and the values
    //    // are the max.bitrates to be set on that particular type of stream. The actual send may vary based on
    //    // the available bandwidth calculated by the browser, but it will be capped by the values specified here.
    //    // This is currently not implemented on app based clients on mobile.
    //    maxBitratesVideo: {
    //          H264: {
    //              low: 200000,
    //              standard: 500000,
    //              high: 1500000
    //          },
    //          VP8 : {
    //              low: 200000,
    //              standard: 500000,
    //              high: 1500000
    //          },
    //          VP9: {
    //              low: 100000,
    //              standard: 300000,
    //              high:  1200000
    //          }
    //    },
    //
    //    // The options can be used to override default thresholds of video thumbnail heights corresponding to
    //    // the video quality levels used in the application. At the time of this writing the allowed levels are:
    //    //     'low' - for the low quality level (180p at the time of this writing)
    //    //     'standard' - for the medium quality level (360p)
    //    //     'high' - for the high quality level (720p)
    //    // The keys should be positive numbers which represent the minimal thumbnail height for the quality level.
    //    //
    //    // With the default config value below the application will use 'low' quality until the thumbnails are
    //    // at least 360 pixels tall. If the thumbnail height reaches 720 pixels then the application will switch to
    //    // the high quality.
    //    minHeightForQualityLvl: {
    //        360: 'standard',
    //        720: 'high'
    //    },
    //
    //    // Provides a way to resize the desktop track to 720p (if it is greater than 720p) before creating a canvas
    //    // for the presenter mode (camera picture-in-picture mode with screenshare).
    //    resizeDesktopForPresenter: false
    // },

    // // Options for the recording limit notification.
    // recordingLimit: {
    //
    //    // The recording limit in minutes. Note: This number appears in the notification text
    //    // but doesn't enforce the actual recording time limit. This should be configured in
    //    // jibri!
    //    limit: 60,
    //
    //    // The name of the app with unlimited recordings.
    //    appName: 'Unlimited recordings APP',
    //
    //    // The URL of the app with unlimited recordings.
    //    appURL: 'https://unlimited.recordings.app.com/'
    // },

    // Disables or enables RTX (RFC 4588) (defaults to false).
    // disableRtx: false,

    // Disables or enables TCC support in this client (default: enabled).
    // enableTcc: true,

    // Disables or enables REMB support in this client (default: enabled).
    // enableRemb: true,

    // Enables ICE restart logic in LJM and displays the page reload overlay on
    // ICE failure. Currently disabled by default because it's causing issues with
    // signaling when Octo is enabled. Also when we do an "ICE restart"(which is
    // not a real ICE restart), the client maintains the TCC sequence number
    // counter, but the bridge resets it. The bridge sends media packets with
    // TCC sequence numbers starting from 0.
    // enableIceRestart: false,

    // Enables forced reload of the client when the call is migrated as a result of
    // the bridge going down.
    // enableForcedReload: true,

    // Use TURN/UDP servers for the jitsi-videobridge connection (by default
    // we filter out TURN/UDP because it is usually not needed since the
    // bridge itself is reachable via UDP)
    // useTurnUdp: false

    // UI
    //

    // Disables responsive tiles.
    // disableResponsiveTiles: false,

    // Hides lobby button
    // hideLobbyButton: false,

    // Require users to always specify a display name.
    // requireDisplayName: true,

    // Whether to use a welcome page or not. In case it's false a random room
    // will be joined when no room is specified.
    enableWelcomePage: true,

    // Disable app shortcuts that are registered upon joining a conference
    // disableShortcuts: false,

    // Disable initial browser getUserMedia requests.
    // This is useful for scenarios where users might want to start a conference for screensharing only
    // disableInitialGUM: false,

    // Enabling the close page will ignore the welcome page redirection when
    // a call is hung up.
    // enableClosePage: false,

    // Disable hiding of remote thumbnails when in a 1-on-1 conference call.
    // disable1On1Mode: false,

    // Default language for the user interface.
    // defaultLanguage: 'en',

    // Disables profile and the edit of all fields from the profile settings (display name and email)
    // disableProfile: false,

    // Whether or not some features are checked based on token.
    // enableFeaturesBasedOnToken: false,

    // When enabled the password used for locking a room is restricted to up to the number of digits specified
    // roomPasswordNumberOfDigits: 10,
    // default: roomPasswordNumberOfDigits: false,

    // Message to show the users. Example: 'The service will be down for
    // maintenance at 01:00 AM GMT',
    // noticeMessage: '',

    // Enables calendar integration, depends on googleApiApplicationClientID
    // and microsoftApiApplicationClientID
    // enableCalendarIntegration: false,

    // When 'true', it shows an intermediate page before joining, where the user can configure their devices.
    // prejoinPageEnabled: false,

    // If etherpad integration is enabled, setting this to true will
    // automatically open the etherpad when a participant joins.  This
    // does not affect the mobile app since opening an etherpad
    // obscures the conference controls -- it's better to let users
    // choose to open the pad on their own in that case.
    // openSharedDocumentOnJoin: false,

    // If true, shows the unsafe room name warning label when a room name is
    // deemed unsafe (due to the simplicity in the name) and a password is not
    // set or the lobby is not enabled.
    // enableInsecureRoomNameWarning: false,

    // Whether to automatically copy invitation URL after creating a room.
    // Document should be focused for this option to work
    // enableAutomaticUrlCopy: false,

    // Base URL for a Gravatar-compatible service. Defaults to libravatar.
    // gravatarBaseURL: 'https://seccdn.libravatar.org/avatar/',

    // Moved from interfaceConfig(TOOLBAR_BUTTONS).
    // The name of the toolbar buttons to display in the toolbar, including the
    // "More actions" menu. If present, the button will display. Exceptions are
    // "livestreaming" and "recording" which also require being a moderator and
    // some other values in config.js to be enabled. Also, the "profile" button will
    // not display for users with a JWT.
    // Notes:
    // - it's impossible to choose which buttons go in the "More actions" menu
    // - it's impossible to control the placement of buttons
    // - 'desktop' controls the "Share your screen" button
    // - if `toolbarButtons` is undefined, we fall back to enabling all buttons on the UI
    // toolbarButtons: [
    //    'microphone', 'camera', 'closedcaptions', 'desktop', 'embedmeeting', 'fullscreen',
    //    'fodeviceselection', 'hangup', 'profile', 'chat', 'recording',
    //    'livestreaming', 'etherpad', 'sharedvideo', 'shareaudio', 'settings', 'raisehand',
    //    'videoquality', 'filmstrip', 'invite', 'feedback', 'stats', 'shortcuts',
    //    'tileview', 'select-background', 'download', 'help', 'mute-everyone', 'mute-video-everyone', 'security'
    // ],

    // Stats
    //

    // Whether to enable stats collection or not in the TraceablePeerConnection.
    // This can be useful for debugging purposes (post-processing/analysis of
    // the webrtc stats) as it is done in the jitsi-meet-torture bandwidth
    // estimation tests.
    // gatherStats: false,

    // The interval at which PeerConnection.getStats() is called. Defaults to 10000
    // pcStatsInterval: 10000,

    // To enable sending statistics to callstats.io you must provide the
    // Application ID and Secret.
    // callStatsID: '',
    // callStatsSecret: '',

    // Enables sending participants' display names to callstats
    // enableDisplayNameInStats: false,

    // Enables sending participants' emails (if available) to callstats and other analytics
    // enableEmailInStats: false,

    // Controls the percentage of automatic feedback shown to participants when callstats is enabled.
    // The default value is 100%. If set to 0, no automatic feedback will be requested
    // feedbackPercentage: 100,

    // Privacy
    //

    // If third party requests are disabled, no other server will be contacted.
    // This means avatars will be locally generated and callstats integration
    // will not function.
    // disableThirdPartyRequests: false,


    // Peer-To-Peer mode: used (if enabled) when there are just 2 participants.
    //

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Sets the ICE transport policy for the p2p connection. At the time
        // of this writing the list of possible values are 'all' and 'relay',
        // but that is subject to change in the future. The enum is defined in
        // the WebRTC standard:
        // https://www.w3.org/TR/webrtc/#rtcicetransportpolicy-enum.
        // If not set, the effective value is 'all'.
        // iceTransportPolicy: 'all',

        // If set to true, it will prefer to use H.264 for P2P calls (if H.264
        // is supported). This setting is deprecated, use preferredCodec instead.
        // preferH264: true,

        // Provides a way to set the video codec preference on the p2p connection. Acceptable
        // codec values are 'VP8', 'VP9' and 'H264'.
        // preferredCodec: 'H264',

        // If set to true, disable H.264 video codec by stripping it out of the
        // SDP. This setting is deprecated, use disabledCodec instead.
        // disableH264: false,

        // Provides a way to prevent a video codec from being negotiated on the p2p connection.
        // disabledCodec: '',

        // How long we're going to wait, before going back to P2P after the 3rd
        // participant has left the conference (to filter out page reload).
        // backToP2PDelay: 5,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [

            // { urls: 'stun:lmtgt1.dev2dev.net:3478' },
            { urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' }
        ]
    },

    analytics: {
        // The Google Analytics Tracking ID:
        // googleAnalyticsTrackingId: 'your-tracking-id-UA-123456-1'

        // Matomo configuration:
        // matomoEndpoint: 'https://your-matomo-endpoint/',
        // matomoSiteID: '42',

        // The Amplitude APP Key:
        // amplitudeAPPKey: '<APP_KEY>'

        // Configuration for the rtcstats server:
        // By enabling rtcstats server every time a conference is joined the rtcstats
        // module connects to the provided rtcstatsEndpoint and sends statistics regarding
        // PeerConnection states along with getStats metrics polled at the specified
        // interval.
        // rtcstatsEnabled: true,

        // In order to enable rtcstats one needs to provide a endpoint url.
        // rtcstatsEndpoint: 'wss://rtcstats-server-pilot.jitsi.net/',

        // The interval at which rtcstats will poll getStats, defaults to 1000ms.
        // If the value is set to 0 getStats won't be polled and the rtcstats client
        // will only send data related to RTCPeerConnection events.
        // rtcstatsPollInterval: 1000,

        // Array of script URLs to load as lib-jitsi-meet "analytics handlers".
        // scriptURLs: [
        //      "libs/analytics-ga.min.js", // google-analytics
        //      "https://example.com/my-custom-analytics.js"
        // ],
    },

    // Logs that should be passed through the 'log' event if a handler is defined for it
    // apiLogLevels: ['warn', 'log', 'error', 'info', 'debug'],

    // Information about the jitsi-meet instance we are connecting to, including
    // the user region as seen by the server.
    deploymentInfo: {
        // shard: "shard1",
        // region: "europe",
        // userRegion: "asia"
    },

    // Decides whether the start/stop recording audio notifications should play on record.
    // disableRecordAudioNotification: false,

    // Disables the sounds that play when other participants join or leave the
    // conference (if set to true, these sounds will not be played).
    // disableJoinLeaveSounds: false,

    // Information for the chrome extension banner
    // chromeExtensionBanner: {
    //     // The chrome extension to be installed address
    //     url: 'https://chrome.google.com/webstore/detail/jitsi-meetings/kglhbbefdnlheedjiejgomgmfplipfeb',

    //     // Extensions info which allows checking if they are installed or not
    //     chromeExtensionsInfo: [
    //         {
    //             id: 'kglhbbefdnlheedjiejgomgmfplipfeb',
    //             path: 'jitsi-logo-48x48.png'
    //         }
    //     ]
    // },

    // Local Recording
    //

    // localRecording: {
    // Enables local recording.
    // Additionally, 'localrecording' (all lowercase) needs to be added to
    // TOOLBAR_BUTTONS in interface_config.js for the Local Recording
    // button to show up on the toolbar.
    //
    //     enabled: true,
    //

    // The recording format, can be one of 'ogg', 'flac' or 'wav'.
    //     format: 'flac'
    //

    // },

    // Options related to end-to-end (participant to participant) ping.
    // e2eping: {
    //   // The interval in milliseconds at which pings will be sent.
    //   // Defaults to 10000, set to <= 0 to disable.
    //   pingInterval: 10000,
    //
    //   // The interval in milliseconds at which analytics events
    //   // with the measured RTT will be sent. Defaults to 60000, set
    //   // to <= 0 to disable.
    //   analyticsInterval: 60000,
    //   },

    // If set, will attempt to use the provided video input device label when
    // triggering a screenshare, instead of proceeding through the normal flow
    // for obtaining a desktop stream.
    // NOTE: This option is experimental and is currently intended for internal
    // use only.
    // _desktopSharingSourceDevice: 'sample-id-or-label',

    // If true, any checks to handoff to another application will be prevented
    // and instead the app will continue to display in the current browser.
    // disableDeepLinking: false,

    // A property to disable the right click context menu for localVideo
    // the menu has an option to flip the locally seen video for local presentations
    // disableLocalVideoFlip: false,

    // A property used to unset the default flip state of the local video.
    // When it is set to 'true', the local (self) video will not be mirrored anymore.
    // doNotFlipLocalVideo: false,

    // Mainly privacy related settings

    // Disables all invite functions from the app (share, invite, dial out...etc)
    // disableInviteFunctions: true,

    // Disables storing the room name to the recents list
    // doNotStoreRoom: true,

    // Deployment specific URLs.
    // deploymentUrls: {
    //    // If specified a 'Help' button will be displayed in the overflow menu with a link to the specified URL for
    //    // user documentation.
    //    userDocumentationURL: 'https://docs.example.com/video-meetings.html',
    //    // If specified a 'Download our apps' button will be displayed in the overflow menu with a link
    //    // to the specified URL for an app download page.
    //    downloadAppsUrl: 'https://docs.example.com/our-apps.html'
    // },

    // Options related to the remote participant menu.
    // remoteVideoMenu: {
    //     // If set to true the 'Kick out' button will be disabled.
    //     disableKick: true,
    //     // If set to true the 'Grant moderator' button will be disabled.
    //     disableGrantModerator: true
    // },

    // If set to true all muting operations of remote participants will be disabled.
    // disableRemoteMute: true,

    // Enables support for lip-sync for this client (if the browser supports it).
    // enableLipSync: false

    /**
     External API url used to receive branding specific information.
     If there is no url set or there are missing fields, the defaults are applied.
     None of the fields are mandatory and the response must have the shape:
     {
         // The hex value for the colour used as background
         backgroundColor: '#fff',
         // The url for the image used as background
         backgroundImageUrl: 'https://example.com/background-img.png',
         // The anchor url used when clicking the logo image
         logoClickUrl: 'https://example-company.org',
         // The url used for the image used as logo
         logoImageUrl: 'https://example.com/logo-img.png'
     }
    */
    // dynamicBrandingUrl: '',

    // Sets the background transparency level. '0' is fully transparent, '1' is opaque.
    // backgroundAlpha: 1,

    // The URL of the moderated rooms microservice, if available. If it
    // is present, a link to the service will be rendered on the welcome page,
    // otherwise the app doesn't render it.
    // moderatedRoomServiceUrl: 'https://moderated.lmtgt1.dev2dev.net',

    // If true, tile view will not be enabled automatically when the participants count threshold is reached.
    // disableTileView: true,

    // Hides the conference subject
    // hideConferenceSubject: true,

    // Hides the conference timer.
    // hideConferenceTimer: true,

    // Hides the participants stats
    // hideParticipantsStats: true,

    // Sets the conference subject
    // subject: 'Conference Subject',

    // This property is related to the use case when jitsi-meet is used via the IFrame API. When the property is true
    // jitsi-meet will use the local storage of the host page instead of its own. This option is useful if the browser
    // is not persisting the local storage inside the iframe.
    // useHostPageLocalStorage: true,

    // List of undocumented settings used in jitsi-meet
    /**
     _immediateReloadThreshold
     debug
     debugAudioLevels
     deploymentInfo
     dialInConfCodeUrl
     dialInNumbersUrl
     dialOutAuthUrl
     dialOutCodesUrl
     disableRemoteControl
     displayJids
     etherpad_base
     externalConnectUrl
     firefox_fake_device
     googleApiApplicationClientID
     iAmRecorder
     iAmSipGateway
     microsoftApiApplicationClientID
     peopleSearchQueryTypes
     peopleSearchUrl
     requireDisplayName
     tokenAuthUrl
     */

    /**
     * This property can be used to alter the generated meeting invite links (in combination with a branding domain
     * which is retrieved internally by jitsi meet) (e.g. https://meet.jit.si/someMeeting
     * can become https://brandedDomain/roomAlias)
     */
    // brandingRoomAlias: null,

    // List of undocumented settings used in lib-jitsi-meet
    /**
     _peerConnStatusOutOfLastNTimeout
     _peerConnStatusRtcMuteTimeout
     abTesting
     avgRtpStatsN
     callStatsConfIDNamespace
     callStatsCustomScriptUrl
     desktopSharingSources
     disableAEC
     disableAGC
     disableAP
     disableHPF
     disableNS
     enableTalkWhileMuted
     forceJVB121Ratio
     forceTurnRelay
     hiddenDomain
     ignoreStartMuted
     websocketKeepAlive
     websocketKeepAliveUrl
     */

    /**
        Use this array to configure which notifications will be shown to the user
        The items correspond to the title or description key of that notification
        Some of these notifications also depend on some other internal logic to be displayed or not,
        so adding them here will not ensure they will always be displayed

        A falsy value for this prop (e.g. null, undefined, false) will result in all notifications being enabled
    */
    // notifications: [
    //     'connection.CONNFAIL', // shown when the connection fails,
    //     'dialog.cameraNotSendingData', // shown when there's no feed from user's camera
    //     'dialog.kickTitle', // shown when user has been kicked
    //     'dialog.liveStreaming', // livestreaming notifications (pending, on, off, limits)
    //     'dialog.lockTitle', // shown when setting conference password fails
    //     'dialog.maxUsersLimitReached', // shown when maximum users limit has been reached
    //     'dialog.micNotSendingData', // shown when user's mic is not sending any audio
    //     'dialog.passwordNotSupportedTitle', // shown when setting conference password fails due to password format
    //     'dialog.recording', // recording notifications (pending, on, off, limits)
    //     'dialog.remoteControlTitle', // remote control notifications (allowed, denied, start, stop, error)
    //     'dialog.reservationError',
    //     'dialog.serviceUnavailable', // shown when server is not reachable
    //     'dialog.sessTerminated', // shown when there is a failed conference session
    //     'dialog.sessionRestarted', // show when a client reload is initiated because of bridge migration
    //     'dialog.tokenAuthFailed', // show when an invalid jwt is used
    //     'dialog.transcribing', // transcribing notifications (pending, off)
    //     'dialOut.statusMessage', // shown when dial out status is updated.
    //     'liveStreaming.busy', // shown when livestreaming service is busy
    //     'liveStreaming.failedToStart', // shown when livestreaming fails to start
    //     'liveStreaming.unavailableTitle', // shown when livestreaming service is not reachable
    //     'lobby.joinRejectedMessage', // shown when, while in a lobby, the user's request to join is rejected
    //     'lobby.notificationTitle', // shown when lobby is toggled and when join requests are allowed / denied
    //     'localRecording.localRecording', // shown when a local recording is started
    //     'notify.disconnected', // shown when a participant has left
    //     'notify.grantedTo', // shown when moderator rights were granted to a participant
    //     'notify.invitedOneMember', // shown when 1 participant has been invited
    //     'notify.invitedThreePlusMembers', // shown when 3+ participants have been invited
    //     'notify.invitedTwoMembers', // shown when 2 participants have been invited
    //     'notify.kickParticipant', // shown when a participant is kicked
    //     'notify.mutedRemotelyTitle', // shown when user is muted by a remote party
    //     'notify.mutedTitle', // shown when user has been muted upon joining,
    //     'notify.newDeviceAudioTitle', // prompts the user to use a newly detected audio device
    //     'notify.newDeviceCameraTitle', // prompts the user to use a newly detected camera
    //     'notify.passwordRemovedRemotely', // shown when a password has been removed remotely
    //     'notify.passwordSetRemotely', // shown when a password has been set remotely
    //     'notify.raisedHand', // shown when a participant used raise hand,
    //     'notify.startSilentTitle', // shown when user joined with no audio
    //     'prejoin.errorDialOut',
    //     'prejoin.errorDialOutDisconnected',
    //     'prejoin.errorDialOutFailed',
    //     'prejoin.errorDialOutStatus',
    //     'prejoin.errorStatusCode',
    //     'prejoin.errorValidation',
    //     'recording.busy', // shown when recording service is busy
    //     'recording.failedToStart', // shown when recording fails to start
    //     'recording.unavailableTitle', // shown when recording service is not reachable
    //     'toolbar.noAudioSignalTitle', // shown when a broken mic is detected
    //     'toolbar.noisyAudioInputTitle', // shown when noise is detected for the current microphone
    //     'toolbar.talkWhileMutedPopup', // shown when user tries to speak while muted
    //     'transcribing.failedToStart' // shown when transcribing fails to start
    // ]

    // Allow all above example options to include a trailing comma and
    // prevent fear when commenting out the last value.
    makeJsonParserHappy: 'even if last key had a trailing comma'

    // no configuration value should follow this line.
};

/* eslint-enable no-unused-vars, no-var */

Appreciate any additional suggestions for what I can try differently to get this working. It seems like I'm getting closer, but not quite there yet.
Thanks!


Check where the connections get dropped; check the nginx logs.
Monitor prosody CPU usage.
Basically, try locating the problem … it is hard to guess … it could be file handle limits, task limits on some process, or nginx worker limits …
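For anyone landing here later, here is a quick sketch for sweeping those usual suspects on the box. It assumes a Linux host with a Debian-style layout (the nginx config path and the `prosody` process name are assumptions; adjust for your install):

```shell
#!/bin/sh
# Sweep the usual single-box bottlenecks mentioned above.
ulimit -n                                   # fd limit of the current shell
cat /proc/sys/fs/file-max                   # system-wide fd ceiling
# nginx worker settings (Debian default path; silent if not present):
grep -m1 worker_connections /etc/nginx/nginx.conf 2>/dev/null
grep -m1 worker_processes  /etc/nginx/nginx.conf 2>/dev/null
# Prosody is effectively single-threaded, so watch whether one core pegs:
top -b -n1 | grep prosody || echo "prosody not running on this host"
```

None of this is conclusive on its own, but it narrows down which limit you are hitting before you start raising numbers blindly.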


Ah! I had been so focused on watching the Jitsi and Prosody logs that I had stopped looking at the nginx ones:

==> /var/log/nginx/error.log <==
2021/05/26 00:49:19 [alert] 757#757: *14448 768 worker_connections are not enough while connecting to upstream, client: 3.141.39.164, server: lmtgt1.dev2dev.net, request: "GET /xmpp-websocket?room=loadtest49 HTTP/1.1", upstream: "http://127.0.0.1:5280/xmpp-websocket?prefix=&room=loadtest49", host: "lmtgt1.dev2dev.net", referrer: "https://lmtgt1.dev2dev.net/loadtest49"

==> /var/log/nginx/access.log <==
3.141.39.164 - - [26/May/2021:00:49:19 +0000] "GET /xmpp-websocket?room=loadtest49 HTTP/1.1" 500 600 "https://lmtgt1.dev2dev.net/loadtest49" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"

==> /var/log/nginx/error.log <==
2021/05/26 00:49:19 [alert] 757#757: *14449 768 worker_connections are not enough while connecting to upstream, client: 3.133.120.158, server: lmtgt1.dev2dev.net, request: "GET /xmpp-websocket?room=loadtest17 HTTP/1.1", upstream: "http://127.0.0.1:5280/xmpp-websocket?prefix=&room=loadtest17", host: "lmtgt1.dev2dev.net"

==> /var/log/nginx/access.log <==
3.133.120.158 - - [26/May/2021:00:49:19 +0000] "GET /xmpp-websocket?room=loadtest17 HTTP/1.1" 500 600 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"
18.223.241.112 - - [26/May/2021:00:49:19 +0000] "GET /loadtest41 HTTP/1.1" 200 20996 "https://lmtgt1.dev2dev.net/loadtest41" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"

Edited /etc/nginx/nginx.conf to raise worker_connections from the default 768 to 2000…
events {
    worker_connections 2000; # increased by Hawke for larger capacity scaling
    # multi_accept on;
}
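To put rough numbers on why 768 was too small, here is a back-of-envelope sketch. My assumptions: each participant holds two proxied websockets (/xmpp-websocket to prosody and /colibri-ws to the bridge), and each proxied connection occupies two of a worker's connection slots (the browser-facing side plus the upstream side):

```shell
# Back-of-envelope for worker_connections under this load
participants=950
websockets_per_user=2   # /xmpp-websocket + /colibri-ws (assumption)
slots_per_conn=2        # client side + upstream side of the proxy
echo $(( participants * websockets_per_user * slots_per_conn ))   # 3800
```

worker_connections is a per-worker limit, so with worker_processes set to auto on a many-core instance the load spreads across workers; but a single busy worker can still hit the ceiling, so 2000 per worker is a reasonable margin here.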

restarted nginx, restarted load test…

That immediately made it so that 10 attendees stayed in the rooms (should be 12 in loadtest0 and 10 in the rest).

Saw the following errors now in nginx:

2021/05/26 00:54:36 [alert] 5460#5460: *10586 socket() failed (24: Too many open files) while connecting to upstream, client: 13.58.225.151, server: lmtgt1.dev2dev.net, request: "GET /colibri-ws/default-id/88005dd23b712f1b/c52b7bdc?pwd=ug4v1o6dc548rccjeu55t4efn HTTP/1.1", upstream: "http://127.0.0.1:9090/colibri-ws/default-id/88005dd23b712f1b/c52b7bdc?pwd=ug4v1o6dc548rccjeu55t4efn", host: "lmtgt1.dev2dev.net"
2021/05/26 00:54:36 [alert] 5460#5460: *10587 socket() failed (24: Too many open files) while connecting to upstream, client: 18.222.20.104, server: lmtgt1.dev2dev.net, request: "GET /colibri-ws/default-id/2c374845c031e5f2/86976c5e?pwd=4gdmuef8v6pqbfh71lnlqsam0p HTTP/1.1", upstream: "http://127.0.0.1:9090/colibri-ws/default-id/2c374845c031e5f2/86976c5e?pwd=4gdmuef8v6pqbfh71lnlqsam0p", host: "lmtgt1.dev2dev.net"
2021/05/26 00:54:36 [alert] 5460#5460: *10588 socket() failed (24: Too many open files) while connecting to upstream, client: 18.188.6.169, server: lmtgt1.dev2dev.net, request: "GET /colibri-ws/default-id/9849ca00fb60edb7/8617953d?pwd=50i3c8esh847ttrkndqh471mk0 HTTP/1.1", upstream: "http://127.0.0.1:9090/colibri-ws/default-id/9849ca00fb60edb7/8617953d?pwd=50i3c8esh847ttrkndqh471mk0", host: "lmtgt1.dev2dev.net"
2021/05/26 00:54:36 [crit] 5462#5462: accept4() failed (24: Too many open files)

Edited /etc/security/limits.conf
to add the following:
nginx soft nofile 30000
nginx hard nofile 50000
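One caveat worth checking here: on systemd-based distros, /etc/security/limits.conf only applies to PAM login sessions, not to systemd-managed services, so it is worth verifying what limit the running nginx workers actually got rather than assuming the edit took effect. A small sketch (the unit name and override values mirror the ones used in this thread):

```shell
# Print the soft and hard "Max open files" of each running nginx process.
# limits.conf covers PAM sessions, not systemd services, so verify:
for pid in $(pgrep nginx); do
    awk '/Max open files/ {print $4, $5}' "/proc/$pid/limits"
done
# If the values did not take, a systemd override is the usual fix:
#   sudo systemctl edit nginx
#   [Service]
#   LimitNOFILE=50000
```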

Edited /etc/nginx/nginx.conf again, adding a worker_rlimit_nofile alongside the worker settings:

events {
    worker_connections 2000;
    # multi_accept on;
    worker_rlimit_nofile 300000
}

rebooted

Now seeing this in the nginx logs:

HTTP/1.1", upstream: "http://127.0.0.1:5280/xmpp-websocket?prefix=&room=loadtest81", host: "lmtgt1.dev2dev.net"
2021/05/26 00:59:08 [error] 5463#5463: *9913 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 3.133.129.17, server: lmtgt1.dev2dev.net, request: "GET /xmpp-websocket?room=loadtest0 HTTP/1.1", upstream: "http://127.0.0.1:5280/xmpp-websocket?prefix=&room=loadtest0", host: "lmtgt1.dev2dev.net"
2021/05/26 01:04:47 [error] 5459#5459: *34 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 96.79.202.21, server: lmtgt1.dev2dev.net, request: "GET /colibri-ws/default-id/d74ad537a372a336/0c18074b?pwd=6nrfkf9ohkbfj6mbkvhr7k2slo HTTP/1.1", upstream: "http://127.0.0.1:9090/colibri-ws/default-id/d74ad537a372a336/0c18074b?pwd=6nrfkf9ohkbfj6mbkvhr7k2slo", host: "lmtgt1.dev2dev.net"
2021/05/26 01:04:47 [error] 5459#5459: *30 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 96.79.202.21, server: lmtgt1.dev2dev.net, request: "GET /colibri-ws/default-id/d74ad537a372a336/8831aa6c?pwd=6qfum0gc1uu9ubkl35fk1c3b2h HTTP/1.1", upstream: "http://127.0.0.1:9090/colibri-ws/default-id/d74ad537a372a336/8831aa6c?pwd=6qfum0gc1uu9ubkl35fk1c3b2h", host: "lmtgt1.dev2dev.net"
2021/05/26 01:05:18 [emerg] 591#591: unexpected "}" in /etc/nginx/nginx.conf:10

and jitsi is running but I can’t connect to it via web…

Ah, some things were in the wrong place and a semicolon was missing. Cleaned up, nginx.conf now looks like this:

events {
    worker_connections 2000;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;

Okay, now nginx is working again, and no errors yet (before the next load test).
No errors in the Jitsi log with the 2 laptop users connected.

Started a load test of 950 (+2) users on this m5a.4xl (32 cpu, 64 GB ram) single instance running all core Jitsi services, no add-ons…
Running the load with 10 participants per room, I can see that 1 participant per room is sending video clearly and smoothly, and loadtest0 has 12 people because the two laptops are both sending audio and video successfully in that room… so far no users dropping.
The jitsi and nginx log files are still calm…
The video remains clear, smooth, steady…
…and the 5 minute load test ends without a glitch!
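As a footnote for anyone repeating these runs: a quick way to tally how a test went from nginx's point of view is to count the serious log lines by level (the path is the Debian default; the command prints nothing if no such lines exist):

```shell
# Tally nginx error-log lines by severity. During the failing runs above,
# this immediately surfaces the worker_connections / open-files alerts.
grep -hoE '\[(alert|crit|emerg|error)\]' /var/log/nginx/error.log* 2>/dev/null \
    | sort | uniq -c | sort -rn
```

Running it before and after a 5-minute test gives a crude but useful per-run error count to go alongside the participant-drop numbers.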

YES! SUCCESS AT LAST! (at least for this hurdle, onward to the next :stuck_out_tongue: ).

Thank you so very much for your help. Greatly appreciated!

I hope my overshare of step-by-step details here helps out anyone who runs into anything similar in the future.

Now I just need AWS to raise that limit from 1k to 5k spot instances, and then I can try the 5k load test.

Meanwhile, I now need to increase the pressure with the 1k users: add more simultaneous video senders, add audio, add CC, add recording, etc. I can do a lot at this level for now. Onward and forward.
Thank you very much again @damencho. Marking this as solved shortly!


@rpgresearch This is an excellent breakdown. We are very interested in knowing how things go when you are further along.


@rpgresearch
Congratulations, and thanks for sharing your experience.

What do you mean by load test?

950 real users

or

a special tool or script that simulates concurrent users?

And you seem to be missing this line :slight_smile:

worker_rlimit_nofile 300000;


@iDLE A set of scripts combined with Malleus Jitsificus, plus 1-2 real users to observe (later test iterations will add other volunteers to fill out qualitative data).

Yes, fortunately I discovered that while fiddling around, but thank you for following up.
I was able to break through the limit up to the 1,000 spot instance roof set by AWS, and Jitsi purred along nicely on that single system (though it was getting up there in CPU usage). I'm definitely seeing the diminishing returns of vertical scaling, and the price-wise sweet spot at the higher end appears to be around c5.4xl (after that the price per hour doubles). This is all very useful information for planning and budgeting across different departments and use-cases.
I have had a request in for a few weeks now to get that AWS spot instance limit raised; still waiting.
Meanwhile, thanks to a co-worker's suggestion to give it another shot, I took a chance and started trying to ramp up the number of nodes per instance again (attempts last year were too unreliable) by boosting the cpu and mem settings. It is still unstable/unreliable at 4 nodes (Selenium 3) per instance, but 3 nodes per instance so far appears stable at 500 simulated users. Ramping up as far as that will go.
Later this week, after these all-in-one server baselines are as far as I can take them, I will start work on learning/implementing/testing a single scaling/“cluster” approach (later I will be trying out and learning the Octo and Kubernetes options and load testing those). Odds are you'll be seeing a lot more of me asking questions in the group soon. :slight_smile: Thanks again for the help and the friendly community, greatly appreciated!


Still haven't gotten AWS to raise the 1,000 spot instance limit (they keep checking in with me every few days to say they are looking into it). Meanwhile, I'm trying to ramp up the number of nodes per instance. I raised the per-instance settings from cpu=512 mem=1024 to cpu=1024 mem=4096, and the selenium_nodes up to 4. Unfortunately, at 4 it becomes unreliable for even a 200-user baseline test. 3 was a little unstable, but I raised the hub to cpu=2048 mem=4096 and then doubled it, and so far 3 nodes per instance is stable at 400. (I'm trying to get to the target goal of 5,000 simulated users if possible, or at least as close as I can get this AWS environment to run reliably.)

I have a mandate for a system setup that must support 5k users reliably this summer, including closed captions, recording, etc. That is relatively easily doable. But by this fall it needs to support 20k users with similar add-on loads, so I have a lot of ramping up to do. I am happy to share as much info as I'm allowed, and will keep folks updated, especially since, as I scale up and run into the next bottleneck, I'm sure I'll be checking in. :slight_smile: Happy Jitsi-ing!

Hi @rpgresearch Thank you for this extensive and excellent description, and for the help from the Jitsi team.
Can I ask: did you also test the maximum number of users per room? For example, 100-200 users with video/audio off.