Maximum number of participants in a meeting on the meet.jit.si server

This is not about prosody; it is about the fact that you see turned-off videos. If you have 30 participants in the call, 15 of them will have their video turned off even if they are sending video.

I don’t think the value matters, as the limits take higher priority, I think.

OK, thank you for the clarification. Now I’ve set it like this:

// channelLastN: 30, // The default value of the channel attribute last-n.
lastNLimits: {
    // 5: 20,
    // 30: 15,
    50: 15,
    70: 10,
    90: 5
},
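
In other words (assuming the usual lastNLimits semantics in config.js, where each key is a participant-count threshold and the value is the last-n applied once the room reaches that size), the block above reads roughly as:

lastNLimits: {
    50: 15, // from 50 participants upwards, each client receives at most 15 video streams
    70: 10, // from 70 participants upwards, at most 10 streams
    90: 5   // from 90 participants upwards, at most 5 streams
},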

This can be a protection for rooms with a large audience, but in our use case it is really rare to have more than 20 videos in one room at the same time. Can it solve the issue where participants can’t hear the presenter in large rooms, but can hear the other participants?

Thank you!

Nope. That is strange …

Can you save the logs from such a session and send them over?

Yes indeed. I get a few reports of this every day. :frowning: I opened a bug about it, but it never moved on. Maybe it is some bug in Chrome, because switching to Firefox solved this for about 95% of the users who reported it to me.

I don’t know what I should look for to nail it down. I was in such a room where some participants could not hear the presenter; I could hear him, and there was nothing suspicious in the web console.

The interesting part is to have the logs from the client that is not hearing it; it’s something between that participant and jicofo …

OK, next time it happens I’ll instruct users to save the logs and send them to me. Thank you for the hint.

I saw you have saving logs enabled, and you can do it from the UI via the local thumbnail menu.
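
For reference, the flag behind that menu entry should be the enableSaveLogs option in config.js; a minimal sketch, assuming the option name is unchanged in your release:

// config.js
enableSaveLogs: true, // adds a "Save Logs" entry to the local thumbnail menu so users can download their client logs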

I’ve enabled it for this case, but is a log saved by me interesting to you when everything is OK for me in that room, while some others in the room have problems hearing the teacher? I assumed that only the person encountering the problem can save useful logs. Thank you!

Nope.

:+1:

Hi @damencho, one of our students was able to make a JSON log; she is not able to hear the presenter, but the other 19 students are. I’ll send you the log by email.

Thank you very much for the help!

Hi @damencho, it seems that the changes I made last night did not help much; the CPU usage graph shows higher prosody usage than last week.

Do you have any other suggestions?

Thank you very much for your patience.

Not really … What are the specs of the machine hosting prosody, and which components run there?

Hi, it is an older Xeon box, 2 CPUs / 8 cores / 16 threads:

processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
stepping : 5
microcode : 0x11
cpu MHz : 1595.938
cache size : 8192 KB
physical id : 1
siblings : 8
core id : 3
cpu cores : 4
apicid : 23
initial apicid : 23
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm pti tpr_shadow vnmi flexpriority ept vpid dtherm ida
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 4521.81
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

meet:~/scripts# free
               total        used        free      shared  buff/cache   available
Mem:        45317700     2410308    40785568       74724     2121824    42314364
Swap:        7815584           0     7815584
meet:~/scripts#

1 Gb Ethernet to the core switch and 10 Gb Ethernet to the Internet.

It is running Jicofo, Prosody and a TURN server. Should it be OK for this purpose?

Thank you!

Seems so … but it seems the single core used by an AWS m5.xlarge instance allows prosody to handle over 5000 simultaneous participants, and I have seen jumps of 1000 within a few minutes …
I have no further explanation. You say you were seeing less load with the previous jitsi-meet, the one from January? Do you have graphs from that? Was the number of participants the same, joining in the same pattern, like 900 within 2 minutes?

I trust you! :slight_smile: When you look at the CPU usage graph, you can see a few days from last week when we were still on the 12 January build, and the graph looks different: the CPU load is lower. I upgraded to the new stable build on Sunday, 25 April. The number of users is roughly the same, because we have a fixed lesson schedule that repeats every week of the semester.

Here you can see a few days on the old stable (20–25 April):

And here is the current week’s graph with the upgraded Jitsi:

It looks like those CPU peaks are higher on the newer build. Can the useNewBandwidthAllocationStrategy: true option have such an impact? Or is it executed on the bridges only, without taxing prosody?

Thank you for your time!

Yeah those are messages going over the websocket channel to the bridge.
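
For context, a sketch of where that flag is set, assuming it is still exposed under this name in config.js of the stable release discussed here:

// config.js
useNewBandwidthAllocationStrategy: true, // bandwidth allocation is computed on the bridge;
                                         // clients send their receiver constraints to the bridge
                                         // over the websocket (bridge channel)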

Thank you, now I’m out of ideas… :frowning:

Yeah, so am I. I’m about to blame jicofo for it, as the CPU usage you show is overall: you have 16 cores and prosody uses just one of them, so it is possible you were maxing out prosody before as well, but those CPU jumps you see are maybe because of all the jicofo changes … and jicofo is the problem …
We checked everything on the prosody side and there is nothing I can spot. And prosody is just 1/16 of that graph, so it cannot spike it up like that … maybe.

No, that graph shows only prosody CPU usage! Here is a graph with Jicofo (Java) usage and TURN usage too:

Jicofo used less CPU than prosody on our Jitsi installation, and I cannot see any change in Jicofo CPU usage after the upgrade, only in Prosody. :frowning:

Thank you for your interest!