Network drops causing Ghost users

Hi there.

We’ve been running Jitsi for a while now, pretty much a stock Docker build.

One issue I’m working on has to do with client network interruptions.
If a client goes offline and comes back online, the BOSH timeout is triggered, which then removes the user from the room.

Unfortunately, the user isn’t completely removed. Sometimes they can still hear and see everything in the room, but no one else can see them.

They don’t get disconnected from the MUC though, as they can still send chat messages.

I’ve tried playing around with the xmppPing values in the JitsiConnection options, but they don’t seem to make any difference to when the timeout triggers.
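For context, this is roughly what we pass to JitsiConnection today. It is only a sketch: the domains are placeholders, and the xmppPing sub-keys shown (interval, timeout and threshold, in milliseconds) are what I believe our lib-jitsi-meet version reads; the exact names may differ between versions.

// Sketch of our connection options; domains are placeholders.
// JitsiMeetJS.init(...) is assumed to have been called already.
const options = {
    hosts: {
        domain: 'meet.example.com',
        muc: 'conference.meet.example.com'
    },
    serviceUrl: 'https://meet.example.com/http-bind', // BOSH today
    xmppPing: {
        interval: 10000, // send a ping every 10 s
        timeout: 5000,   // wait up to 5 s for the pong
        threshold: 2     // give up after this many missed pongs
    }
};

const connection = new JitsiMeetJS.JitsiConnection(null, null, options);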

I’m considering switching to WebSocket connections with stream management (Smacks) instead of BOSH for this reason, as many of the clients have flaky internet.
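On the client side I expect the switch to look roughly like this. Again a sketch under assumptions: the /xmpp-websocket path and the websocketKeepAlive option are what I understand the stock setup uses, and it presumes Prosody has the websocket and smacks modules enabled and the web server proxies the endpoint through.

// Sketch of the WebSocket variant of the same options (placeholders as above).
const wsOptions = {
    hosts: {
        domain: 'meet.example.com',
        muc: 'conference.meet.example.com'
    },
    // XMPP over WebSocket instead of BOSH’s /http-bind:
    serviceUrl: 'wss://meet.example.com/xmpp-websocket',
    // Periodic keep-alive on the WebSocket; option name/units may vary by version.
    websocketKeepAlive: 30000
};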

Are there any other considerations I should look at? :thinking:

This is strange. Which version of jitsi-meet is that? After the automatic reload of the page they should successfully connect to the conference.

Also, what version of Prosody are you running?

Our setup is built from:

jitsi/jvb:stable-5963
jitsi/jicofo:stable-5963
jitsi/prosody:stable-5963

https://github.com/jitsi/docker-jitsi-meet/archive/refs/tags/stable-5963.tar.gz

Hope this helps?

What’s the output of this command?

dpkg -s prosody | grep Version

Hey Freddie,

It’s Version: 0.11.10-1~buster1. This is our development environment; our production system is also having this issue, and that deployment is much older (built at the end of last year).

I forgot to mention: we are not using the web interface that comes with the build, but a custom front-end. (I should probably test whether this issue can be recreated on the supplied front-end as well.)

I agree, because I have a strong hunch this has to do with your front-end integration. One thing’s for sure though: migrating to WebSockets will give a better experience.

Yes, sure. I wasn’t involved in building the front-end, but I maintain it and am still digging up all of its issues.
As I said, the issue is present in both environments, so it did dawn on me that the problem is in the front-end code, not in the Jitsi components.

WebSockets do feel like the right direction, so we’ll give them a go!

Thanks for the speedy replies, really appreciate it :1st_place_medal:

Hi, I’ll chime in on this thread.
We have been running Jitsi for a while now and have the same problem. We have been running it with XMPP WebSockets for a long time and unfortunately can’t see any improvement. To try to solve the problem, we turned off P2P, because we sometimes saw this on P2P connections, and we prevented caching of e.g. the libs. This seems to have improved things a bit, but it still doesn’t seem to have fixed the problem completely. We are currently using release 5963 of Jitsi and Prosody version 0.11.10-1. We are open to further hints or an exchange of experiences.
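For what it is worth, turning off P2P was only this change in config.js (a sketch of the relevant excerpt; the rest of the file is the stock configuration):

// config.js excerpt (sketch): route 1:1 calls through the JVB as well,
// instead of letting them negotiate a direct peer-to-peer connection.
var config = {
    // ...existing options unchanged...
    p2p: {
        enabled: false
    }
};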

I tried to reproduce this issue on my side, but Jitsi handled it really well. The ghosts were dropped in about a minute.

dpkg -l "jitsi-*" "prosody*" jicofo

||/ Name                  Version             Architecture
+++-=====================-===================-============
ii  jicofo                1.0-813-1           all
ii  jitsi-meet            2.0.6433-1          all
ii  jitsi-meet-prosody    1.0.5415-1          all
un  jitsi-meet-tokens     <none>              <none>
ii  jitsi-meet-turnserver 1.0.5415-1          all
ii  jitsi-meet-web        1.0.5415-1          all
ii  jitsi-meet-web-config 1.0.5415-1          all
un  jitsi-videobridge     <none>              <none>
ii  jitsi-videobridge2    2.1-570-gb802be83-1 all
ii  prosody               0.11.9-2            amd64
un  prosody-0.11          <none>              <none>
un  prosody-modules       <none>              <none>
un  prosody-trunk         <none>              <none>

A hint? You are three releases behind the current stable…

Thanks Emrah and Kaip.

I understand that the bug I’m referring to is neither easy to replicate nor easy to explain.
I’m definitely hearing your feedback and will try to factor it into my investigation.

Thanks for your shout-out :+1: :grin:

To add a bit more depth to the situation, for anyone it may concern:

The issue we are facing has to do with what happens when the BOSH timeout occurs after a network interruption. The BOSH timeout hits at 10 seconds, which then triggers a kick-out timeout of roughly 1:30 min.
Rejoining within the allotted 10 seconds gets the client back into the conversation, but the kick-out timeout has still been triggered. So, regardless of connection quality, the user will still be booted out after the 10 s + 1:30 min timeout…

We are aiming to know exactly when the user is going to be kicked out. It doesn’t matter what time or delay we are talking about, but if a user needs to be booted after 60 seconds, then they should be booted after 60 seconds, no question.
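What we are experimenting with on the front-end is making that deadline explicit ourselves: start a timer when lib-jitsi-meet reports the connection as interrupted, cancel it if the connection is restored in time, and otherwise tear the conference down cleanly so no ghost lingers. The sketch below assumes the CONNECTION_INTERRUPTED / CONNECTION_RESTORED conference events and an already-joined conference object; treat it as an illustration of the idea rather than a drop-in recipe.

// Sketch: enforce our own fixed deadline on the client.
// Assumes JitsiMeetJS is loaded and `connection` / `conference` already exist.
const KICK_DEADLINE_MS = 60000; // "booted after 60 seconds, no question"
const confEvents = JitsiMeetJS.events.conference;
let kickTimer = null;

conference.on(confEvents.CONNECTION_INTERRUPTED, () => {
    // Start the clock the moment the connection drops.
    kickTimer = setTimeout(() => {
        // Deadline passed: leave the MUC and drop the XMPP connection
        // explicitly instead of waiting for the server-side timeouts.
        conference.leave().finally(() => connection.disconnect());
    }, KICK_DEADLINE_MS);
});

conference.on(confEvents.CONNECTION_RESTORED, () => {
    // Back in time: cancel the pending kick.
    if (kickTimer) {
        clearTimeout(kickTimer);
        kickTimer = null;
    }
});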

Again, thanks for the feedback, greatly appreciated :ok_hand: