Screenshare not showing for all connected users after relay/octo implementation on lib-jitsi-meet

I use the low-level library for a feature. After implementing cascaded bridges, the screen share doesn’t show for some users. I’d appreciate any leads on what I might be missing in the configs.

On the default Jitsi Meet version, I don’t experience this issue.

Are your websockets to the bridge operating successfully?
Have you set up the websockets between the bridges?

Yeah, I followed Tip: websocket and the additional JVBs to configure the websockets. How do I verify whether they are working?

For now, when I’m in a session, the traffic is distributed across the 3 bridges.
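For reference, the websocket settings each bridge needs in jvb.conf look roughly like this (the domain, port, and server-id values below are deployment-specific placeholders, not values from this thread):

```hocon
videobridge {
  http-servers {
    public {
      port = 9090                     # the port nginx proxies colibri-ws to
    }
  }
  websockets {
    enabled = true
    domain = "meet.example.com:443"   # public domain the clients connect through
    tls = true
    server-id = "jvb2"                # must be unique per bridge
  }
}
```

One way to verify from the browser: in the dev tools Network tab, filter on WS and look for a connection to wss://meet.example.com/colibri-ws/&lt;server-id&gt;/…; if it never completes the upgrade, the proxying for that server-id is the first thing to check.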

Check the js console logs for errors.

Also from the UI you can wait a few seconds and see whether you receive stats from remote participants; they are shown when you hover over the GSM bars.

On the equivalent setup, it works well. Screenshare and video streams work. The issue I’m facing is in my application, where the screen share shows at times and doesn’t show at other times.

There are no errors in the js console.

What about that?
Are you following the relay setup from here? jitsi-videobridge/ at master · jitsi/jitsi-videobridge · GitHub
If that is the case, the setup for websockets is: jitsi-videobridge/ at master · jitsi/jitsi-videobridge · GitHub

Let me try these out. Looks like I’m using an old config. This also means all the bridges will get their own blocks in nginx with different ports, right?

Do I make that change in jvb.conf as well under the http-server block?

Yep, there are settings in jvb.conf, in jicofo.conf, and in the nginx config to make the colibri relay work.
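A sketch of the nginx piece on the JMS, assuming an additional bridge with server-id jvb2 reachable at 10.0.0.2 (both placeholders): each extra bridge gets its own location block matching its server-id, all proxying to port 9090 on that bridge.

```nginx
# One block per additional bridge; the path segment must match that
# bridge's videobridge.websockets.server-id from its jvb.conf.
location ~ ^/colibri-ws/jvb2/(.*) {
    proxy_pass http://10.0.0.2:9090/colibri-ws/jvb2/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
```

So: separate blocks per bridge, but they differ by path and upstream address rather than by listen port.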


I’m trying to follow this but I’m seeing the following error in the logs:

JVB 2022-09-07 20:51:59.509 INFO: [23] [confId=6a5521870dfbff9c conf_name=randomactionsdateguiltily@conference.domain relayId=jitsi-videobridge local_ufrag=faoce1gccr6ril ufrag=faoce1gccr6ril] ConnectivityCheckClient.processTimeout#881: timeout for pair: [2a03:b0c0:1:d0:0:0:d18:1001]:10000/udp/host -> [2a03:b0c0:1:d0:0:0:dd0:8001]:58601/udp/host (stream-jitsi-videobridge.RTP), failing.

This is the log on the videobridge on the JMS. What could this mean?

That means this ICE candidate pair is not succeeding, but others may still succeed in establishing the link. This is normal, but if all of them are failing, that is a problem.
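If all pairs between two bridges time out, a first thing to check is that the bridges can reach each other directly on the media port. A minimal sketch of the relevant jvb.conf pieces (region and relay-id values are placeholders; the port shown is the default):

```hocon
# jvb.conf on each bridge
videobridge {
  relay {
    enabled = true
    region = "region1"      # placeholder; bridges may share a region
    relay-id = "jvb2"       # must be unique per bridge
  }
  ice {
    udp {
      port = 10000          # bridge-to-bridge ICE also runs over this port,
                            # so it must be open between the bridges themselves
    }
  }
}
```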

With the relay implementation, do I need to make any changes to the /etc/jitsi/meet/domain-config.js file?
I’m having a hard time getting things to work with the low-level library.

I’ve confirmed the issue has to do with the websocket connections for the additional videobridges.

I’m unable to access port 9090 on the JVB from the JMS. How do I go about this if the JVB is behind Cloudflare?

IIRC, there’s some issue with Cloudflare and UDP ports. Unless you subscribe to their Spectrum or Enterprise plan, you won’t be able to use Cloudflare through the ports it doesn’t natively support. And you don’t need to have JVB behind Cloudflare anyway, there’s no value to that. It won’t cache video data and if it did, that would be horrible for latency.

@damencho do I still need to set config.deploymentInfo.userRegion for the following setup?
1 JMS with 3 external JVBs with the relay configuration for octo. My use case is just to have multiple videobridges to support an increased number of users.

What are the changes to be added to the config.js for the relay configuration?

That is needed only if you want region-based bridge selection. There is nothing you need to add in config.js for relay.
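For completeness, if region-based selection is wanted later, the moving parts on the JMS would look roughly like this (the region name and strategy choice below are illustrative assumptions; without regions, octo only needs to be enabled):

```hocon
# jicofo.conf on the JMS
jicofo {
  octo {
    enabled = true          # required for conferences to span multiple bridges
  }
  bridge {
    # an octo-aware strategy; only needed for region-based selection
    selection-strategy = RegionBasedBridgeSelectionStrategy
  }
}
```

The matching client-side hint would be config.deploymentInfo.userRegion in config.js, paired with each bridge’s videobridge.relay.region in its jvb.conf.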

Thanks so much. I think I figured things out. Running a few tests but for now, seems everything is working fine.

@damencho after configuring the external videobridges, I deactivated the videobridge on the JMS. Currently, I’m experiencing some delays. Any recommendations to reduce the delay?

What kind of delays? Where are the new bridges located compared to the one installed from jitsi-meet?