Deploying Bridge on CDN

Dear all,
I wonder whether, to improve the meeting QoS for Jitsi, we can scale out with multiple bridges but also scale up using a CDN service? With several bridges we can increase the number of concurrent meetings, and with a CDN we can guarantee the quality (delay, jitter) of the meetings.

If your answer is yes for using a CDN, then how can we configure it? As far as I know, a CDN can “cache” HTTP(S) content (DASH, HLS streaming) or RTSP streams. The question is: for a WebRTC URL, how can we cache it with a CDN?
Many thanks in advance for your help.

We are already using a CDN, but it serves only for delivering the static web content; it cannot be used for real-time data.
To configure your deployment to use a CDN you just need to edit one file: set the html base attribute to point to your CDN copy of the web content.
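For illustration, in a Debian-style jitsi-meet deployment the `base.html` file included by `index.html` can be pointed at the CDN copy (the CDN hostname below is a placeholder, not a real one):

```html
<!-- /usr/share/jitsi-meet/base.html : hypothetical CDN hostname -->
<!-- Relative asset URLs (js, css, images) are then fetched from the CDN copy. -->
<base href="https://cdn.example.com/jitsi-meet/" />
```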

Dear @damencho, I slightly disagree with you on the point that a CDN serves only for delivering static web content. Working in the domain of digital TV, we use CDNs a lot to distribute live TV channels via RTSP and DASH/HLS streaming. The former uses RTP packets to deliver to clients (and WebRTC works the same way). That is why I wonder how we can bring the CDN benefit into the streaming feature of Jitsi.

I’m not familiar with that, and I don’t see how adding another hop between the clients and the bridge can be of any help; it will add extra delay and jitter to the packets.

Thank you @damencho. You are totally right about the CDN. The delay requirement in linear TV (even for live programs) is still less strict than for video conferencing :-).
We would like to build a robust topology for Jitsi (just like yours) dedicated to our use case: facilitating video conferences between Vietnam and the US. Octo is a good solution. We deployed 2 regions, one in VN and one in the US, but we still experience some fluctuation in quality. Can you share some techniques to tune the system in such circumstances, please? Leased lines between the bridges in the 2 regions?
Many thanks for your help
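As a starting point, here is a hedged sketch of what the two-region Octo settings can look like with the legacy `sip-communicator.properties` keys (all addresses and region names below are placeholders for illustration, not actual values):

```properties
# /etc/jitsi/videobridge/sip-communicator.properties on the VN bridge (hypothetical values)
org.jitsi.videobridge.octo.BIND_ADDRESS=10.0.1.10
org.jitsi.videobridge.octo.BIND_PORT=4096
org.jitsi.videobridge.REGION=vn

# /etc/jitsi/jicofo/sip-communicator.properties: prefer bridges in the participant's region
org.jitsi.jicofo.BridgeSelector.BRIDGE_SELECTION_STRATEGY=RegionBasedBridgeSelectionStrategy
```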

We are currently using AWS with direct connections in AWS between the regions; I don’t think there is anything specific in those connections.

Dear @damencho, thank you for sharing your experience with the deployment. If I may ask, for how many concurrent participants do you scale it? And will you move it to a paid service soon?
Best regards

There is no plan to change anything to make it paid or whatsoever. There is no limit on users at the moment and it can scale up on demand, so there is no number that can be given. We have shards in 6 or 7 regions, and if needed we can spin up new shards; jvb instances scale up automatically.

Dear @damencho, do you have some documents on good practices for deploying Jitsi? By shard, do you mean a physical/virtual server, with the jvb instances probably being Docker deployments? And do you then use Kubernetes to launch jvbs dynamically?
Thank you for your answer
Best regards

No such documents. There are a few videos, though.

Dear @damencho, then please share those videos with me. I can hardly find them.
Many thanks

They are in the news section on the website; you can find the ones about deployments, including the one about Octo.

Dear @damencho, you use direct connections in AWS, which are expensive, and you still keep the website free?
Due to the special situation in Vietnam, can we somehow set up a new region for Vietnam with our own servers, with a direct link to your video bridges? In other words, I would like to try the Octo configuration with 2 regions: our servers for Vietnam and your servers for the rest of the world. Is it feasible?
We really want to experience the quality of video conferencing via Octo before going further and setting up a sophisticated (and expensive) Octo network like yours for ourselves.
Many thanks in advance for some guidance on this issue.
Best wishes

I’m afraid this is not possible, sorry.

Dear @damencho, the video and the article on Octo are very interesting. There are just several points I would like to understand in depth:
- Based on which criteria do you launch a new JVB in a shard?
- In our configuration, we observe that whenever a JVB’s CPU usage exceeds 40%, we experience more quality problems in conference rooms (disconnections, lost audio…). So is 40% CPU usage the recommendation for a JVB (and probably a trigger to launch an additional JVB), or is it just our own problem somewhere in the configuration / infrastructure? If the latter is the case, can you give me some hints to debug the problem?
- Do you have some documentation on how to launch a new JVB dynamically and connect it to the current infrastructure?
- Do you have documentation on load balancing prosody and jicofo? From the shard architecture I suppose the centralized jicofo can be a bottleneck for the whole system.
Many thanks

In the past we were using bandwidth, but recently we switched to using system load.

What instances do you use for your jvb machines? Are you testing with default configuration and simulcast enabled? What is the bandwidth available for the bridge VM?

Nope, the recommended way is to use MUCs, but if you are using Octo this is already the case. All JVBs have the same config except the MUC nickname, and when they launch they connect to the XMPP service and the MUC using the same username and password.
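For illustration, a bridge-side MUC client config along those lines might look like this (hypothetical domain, password and nickname; only `MUC_NICKNAME` differs per bridge):

```properties
# /etc/jitsi/videobridge/sip-communicator.properties (hypothetical values)
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb-1
```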

Nope, we do not have that. If you are asking how jicofo chooses bridges, this is in the code. Jicofo currently chooses a jvb based on bandwidth.

Dear @damencho
Thank you very much for your detailed answers. I just want to restate 2 of my questions above a little more clearly (for the others you already gave me comprehensive answers):
-In fact, I would like to know the mechanism for starting a new JVB instance dynamically. Do you use Docker and a Docker orchestrator, or do you use some third-party tools for that purpose?
-Concerning jicofo, I know how jicofo chooses a bridge to host a conference. My question is just: what happens if jicofo itself fails? There is only one instance of jicofo and no load balancing for this component, so how can we guarantee the operation of jicofo?
Many thanks

We use AWS auto-scaling groups with rules based on CloudWatch metrics.
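A rough sketch of such a rule with the AWS CLI, assuming a target-tracking policy on average CPU (the group name and target value below are made up for illustration, not the actual production setup):

```shell
# Hypothetical: scale the JVB auto-scaling group to keep average CPU near 40%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name jvb-asg \
  --policy-name jvb-cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 40.0
  }'
```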

There are multiple shards in multiple regions, when a shard is marked as unhealthy due to health check failure, all requests to that shard are routed to a new shard. The user experience is that they will see a reload screen and after a few seconds will join the call again, but this time it will be on a different shard.

There are a number of HAProxy instances fronting the service, which track the health of the shards and keep a shared table mapping roomName -> shard, so they know where to route requests.
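For context, here is a hedged sketch of how that can be expressed in `haproxy.cfg`, with a stick table shared between instances over the peers protocol (all names, addresses and the health-check path are placeholders, not the actual production config):

```
peers jitsi_peers
    peer haproxy-1 10.0.0.11:1024
    peer haproxy-2 10.0.0.12:1024

backend shards
    balance roundrobin
    # shared mapping: room name -> shard that owns the conference
    stick-table type string len 128 size 200k expire 12h peers jitsi_peers
    stick on url_param(room)
    option httpchk GET /about/health
    server shard-1 10.0.1.10:443 ssl verify none check
    server shard-2 10.0.1.11:443 ssl verify none check
```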

Dear @damencho
Thank you again for your prompt answer.
I quote here your answer
“…There are multiple shards in multiple regions, when a shard is marked as unhealthy due to health check failure, all requests to that shard are routed to a new shard. …”
It seems to me that the healthy JVBs in an unhealthy shard cannot be reused by a healthy shard (located in the same region, for backup reasons), can they? Isn’t that a bit of a waste of resources? Or can a JVB not be connected in full mesh to several jicofo services?
Many thanks

If you are using Octo every jvb is connected to every shard. So it will be used if it is healthy.
If you are not using Octo, then the answer is yes, those will not be used. But mind that losing a shard is a very rare situation; we have seen it maybe twice this year… And normally you will be notified so you can take some action and not leave it like that.