Simulcast with multiple JVBs


We have deployed multiple Jitsi stacks (each one with 1 Prosody, 1 Jicofo and multiple JVBs) on AWS ECS (Docker), following the tutorial.
The stack has been working correctly since July.
In order to enable simulcast we have tried to follow the tips of @emrah but couldn’t find an appropriate solution.

If we are right, we need a way to map a given string (the JVB’s server-id) to an address (ip:port) in our load balancer.
We would like to obtain this mapping dynamically, because maintaining a “static” mapping in our infrastructure is quite complicated.
As we understand how Jicofo works, it already has this mapping, so we would like to know whether we can retrieve this information, or be notified when the mapping changes.


Simulcast is not related to that at all. Perhaps you mean OCTO? In that case, you can have all JVBs connect to all Jicofos.

Today we are not using OCTO in our infrastructure.
We are not sure this is correct, but as we understand it, in order to use simulcast we need a colibri-websocket connection between the client and the JVB serving it. Since none of our JVBs has a domain name specific to itself, we need a “load balancer” to forward the connection from our domain name (the same one used to reach Prosody) to the right JVB based on the “server-id”.
We would like to know whether there is any way to obtain the mapping from “server-id” to JVB address dynamically (as we understand it, and as we see in the Jicofo logs, when a new JVB starts it registers itself with its “server-id”, and we would like to know if this mapping is available for use in our “load balancer”), or what we need to change in our configuration to make simulcast work.

In your nginx config you need this section: docker-jitsi-meet/meet.conf at cb4d9413b7481b9767ff5d2ec09e22bdc76e74e3 · jitsi/docker-jitsi-meet · GitHub
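For reference, the relevant block in that meet.conf looks roughly like the sketch below (reconstructed from memory, not copied verbatim; the JVB websocket port 9090 and the exact character class may differ in your version):

```nginx
# colibri (JVB) websockets: the first path segment is the JVB's server-id,
# which nginx uses directly as the upstream host to proxy to
location ~ ^/colibri-ws/([a-zA-Z0-9\.-]+)/(.*) {
    proxy_pass http://$1:9090/colibri-ws/$1/$2$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
```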

And in the JVB config, set the websockets server-id to an internal address that nginx can use to communicate with your JVB instance: jitsi-videobridge/ at master · jitsi/jitsi-videobridge · GitHub
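In HOCON form (jvb.conf) that section might look like the sketch below; `meet.example.com` and the server-id address are placeholders for your own values:

```hocon
videobridge {
    websockets {
        enabled = true
        # public domain the clients connect to (served by nginx)
        domain = "meet.example.com:443"
        tls = true
        # internal address nginx uses to reach this JVB instance
        server-id = "10.0.1.23"
    }
}
```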

Thanks, we have already seen this configuration, but we would prefer not to use the JVB’s IP directly. Instead we would like some key that is mapped internally to an IP address (we think this is the case in Jicofo), in order to limit nginx’s ability to act as a proxy into our private network. In the worst-case scenario where nothing like that is possible, we will use that option, but then we need to isolate our video infrastructure in a separate network, which is quite complicated.

Can’t you use DNS for that?

Thanks for your reply. We would like to avoid maintaining a mapping (in DNS records or anywhere else) of our JVBs’ “server-id”, and we thought we could use the one managed by Jicofo instead. We prefer to limit the number of machines reachable through the load balancer (nginx), and setting the JVB’s IP address as the “server-id” makes it mandatory to have a private IP range dedicated to the JVBs.
We’ll do that for the moment and explore other options to further restrict the machines and IP addresses reachable through nginx (without maintaining a mapping ourselves); if you have any clue we’ll be pleased to test it.

Can you please clarify this? I don’t understand what you are trying to solve, but you are complicating your setup on account of it :slight_smile:

You could always switch back to datachannels. We’ve found them less reliable in general, but they do work and are easier to deploy because you don’t need to proxy anything via nginx.
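If I recall correctly, the bridge channel type is chosen on the client side in config.js; assuming a reasonably recent jitsi-meet, switching back could look like this (the option name is from memory, please verify against your version):

```javascript
// config.js (jitsi-meet client config) -- sketch
// use SCTP datachannels for the bridge channel instead of colibri websockets
config.openBridgeChannel = 'datachannel'; // 'websocket' enables the nginx-proxied path
```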

We think that this configuration (using nginx as a proxy to a JVB at a public or private IP) could lead to a potential SSRF in the future (it’s not exactly the same thing, but you can find [some examples in WebRTC using a TURN server]); that does not mean it could actually be exploited. In order to mitigate this (probably low) risk, we would like a mapping that allows us to check that the given server-id is a valid “target” instead of blindly forwarding to it.
If we can’t do that, we are planning to isolate the Jitsi infrastructure in a separate network that cannot access our other private networks, but moving the infrastructure is quite painful.
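One way to approximate that validation without letting nginx reach arbitrary targets could be an nginx `map` acting as an allowlist, so only known server-ids are proxied; the ids and addresses below are hypothetical, and the list still has to be maintained by hand (which is exactly what we’d like to avoid):

```nginx
# http context: allowlist mapping server-id -> internal JVB address (hypothetical values)
map $jvb_server_id $jvb_upstream {
    default  "";
    "jvb-1"  10.0.1.23;
    "jvb-2"  10.0.1.24;
}

server {
    # ...
    location ~ ^/colibri-ws/(?<jvb_server_id>[a-zA-Z0-9\.-]+)/(?<colibri_path>.*) {
        # reject any server-id not in the allowlist instead of blindly proxying
        if ($jvb_upstream = "") { return 403; }
        proxy_pass http://$jvb_upstream:9090/colibri-ws/$jvb_server_id/$colibri_path$is_args$args;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```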

I see what you mean. We currently don’t have a way to validate those addresses. In the Docker setup, for example, the network is a user-defined network for the Jitsi components, so I think this problem doesn’t apply there. In our own deployment we don’t proxy the traffic; the JVBs are publicly accessible via DNS.


Thanks for your answer, we’ll try to do this.