How to use TLS for the component connection (i.e. jicofo/videobridge)


I’ve been following the “Load Balance Jitsi Meet” tutorial so that I can add multiple videobridges. One of the steps is to change the prosody config to listen to the public interface, and then to change the JICOFO_HOST setting to point to the domain.
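For context, the prosody step of that tutorial amounts to something like this in /etc/prosody/prosody.cfg.lua (a sketch; "0.0.0.0" is my placeholder for "all interfaces", the default is loopback only):

```lua
-- prosody.cfg.lua (sketch): let external components connect
-- on a non-loopback interface instead of 127.0.0.1 only
component_ports = { 5347 }
component_interface = "0.0.0.0"
```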

At this point the communication is passing over the public internet, so preferably we would want this to be protected by TLS, right?

My setup is a bit complicated: I have installed prosody/jicofo in a Docker container and put it into a Kubernetes deployment, so I have used two domains to map to each of the exposed ports:

components.example.com:443 --> nginx ingress --> port 5347
video.example.com:443 --> nginx ingress --> port 80

Note: the ingress is taking care of HTTPS termination, so it is all plain HTTP by the time it hits the internal container.

Therefore I have set up my jicofo with the following settings:
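Roughly like this in /etc/jitsi/jicofo/config (reconstructing from the domain mapping above; the secret is a placeholder):

```shell
# /etc/jitsi/jicofo/config (sketch)
JICOFO_HOST=components.example.com   # the domain in front of prosody's 5347
JICOFO_HOSTNAME=video.example.com    # the XMPP domain
JICOFO_PORT=443
JICOFO_SECRET=placeholder-secret     # component secret shared with prosody
```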


Unfortunately I am getting the following error in the log files:

org.xmpp.component.ComponentException: org.xmlpull.v1.XmlPullParserException: only whitespace content allowed before start tag and not H (position: START_DOCUMENT seen H... @1:1)
        at org.jivesoftware.whack.ExternalComponent.connect(
        at org.jivesoftware.whack.ExternalComponentManager.addComponent(
        at org.jivesoftware.whack.ExternalComponentManager.addComponent(
        at org.jitsi.retry.RetryStrategy$
        at java.util.concurrent.Executors$
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
        at java.util.concurrent.ScheduledThreadPoolExecutor$
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$

I get the feeling this is because jicofo is still trying to communicate over plain HTTP… For instance, I think it is equivalent to me trying the following curl command:

$ curl
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>

Is there any way to tell jicofo (and presumably videobridge as well) to use TLS when using the component connection?



I should probably mention that if I do the curl command properly, it seems to get through OK to port 5347 in the container:

$ curl
<?xml version='1.0'?><stream:stream id='' xmlns:stream='' version='1.0' xmlns='jabber:component:accept'><stream:error><not-well-formed xmlns='urn:ietf:params:xml:ns:xmpp-streams'/></stream:error></stream:stream>


I’m a little confused here.
Jicofo creates a direct connection to JICOFO_HOST using JICOFO_PORT; it is not using HTTP or HTTPS, it is a direct socket connection.

You don’t need nginx here, you just need to forward port 443 to 5347.
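In other words, a raw TCP forward instead of an HTTP proxy. A sketch of one way to do it on the host:

```shell
# Forward incoming TCP 443 straight to prosody's component port 5347.
# No HTTP or TLS termination happens in between; it is the same byte stream.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 5347
```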


Right… that makes sense! I guess I don’t understand how the socket connection prevents MITM in that case? I presume that once the connection is started it is sending encrypted data through the stream and not plain text?

Is this categorized as an s2s connection as defined on this page, i.e. do I need to open up a specific port on the videobridge side, apart from 10000-20000 for the video streams, so that it can do a back-lookup or something?


Well, nginx is an HTTP server, not a transparent proxy, but you don’t need anything in the middle to access a port, I suppose.
We use nginx to proxy the BOSH connection in order to have one front-facing service that does the SSL termination; that is the only place where you add a valid certificate for the connection.
No, this is not an s2s connection.
For JVB you need UDP 10000-20000 and TCP 443 (if JVB is started so that it can listen on lower ports; no idea how that works with Docker. If that is not the case, it will bind to TCP port 4443).
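On the JVB side that works out to roughly the following (a sketch, assuming ufw is the firewall in use):

```shell
# RTP/media streams
ufw allow 10000:20000/udp
# TCP fallback: 443 if jvb can bind low ports, otherwise it binds to 4443
ufw allow 443/tcp
ufw allow 4443/tcp
```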


Right, I think everything is OK with regard to the BOSH side (that is going via the other domain), and actually our videobridge will be on its own server (fun fact: Kubernetes can’t handle large port ranges, so we will have the signaling on our Kubernetes cloud, and the videobridges will be on their own VMs).

I’m mostly just worried about that specific component connection (the one that is usually on localhost:5347) that connects the videobridge and jicofo to prosody. Let’s say I do open up port 5347 directly; how is that specific connection protected? I know there is a secret key involved, but if the connection is plain text then it could be MITM’ed pretty easily.

I think I probably found a better page here:
It looks like you can use certificates for each component connection by adding an ssl section (or it looks like it will use the ssl settings from the general prosody config by default). So my assumption is that it will use this to connect securely over the socket. Is that right?
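Something along these lines, I mean (a sketch only; the component name, secret and cert paths are placeholders):

```lua
-- prosody.cfg.lua (sketch): per-component ssl section
Component "jvb.example.com"
    component_secret = "placeholder-secret"
    ssl = {
        key = "/etc/prosody/certs/example.com.key";
        certificate = "/etc/prosody/certs/example.com.crt";
    }
```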


What I am getting at is that even though I managed to move the SSL termination off to the nginx ingress, we would still need to install the TLS certificate on the box, because it would be used for this specific connection, it seems?

My initial plan was that I could avoid dealing with certs on the signaling instance itself; like you said, having one place where a valid certificate is needed. But it looks like this is a second place where one is needed (in the prosody config)?


I’m not sure that our components support that secure component connection. Since you own both the videobridge and the signaling node and you know the IP addresses, you can protect the port and connection with a simple firewall rule; there is no need to change the port from 5347 to 443.
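i.e. something like this on the signaling node (a sketch; 203.0.113.10 stands in for the videobridge’s address):

```shell
# Accept the component port only from the videobridge, drop everything else
iptables -A INPUT -p tcp --dport 5347 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 5347 -j DROP
```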

Are you using our Docker images?
There, the connection between the videobridge and prosody is internal and not open to the world. The Docker images just got Let’s Encrypt support, so you have only one certificate, used in nginx, and that’s it.


OK, cool, you are right: keep it simple!

It is cool that you have a Docker container, but the problem is more to do with Kubernetes (this is the main reason I can’t put the videobridge and signaling together). I also want to support multiple videobridges, so this would become a problem anyway the moment we add a second videobridge. I also use tokens and have added code to patch websockets so that they work with the tokens, though admittedly I could use the image as a base and modify that.

Maybe what I will do instead is move signaling out of Kubernetes, create a VNet in Azure, and put all the VMs inside it. That way they can connect to each other internally within the VNet on that port without touching the public internet between components, and I only punch holes for the UDP ports and the nginx web traffic.

Thanks so much for your help! I think I have a pretty good handle on what I need to do.