Is it possible to make UDP streams pass through a gateway?

I’ve deployed Jitsi on my backend server and it works as expected. Basically, I just open port 10000 and map port 443 to port 6443, and then I can create and join a video meeting successfully. In other words, I use port 6443 for the TLS connection and port 10000 to accept the UDP streams.

Now I use another machine as my gateway, on which I’ve deployed the gateway component Envoy. My goal is to make all requests and data streams pass through the gateway.

Here is the config of my Envoy:

```yaml
static_resources:
  listeners:
  - name: listener_jitsi_https
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 6443
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          upgrade_configs:
          - upgrade_type: websocket
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: cluster_jitsi_https
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/home/administrator/envoy/xxx.crt"
              private_key:
                filename: "/home/administrator/envoy/xxx.key"
  - name: listener_jitsi_udp
    reuse_port: true
    address:
      socket_address:
        protocol: UDP
        address: 0.0.0.0
        port_value: 10000
    udp_listener_config:
      downstream_socket_config:
        max_rx_datagram_size: 9000
    access_log:
    - name: envoy.access_loggers.file
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
        path: "/home/administrator/envoy/udp.log"
    listener_filters:
    - name: envoy.filters.udp_listener.udp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.UdpProxyConfig
        stat_prefix: example_ingress_udp
        cluster: cluster_jitsi_udp
        upstream_socket_config:
          max_rx_datagram_size: 9000
  clusters:
  - name: cluster_jitsi_https
    connect_timeout: 30s
    type: LOGICAL_DNS
    load_assignment:
      cluster_name: jitsi_cluster_https
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xxx
                port_value: 6443
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
  - name: cluster_jitsi_udp
    connect_timeout: 30s
    type: LOGICAL_DNS
    load_assignment:
      cluster_name: jitsi_cluster_udp
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xxx
                port_value: 10000
```

As you can see, I route HTTPS (and WSS) requests from IP_Gateway:6443 to xxx:6443, and I route UDP traffic from IP_Gateway:10000 to xxx:10000. (xxx is the domain name of my backend server.)

Here is how my setup looks:

client:6443 —> gateway:6443 —> Jitsi:6443(443)
client:10000 —> gateway:10000 —> Jitsi:10000

Now I can create a meeting, but the video streams DON’T pass through the gateway. The client and the Jitsi service still communicate directly with each other.

Then I added a new line to /etc/hosts on my client machine (`IP_Gateway xxx`) so that all requests to xxx resolve to my gateway machine.

But still, the client and the Jitsi service communicate directly with each other.

It seems that once the Jitsi service learns the client’s IP, the two start communicating directly.

Can someone help me? Is it possible to make all requests and responses pass through my gateway? Do I need to change the Jitsi source code to do so?

You’ll want to read up on how ICE works in a WebRTC context and how candidates are signalled.

At startup, JVB learns its external IP. If it’s behind NAT, that happens via an AWS IMDS lookup (for AWS instances), STUN (for other environments), or manual configuration.
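For example, the STUN-based discovery is controlled by an ice4j property in sip-communicator.properties (the server address below is a placeholder; substitute any STUN server reachable from the bridge):

```
# Ask a STUN server for the bridge's public (server-reflexive) address
# at startup. stun.example.com:443 is a placeholder.
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=stun.example.com:443
```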

When Jicofo allocates a channel on JVB for a participant, that external IP is given to the participant as an ICE candidate, via the signalling that happens over the XMPP websocket. The client in return gives its own IP address(es).

You can configure JVB with your “gateway” as its external IP, which will cause that IP to be given to the user as JVB’s ICE candidate, and the client → JVB traffic will come through your gateway. I think that’s org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS in the old sip-communicator.properties, maybe it has a corresponding key in the new config.
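For reference, the manual mapping in the old sip-communicator.properties looks something like this (the property names are real ice4j keys; the addresses are placeholders to replace with your JVB's private IP and your gateway's IP):

```
# Advertise the gateway's address in ICE candidates instead of the
# address JVB discovers for itself. 10.0.0.5 / 203.0.113.10 are placeholders.
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.5
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.10
```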

But remember that UDP is connectionless, and you have to think about each flow individually. The above only applies to the traffic from client → JVB. In the other direction, JVB → client, JVB will send packets to the IP that the client signalled to JVB. You will only be able to make that flow go through your gateway if you either implement something to rewrite the candidates, or perform some rewriting of the packets at the network layer (e.g. via iptables/nftables).
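As a sketch only (not a tested setup), the network-layer option could look like the following nftables fragment on the JVB host, diverting JVB's outgoing media to the gateway instead of the client; 192.0.2.1 stands in for the gateway's IP, and it assumes the gateway can then forward the diverted flow on to the real client:

```
# Hypothetical nftables ruleset: redirect locally generated UDP media
# (source port 10000, i.e. JVB's media port) to the gateway.
table ip jvb_divert {
  chain output {
    type nat hook output priority -100; policy accept;
    udp sport 10000 dnat to 192.0.2.1:10000
  }
}
```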

What are you trying to achieve?

Thanks for your help.

What am I trying to achieve?

Well, I’m trying to make all data packets between the client and the backend services pass through my gateway (Envoy) so that I can monitor all the metrics coming from the gateway (Envoy can report metrics data).

For example, if there is a huge meeting room or some communication problem, my gateway can alert me directly.

You may consider extracting needed metrics at each individual JVB instead. The UDP media traffic is very sensitive to delay & loss. Envoy is quite fast, but if you have any significant traffic you may find that it becomes a bottleneck.

Additionally, the metrics you will be able to capture in Envoy are all at the network layer and not all that useful compared to the metrics JVB reports, since Envoy doesn’t have contextual information about the RTP session. If you want those lower-level network metrics you can easily capture them on the JVBs as well as higher-level RTP metrics anyway.
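To illustrate the JVB-side approach, a small script along these lines could poll each bridge's /colibri/stats endpoint (JVB's private REST API, port 8080 by default when enabled) and raise alerts; the URL, field names, and thresholds here are assumptions to adapt to your deployment:

```python
import json
import urllib.request

# Default private REST port for JVB's statistics API (an assumption;
# adjust to your deployment, and enable the REST API on the bridge).
JVB_STATS_URL = "http://localhost:8080/colibri/stats"

def check_stats(stats, max_participants=50, max_loss=0.05):
    """Return a list of warnings derived from a JVB stats payload."""
    warnings = []
    if stats.get("largest_conference", 0) > max_participants:
        warnings.append(
            f"large conference: {stats['largest_conference']} participants")
    if stats.get("rtp_loss", 0.0) > max_loss:
        warnings.append(f"high RTP loss: {stats['rtp_loss']:.1%}")
    return warnings

def poll():
    """Fetch the stats payload from one bridge and evaluate it."""
    with urllib.request.urlopen(JVB_STATS_URL) as resp:
        return check_stats(json.load(resp))
```

Running `poll()` against each bridge on a timer (cron, a systemd timer, or a Prometheus-style scraper) gives the per-bridge alerting without putting the media path through a proxy.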

Nonetheless, if you still want to get the media traffic to go through Envoy, it will be quite difficult. You would need to intercept the Jingle session-initiate and session-accept IQs in Prosody and replace the candidate IPs. Replacing the JVB one is quite easy, just replace it with the IP and port of your Envoy listener. But in the other direction, you need to know which ephemeral port Envoy will assign for that specific UDP flow, in order to replace the client’s candidate with the corresponding Envoy listener, but since the client hasn’t connected yet you don’t know what that port will be. You’d be better off using something that observes but does not proxy the traffic, so that you can leave the candidates unmolested.