Jitsi with CoTurn on 443:UDP

Hello,

In my first setup I used CoTurn with 443:TCP and it worked without any special settings. In my second case, to optimize performance, I switched from TCP to UDP and… surprise… it doesn’t work anymore.

On edge://webrtc-internals, I noticed that when I join a meeting with a third tab, the client loses the TURN information and there is no offer to CoTurn, although all the information was previously available for the P2P connection. The client only sends a STUN request to the loopback interface.
I hope someone has an idea. Do I need any special settings for CoTurn on UDP?

Thanks in advance!
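(For context, a minimal coturn setup for plain TURN on 443/UDP, using the shared-secret auth that Prosody’s external_services expects, might look like the sketch below. The domain and secret are placeholders; the option names are standard turnserver.conf options, and deployments behind NAT or a load balancer usually also need external-ip:)

```ini
# turnserver.conf -- minimal sketch, placeholder values
listening-port=443              # plain TURN (covers both UDP and TCP) on 443
realm=turn.example.de           # placeholder domain
use-auth-secret                 # ephemeral credentials (TURN REST API style)
static-auth-secret=xxxxxxxxxx   # must match Prosody's external_service_secret
fingerprint
# behind NAT / a load balancer, advertise the public address:
# external-ip=<public-ip>/<private-ip>
```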

There is no special handling between turn and turns candidates.

You don’t see the turn servers listed on top of the webrtc-internals page?

The turn servers are coming from here, what does this look like in your config?

In the P2P session I see the TURN server listed at the top of webrtc-internals, but after opening a third tab it is gone.

My config:

...
plugin_paths = { "/prosody-plugins/", "/prosody-plugins-custom" }

muc_mapper_domain_base = "meet.jitsi";
muc_mapper_domain_prefix = "muc";

http_default_host = "meet.jitsi"


external_service_secret = "xxxxxxxxxx";



external_services = {
    { type = "turn", host = "turn.xxxxxxxx.de", port = 443, transport = "udp", secret = true, ttl = 86400, algorithm = "turn" }
};






consider_bosh_secure = true;
consider_websocket_secure = true;



VirtualHost "meet.jitsi"
...

Hmm, turn is coming for both from Prosody. You don’t have turn listed in config.js, right?
You can open the network tab in the browser before connecting and inspect the messages one by one, checking whether you see the turn server coming from the server as a reply to the disco-info request.

If you mean the custom-config.js for the web pod: yes, there is no turn entry.
What do you mean by “coming for both from Prosody”?

I checked the disco-info request, but can’t find it. I only see the TURN information in the xmpp-websocket messages (between two clients). When the third person joins, there is a new xmpp-websocket connection with no TURN information. You will also see that there is no connection to the JVB after that.

Is it normal that the link changes like this…

P2P:
wss://meet.jitsixxxxxx.de/xmpp-websocket?room=wrongbananasstumblefortunately
3-Person-Conf.:
wss://meet.jitsixxxxxx.de/xmpp-websocket?room=wrongbananasstumblefortunately&previd=c6fc3043-7fb5-4e8d-a406-d8e49d0d0f56

Can you upload the console logs from the browser for the third person?

This is what I see on the prejoin screen of the third person:
Outgoing:

<iq from="c06931df-d726-4067-b3a6-c773e5a05f45@mydomain.com/ewiLOlhijxIq"
    id="012c8151-3f0a-44c0-9a3b-e2d087e9ef5d:sendIQ" to="mydomain.com" type="get" xmlns="jabber:client">
    <query xmlns="http://jabber.org/protocol/disco#info"/>
</iq>

Incoming:

<iq id='0447e9b5-494b-4413-bb54-beec28dbf77d:sendIQ' type='result' from='mydomain.com' xmlns='jabber:client'
    to='c06931df-d726-4067-b3a6-c773e5a05f45@mydomain.com/ewiLOlhijxIq'>
    <services xmlns='urn:xmpp:extdisco:2'>
        <service port='3478' host='mydomain.com' type='stun'/>
        <service restricted='1' transport='udp' username='1680700023' type='turn' port='3478'
                 password='some=' expires='2023-04-05T13:07:03Z' host='mydomain.com'/>
        <service restricted='1' transport='tcp' username='1680700023' type='turns' port='5349'
                 password='some=' expires='2023-04-05T13:07:03Z' host='mydomain.com'/>
    </services>
</iq>
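(A side note on how these entries are consumed: the `<service/>` elements above end up as RFC 7065 STUN/TURN URIs in the `iceServers` list of the browser’s peer connection. A rough sketch of that mapping — lib-jitsi-meet does the equivalent internally; the function name here is illustrative, not the real API:)

```javascript
// Sketch: map extdisco <service/> entries to STUN/TURN URIs (RFC 7065).
function serviceToIceUrl(svc) {
  // svc: { type, host, port, transport } taken from the disco#info reply
  const base = `${svc.type}:${svc.host}:${svc.port}`;
  // stun entries carry no transport; turn/turns append it as a URI query
  return svc.transport ? `${base}?transport=${svc.transport}` : base;
}

// The three services from the reply above:
const services = [
  { type: 'stun',  host: 'mydomain.com', port: 3478 },
  { type: 'turn',  host: 'mydomain.com', port: 3478, transport: 'udp' },
  { type: 'turns', host: 'mydomain.com', port: 5349, transport: 'tcp' },
];

console.log(services.map(serviceToIceUrl));
// [ 'stun:mydomain.com:3478',
//   'turn:mydomain.com:3478?transport=udp',
//   'turns:mydomain.com:5349?transport=tcp' ]
```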

Not sure why the disco info would be missing. Which versions of jitsi-meet and ljm do you run?

These messages also appear in my browser logs for the third participant on the prejoin page.

Here my logs…
jitsi_browser.log (124.5 KB)

“Missing” is the wrong word. Why would this info get lost?

docker-jitsi-meet version: stable-8218
And what is ljm?

lib-jitsi-meet. But jitsi-meet version is enough in this case, if you don’t have modifications.

There are some errors in the logs, but I don’t see any indication of why those requests would not be sent. They are always sent once you see Strophe connected.

Also, the jvb ice failure means you are missing the port forwarding to the bridge or the bridge is reporting a wrong public address, which is needed for the turnserver to work.
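(For the docker-jitsi-meet setup discussed here, the advertised address and the media port come from the .env file. A sketch with placeholder values, assuming a recent image where the variable is JVB_ADVERTISE_IPS:)

```ini
# docker-jitsi-meet .env -- sketch, placeholder values
# Public address the bridge puts into its ICE candidates
# (older images used DOCKER_HOST_ADDRESS instead):
JVB_ADVERTISE_IPS=203.0.113.10
# Media port; must be reachable as UDP from clients and from the turnserver:
JVB_PORT=10000
```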

You mean the public IP, which is set with the env JVB_ADVERTISE_IPS?
In my setup this is empty, and it works if I use TCP with CoTurn.
P.S. I changed the config and there is no improvement.

I hope I will find the right setting, or that someone here can share their configuration.

Thanks for your support!

In my further analysis I saw that the CoTurn pod transmits its link-local address and its internal pod IP,
e.g. in the “Allocate Request UDP” and “Binding Request” packets…
Any idea why?

Because the bridge is advertising it.
I think you can disable that with this:

Ok yeah, this config changed something. But I also get the link-local address, e.g. in the CoTurn logs, like this…
194: : session 015000000000000001: closed (2nd stage), user <1681473658> realm <turn.xxxxxxxxx.de> origin <>, local 0.0.0.0:3478, remote 169.254.xxx.x:4411, reason: allocation timeout

Isn’t that the private address of the client?
The TURN server is not advertising its addresses; what WebRTC gets when the peer connection is created is the TURN server’s DNS name from the config.

It was the IP of a load balancer, so I think it wasn’t the problem.

After analyzing Wireshark traces of the TCP and UDP cases I found some differences.

with tcp:

  • there are two CreatePermission messages. Src: LB, Dest: coturn
    1.) XOR-PEER-ADDRESS: videobridge-ip:10000 and a response CreatePermission Success
    2.) XOR-PEER-ADDRESS: <127.0.0.1:10000> and a response CreatePermission Error

So all fine and it works.

with udp:

  • there are also two CreatePermission messages. Src: LB, Dest: coturn
    1.) XOR-PEER-ADDRESS: client-ip:61333 and a response CreatePermission Success
    2.) XOR-PEER-ADDRESS: coturn-ip:50411 and a response CreatePermission Success

After that there are some more messages, one “Refresh Request” and one “Refresh Success” between LB and coturn, and then the communication is terminated without further information.

I hope you have an idea given this additional information.

Here is some complementary intel. I work on the same team as @JustITisso.

The only configuration change we apply within Jitsi is changing the external_services entry in the Prosody config from transport = "tcp" to transport = "udp".

external_services = {
   { type = "turn", host = "turn.xx.yy", port = 443, transport = "udp", secret = true, ttl = 86400, algorithm = "turn" }
}

In the working TCP case, the browser’s webrtc-internals shows a successfully connected ICE candidate pair between the CoTURN pod (protocol / candidate type “relay(tcp)”) and a JVB pod IP (“host”).

See (1) in below screenshot.

In the dysfunctional UDP case, webrtc-internals shows two scenarios:

  1. An empty iceServers[] array and a pairing attempt between the client and the JVB on port 10000/udp, which is blocked and cannot succeed. That’s why we have the CoTURN server.

See (2) in below screenshot.

  2. An iceServers[turn: ... ] array showing the correct CoTURN endpoint, and two unsuccessful pairing attempts between
  • The client IP and the CoTURN pod IP
  • The client IP and the link-local IP (the IP facing the CoTURN pod network) of the CoTURN LoadBalancer

See (3) in below screenshot

There is no pairing attempt that involves a JVB, as in the successful TCP scenario. This matches the observations in @JustITisso’s previous post: there are no “CreatePermission” messages towards a JVB.


-> Why would the JVB not be involved as an ICE candidate when we switch to UDP transport for turn?

Again, the only Jitsi config change is the “transport” in the Prosody external_services.

It’s pretty wicked, but maybe these insights trigger a new thought with someone.

Screenshots (IPs and domains are redacted):

What is the coturn config? By default the client filters the list and leaves only turns.
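(If that filtering is what drops the turn/udp entry, recent jitsi-meet versions expose a flag in config.js to keep TURN/UDP servers; whether it exists in your version is worth verifying against that version’s config.js:)

```javascript
// custom-config.js (web pod) -- sketch; verify the option exists in your
// jitsi-meet version before relying on it.
config.useTurnUdp = true; // keep turn:…?transport=udp entries instead of
                          // filtering down to turns-only
```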