Is AWS Global Accelerator also used on the UDP media path?

Hi, I noticed that AWS Global Accelerator is used, but that the server IP:10000 seems to be a different one, probably an instance directly. Is this intentional? Could UDP be routed / proxied through AWS Global Accelerator? Thanks

We use that only to land on the shard in the nearest region. UDP goes directly to the JVB's public address.

I have a Jitsi server in Vultr cloud in Frankfurt, three users in Romania, and myself in the UK. Would it make sense to proxy the UDP through a CDN / edge network like Cloudflare Spectrum or Google CDN, assuming that their network is more reliable and has direct peering with the involved ISPs and with Vultr?

Not sure how that works or how you could use it with the JVB, as it advertises its IP in the signalling…

That is true. One way I imagine it could work (with additional costs) is a reverse-proxy approach like Cloudflare Spectrum, which supports both TCP and UDP: configure the IP address advertised for the media-path UDP to be the AWS Global Accelerator or Cloudflare Spectrum address. That is an anycast IP (or an IP returned by Cloudflare's anycast DNS service), so it would be the one closest to the requesting client. If the discovery can only be configured with an IP, then only the anycast-IP idea works, and only with a provider that also offers UDP proxying. I tried AWS Global Accelerator and a Network Load Balancer for UDP 10000, but I did not find where to set the external origin server (my JVB).

So I think that with AWS servers, having 10000 exposed from the edge (the accelerator) would be a benefit, with the IP exposed by the JVB discovery configuration being the accelerator's static IP (one of its two static anycast IPs). I am not sure how that would affect region-based routing to a colocated JVB, but any single JVB IP exposed through discovery/config can be wrapped in the edge; and if the config supports a name rather than just an IP, then based on the server name the edge could know which JVB to direct the request to.

The only clear thing is that for a single-server setup like mine, on an AWS server (not in Vultr), it should work, I guess, to expose 10000 at the edge and configure one of the accelerator's two static anycast IPs in the config. That would let both me and my family in Romania traverse only one AS, hopefully a good one (all this is for fun; whether they are on WiFi or cable will matter much more)…

In practice we did some A/B testing a while ago with cross-region bridges per shard with Octo, so that in one conference people from different regions/continents would use their closest bridge. The stats we gathered showed that the benefit is minimal to none, so it is not worth the complicated setup…


Hi @damencho, I will test wrapping my Jitsi server in two layers of reverse proxy, for fun: the first is a server in another cloud (I am in Vultr; this TCP/UDP reverse proxy will be in AWS), the second exposes it from the edge.
I am wondering about the topic "Can the IP address advertised by http-bind for port 10000 be specified in server configs?", i.e. how is it possible to specify in the server configuration the IP of the reverse proxy that fronts the JVB, rather than the IP of the backend JVB? Is it possible? Thank you

Advanced section of
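For anyone looking for the concrete knob: the address the JVB advertises in its ICE candidates can typically be overridden with the ice4j NAT harvester mapping properties. A sketch, where both addresses are placeholders (the local bind address and the proxy/accelerator anycast IP); newer deployments configure the equivalent in `jvb.conf`:

```properties
# /etc/jitsi/videobridge/sip-communicator.properties (sketch)
# LOCAL is the address the JVB actually binds to;
# PUBLIC is what gets advertised in the ICE candidates --
# here, the accelerator anycast IP instead of the instance IP.
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.5
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.10
```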

Hi @damencho, the setup now has all three ports served from the accelerator, including the media UDP on 10000.

Since the configuration did not remove the origin server from the candidates, for the moment I rewrite the content in the nginx proxy in front of localhost:5280, so that the candidates are returned as one of the static anycast IP addresses that belong to the accelerator.
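For illustration, such a rewrite can be done with nginx's `sub_filter` on the proxied BOSH responses. A sketch, where the hostname and both IPs are placeholders (the JVB's real IP and the accelerator's anycast IP):

```nginx
server {
    listen 443 ssl;
    server_name meet.example.com;

    location /http-bind {
        proxy_pass http://127.0.0.1:5280/http-bind;
        # disable upstream compression so sub_filter can see the body
        proxy_set_header Accept-Encoding "";
        sub_filter_types text/xml application/xml application/json;
        sub_filter_once off;
        # replace the JVB's real IP in the candidates with the
        # accelerator's static anycast IP
        sub_filter "198.51.100.7" "203.0.113.10";
    }
}
```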

This way the traffic from all users goes through the AWS network as much as possible. It could end up the same as if they connected directly to the AWS instance hosting the Jitsi server, but it may well be that directing end users' traffic to the accelerator (well, not "connecting", in the case of UDP) makes it more certain that most of the journey happens inside the AWS network, leading to potentially better stability at congested weekend times, when everyone wants to talk.

I can’t wait to see how it goes tomorrow.

I wonder how that will work when you have more than one JVB.

I’m thinking of a possibility like this: an accelerator accepts multiple endpoints in multiple regions and forwards each packet to the closest endpoint, which could well already be the JVB hosting that conference (or participating in hosting it). Otherwise an Octo-like feature may be needed, where JVB instances are aware of which other instances exist, which conferences exist, and which instances host which conferences (this can be disseminated by gossip, e.g. with HashiCorp Serf). Any JVB receiving traffic that belongs to another JVB would forward it to a more competent one: the JVB hosting that conference, or one with better chances of knowing about it, such as an older node. After a little while the gossip stabilizes (information is disseminated everywhere) and the path would be edge, then local region, then optionally a forward to the hosting region. How does that sound? I am not familiar with Jitsi internals, thanks.

Btw, the quality was excellent in yesterday's call through the accelerator, though in all fairness the node was previously in Vultr, and through the two reverse proxies it was much better, with all users green and continuous video and audio streams; probably just moving the node to AWS would have brought 90% of the benefit. I also agree there is a disadvantage to routing traffic through intermediary JVB instances, even if that portion of the traffic is temporary or in the minority, but for many users it could improve the experience. Forwarding traffic is cheap, at least with nginx, and my guess is it can be implemented efficiently in Java too; Jitsi is probably already using non-blocking I/O with NIO selectors and so on, I would think?

Another approach is a routing ring that knows the conferences, the JVB instances, and their mappings: traffic from the edge would go into a layer that only routes to the right JVB (with indirection within that routing layer if needed, until a node knows the conference-to-JVB mapping), and the JVBs would just keep doing their current thing.
It does seem a stretch, because I would guess that in most cases normal routing jumps into the AWS network pretty quickly, and going through the edge explicitly would rarely provide additional help. In my tests (just 10 s tests), going to the AWS instance directly was at most 60 ms, and via the edge 50 ms. For longer tests, and when the network is crowded, this difference in the upper percentiles would grow, so I think it would be worth it, especially for 8x8. In that case the path would be: edge (accelerator), then a routing layer (a geo-distributed gossip cluster, e.g. HashiCorp Serf, disseminating the JVB-conference pairs eventually-consistently, in a CRDT), then the matched JVB(s). I am not sure how it currently works: do the browsers get candidates, and the first user opening a conference has their browser pick a close JVB, with the other users then getting the JVB that already has the conference on it?
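To make the routing-layer idea concrete, here is a minimal sketch of the forwarding decision (all names are hypothetical, and the gossip transport, e.g. Serf events, is abstracted away): each node keeps an eventually-consistent conference-to-JVB map, and a packet for an unknown conference is handed off to an older peer, which is more likely to have seen the mapping already.

```python
# Sketch: eventually-consistent conference -> JVB routing with
# indirection to older peers for unknown conferences.
from dataclasses import dataclass, field

@dataclass
class RoutingNode:
    name: str
    started_at: float                       # used to pick "older" fallback peers
    conf_to_jvb: dict = field(default_factory=dict)
    peers: list = field(default_factory=list)

    def learn(self, conference: str, jvb: str) -> None:
        """Apply a gossip update (last-writer-wins, for simplicity)."""
        self.conf_to_jvb[conference] = jvb

    def route(self, conference: str) -> str:
        """Return where to forward a media packet for this conference."""
        if conference in self.conf_to_jvb:
            return self.conf_to_jvb[conference]      # direct to the hosting JVB
        older = [p for p in self.peers if p.started_at < self.started_at]
        if older:
            # indirection: hand off to the oldest peer, which has
            # had the most time to accumulate gossip
            return min(older, key=lambda p: p.started_at).name
        return "drop"                                # nobody knows; give up

# usage: two routing nodes; only the older one knows the mapping yet
a = RoutingNode("router-a", started_at=100.0)
b = RoutingNode("router-b", started_at=200.0, peers=[a])
a.learn("family-call", "jvb1")
print(b.route("family-call"))   # -> router-a (indirection to the older node)
print(a.route("family-call"))   # -> jvb1
```

Once gossip reaches `router-b` (i.e. `b.learn(...)` is called), it would route directly, so the extra hop is only paid while the mapping propagates.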
Latencies to the AWS instance (`wrk http://aws_instance…`, a small 200 reply page from nginx; 21 min, 1 connection, network not congested), direct vs. through the edge:

| Percentile | Direct (ms) | Through edge (ms) |
|------------|-------------|-------------------|
| 99.9%      | 48.096      | 47.975            |
| 99.99%     | 153.266     | 54.735            |
| 99.999%    | 261.558     | 58.611            |
Since at 30 fps a frame is sent about every 33 ms, a difference in the very high percentiles will probably not matter too much… but when the network is more crowded, when there are intermediate AS hops, or when the user is on mobile and any additional latency jitter is felt, it could matter of course.

To edge: min/max 9 ms/29 ms (160 pings) => interesting that in the upper percentiles, half of the latency is spent reaching the edge and half is from the edge (London) to the instance (Frankfurt). Actually the same holds in the lower percentiles: 9 ms to the edge, 20.5 ms for the nginx response from the instance through the edge, and that includes the time for nginx to flush and for the client to consume that small page, so the London to Frankfurt leg could be around 10 to 20 ms in all.
To AWS instance: min/max 27 ms/54 ms (160 pings)

Btw, can the audio encoding be perturbed in any way by the fact that the JVB sees the sender of the UDP as much closer than it actually is? Or is only the computed client bandwidth (sender/receiver) taken into consideration in audio encoding and all that sort of thing? I had some audio distortions (the video was good at all times, though), and I am wondering. In fairness, the same distortions occurred before, when users connected directly to the actual JVB, so it is probably just the jitter of the end users' local WiFi / 4G connections, I would guess, and the quality of their connection / provider.

A simpler solution (assuming each conference is hosted on exactly one JVB) is to map each jvb:10000 to accelerator:anotherPort, and have http-bind/conference or xmpp-websocket return accelerator:remappedFwPort, so jvb1:10000 would map to one accelerator port, jvb2:10000 to another, and so on.
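A tiny sketch of the remap table the signalling side could consult when building candidates (the IP, port numbers, and the `advertised_candidate` helper are all illustrative, not an existing Jitsi API):

```python
# Hypothetical remap table: each JVB's media port is exposed as a
# distinct port on the accelerator's single anycast IP.
ACCELERATOR_IP = "203.0.113.10"
PORT_MAP = {"jvb1": 10001, "jvb2": 10002}

def advertised_candidate(jvb: str) -> str:
    """Address to advertise instead of the JVB's real ip:10000."""
    return f"{ACCELERATOR_IP}:{PORT_MAP[jvb]}"

print(advertised_candidate("jvb1"))  # -> 203.0.113.10:10001
```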

If the accelerator does not natively support port remapping (just forwarding), then two simple solutions are:

  • bind jvb1 on 10001, jvb2 on 10002, etc., or
  • forward locally, e.g. 10002 to 10000, with an nginx stream `listen ... udp` directive; it is extremely cheap on CPU.
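The local forward could look like this (a sketch; the nginx stream module must be available, and the ports are the illustrative ones above):

```nginx
# cheap local UDP remap: packets arriving on 10002 go to the JVB on 10000
stream {
    server {
        listen 10002 udp;
        proxy_pass 127.0.0.1:10000;
        proxy_timeout 30s;   # how long an idle UDP "session" is kept around
    }
}
```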

Of course some development is needed to automatically provision the new ports on the accelerator, have the mapping ready for http-bind or similar, do the local port forwarding, and so on. I would be really happy to contribute to this, because the quality improvement is amazing after two such calls with my family, and at the moment I have a lot of free time.

In fact, at the upper latencies, the difference between the latency proxied through AWS Global Accelerator and the latency direct to the AWS instance is in the hundreds of milliseconds, which translates into end-to-end latency differences of up to a second…