[jitsi-dev] MCU performances/scalability and beyond


#1

Hi,

Thanks to your help I have been able to remotely install a Jitsi Meet + Videobridge instance. I have also put together a script to automate the long installation process (https://gist.github.com/francoisTemasys/a17f5874bf104f0a2684).

Now I am interested in testing some parameters of the MCU:
- CPU performance on the client side: should I expect a gain when using the MCU?
- CPU/memory performance on the server side: in your experience, what is the maximum number of clients the server can handle at the same time, and under what scenario?
- Scalability of the MCU: if you had to handle 1,000,000 users, I believe you would distribute the load across more than one MCU, yet I don't see this scenario considered in the documentation. What would your approach be in this case?
- What is your current evaluation of the MCU's stability?

Best regards,


--
Francois From Temasys


#2

Hi all,

I would really appreciate a dev's point of view on these questions.

Thanks


--
Francois From Temasys


#3

Hey Francois,

Hi all,

I would really appreciate a dev's point of view on these questions.

Thanks

Hi,
Now I am interested in testing some parameters of the MCU:

So it's probably worth mentioning from the start that Jitsi Videobridge is NOT an MCU. It is an SFU (Selective Forwarding Unit). This is *very* important because it makes all the difference in terms of scalability.

- CPU performance on the client side: should I expect a gain when using the MCU?

A gain compared to what?

Compared to full mesh WebRTC conferences: Yes. In those scenarios the browser would typically create one encoding per receiver and this would be quite heavy on the CPU. Given the full mesh architecture though, you'd probably hit an upstream bandwidth limitation much earlier than a CPU one.
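For example, at a nominal 1 Mbit/s per encoding, a six-person full-mesh call already requires 5 Mbit/s of upstream bandwidth from every participant, whereas with an SFU each participant still uploads a single 1 Mbit/s stream.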

Compared to a regular MCU that mixes everything and sends it back to you as a single stream: No. But the small client-side CPU saving of an MCU (compared to an SFU) comes at the expense of added latency, lower quality, *significantly* higher server-side cost, and degraded UX (it is not possible to switch the view to a specific participant).

- CPU/memory performance on the server side: in your experience, what is the maximum number of clients the server can handle at the same time, and under what scenario?

It is very hard to say. It depends on bandwidth, topology (how many participants send and how many receive), stream quality, server configuration and many other factors. But to give you some perspective, we recently got the following data point from a user:

In a scenario with one sender and 180 receivers (i.e. the bridge decrypts one stream and then re-encrypts it 180 times), it used about 400 Mbit/s of bandwidth on the server, and on a quad-core i7 the bridge was taking around 30% CPU.
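For scale, that works out to roughly 400 / 180 ≈ 2.2 Mbit/s per forwarded copy of the stream, i.e. a single reasonably high-quality video stream relayed once per receiver.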

- Scalability of the MCU: if you had to handle 1,000,000 users, I believe you would distribute the load across more than one MCU, yet I don't see this scenario considered in the documentation. What would your approach be in this case?

I assume these people are not all in the same conference (which is a scenario we haven't given much thought to right now), but if you simply mean 1,000,000 users in different conferences then that kind of scalability would depend on the business logic fronting Jitsi Videobridge. You can easily run a fleet of a thousand JVB instances; you simply have to make sure that you take load into account before choosing which one to create your next conference on.
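To make that concrete, here is a minimal, purely hypothetical sketch of load-aware bridge selection in the signalling layer; the bridge list, the load figures and the pickBridge helper are illustrative names and not part of Jitsi Videobridge or Jitsi Meet (in practice the load metric could come from the bridges' statistics or your own monitoring):

    // Hypothetical sketch: pick the least-loaded bridge for the next conference.
    // None of these names come from the Jitsi code base.
    var bridges = [
      { jid: 'jvb1.example.com', load: 0.42 },
      { jid: 'jvb2.example.com', load: 0.17 },
      { jid: 'jvb3.example.com', load: 0.73 }
    ];

    function pickBridge(bridgeList) {
      // Return the component JID of the bridge reporting the lowest load.
      return bridgeList.reduce(function (best, b) {
        return b.load < best.load ? b : best;
      }).jid;
    }

    // The chosen JID would then be handed to whatever creates the conference
    // (e.g. the focus) instead of a hardcoded value.
    console.log('Next conference goes to ' + pickBridge(bridges));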

- What is your current evaluation of the MCU's stability?

Jitsi Videobridge: *Very* stable.
Jitsi Meet: rather stable with some quirks now and then (but nothing that wouldn't go away after a page reload).

Hope this answers your questions.

Cheers,
Emil


--
https://jitsi.org


#4

Thanks for the answers.

As I'm not an XMPP expert: when you say that it is possible to run thousands of Videobridge instances, they would have to be referenced by the XMPP server, wouldn't they? On a Prosody server (or another XMPP server), is it possible to add XMPP components on the fly? Who would be the one choosing which JVB to use? Jitsi Meet (or another web server / signalling application)?

If I want to start to tweak Jitsi Meet, what are the main files/classes I should focus on? For example, where is the connection to the XMPP server/JVB made? How can I decide how the JVB is going to be used (webinar scenario or conference scenario)?

Best regards,
Francois


--
Francois From Temasys


#5

Thanks for the answers.

As I'm not an XMPP expert: when you say that it is possible to run thousands of Videobridge instances, they would have to be referenced by the XMPP server, wouldn't they?

Yeah. Better have an XMPP server that scales to the number of stanzas then, or run multiple XMPP servers.

On a Prosody server (or another XMPP server), is it possible to add XMPP components on the fly?

I think that can be done with the telnet admin console. Possibly changing the config and triggering a reload is sufficient.

Note that once you have configured components, you can dynamically connect them.
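For reference, the static part of that setup is just one Component entry per bridge instance in the Prosody configuration; a sketch with placeholder hostnames and secrets (after editing, reloading the config should be enough rather than a full restart):

    -- in prosody.cfg.lua; hostnames and secrets are placeholders
    Component "jvb1.example.com"
        component_secret = "secret1"

    Component "jvb2.example.com"
        component_secret = "secret2"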

Who would be the one choosing which JVB to use? Jitsi Meet (or another web server / signalling application)?

Currently it's hardcoded: https://github.com/jitsi/jitsi-meet/blob/master/config.js#L5
You'd need to use your own logic to pass a certain bridge JID to the focus constructor.
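For context, config.js looked roughly like the sketch below at the time (exact keys may differ, and the example.com hostnames are placeholders), with the bridge component JID hardcoded in the hosts section:

    var config = {
        hosts: {
            domain: 'example.com',
            muc: 'conference.example.com',
            bridge: 'jitsi-videobridge.example.com'  // hardcoded bridge JID
        },
        bosh: '//example.com/http-bind'
    };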

If I want to start to tweak Jitsi Meet, what are the main files/classes I should focus on?

app.js

For example, where is the connection to the XMPP server

https://github.com/jitsi/jitsi-meet/blob/master/app.js#L53

/JVB made?

https://github.com/jitsi/jitsi-meet/blob/master/app.js#L631 (mostly) -- currently happens when you join a MUC.

Note that I don't think a client-side focus is such a good idea (even though it works pretty well) if you want real scalability.

How can I decide how the JVB is going to be used
(webinar scenario or conference scenario)?

I wish I had more time to work on https://github.com/jitsi/jitsi-meet/issues/6 :-/



#6

Hey Francois,

Thanks for the answers.

As I'm not an XMPP expert: when you say that it is possible to run thousands of Videobridge instances, they would have to be referenced by the XMPP server, wouldn't they?

That would be one way of doing it (for the record, that specific way wouldn't work right now because we always use the same jitsi-videobridge subdomain, but this should be improved today or tomorrow). They could also run on separate XMPP servers. It depends on how you want to build your architecture.

On a Prosody server (or another XMPP server), is it possible to add XMPP components on the fly?

I think it takes a "reload" but I don't believe it's necessary to restart it.

Who would be the one choosing which JVB to use?

The focus agent. The one that creates the conferences. In Jitsi Meet this is the first person to join a conference but that's very application specific. We expect that in most cases the focus would be a server-side entity.

Jitsi Meet (or another web server / signalling application)?

Jitsi Meet is just one application and only one way of using the bridge. You are by no means constrained to using it the same way. As mentioned, Jitsi Meet has made the choice to put the focus features (the conference session control) in the client. We will probably change this in the near future though and have that run on the server. No changes would be required in the bridge for this to happen.

If I want to start to tweak Jitsi Meet, what are the main files/classes I should focus on?

Well that depends on what you want to do. Fortunately there aren't many files right now so it's easy to find stuff.

For example, where is the connection to the XMPP server/JVB made?

These are two separate things. The connection to the XMPP server is handled by Strophe; the connection to the JVB is handled by the COLIBRI code.

How can I decide how the JVB is going to be used
(webinar scenario or conference scenario)?

I am not sure I understand the question. You decide whether you are doing a webinar or a conference in the application that is using the bridge. There's not much difference in how you are using the bridge though.

Emil


--
https://jitsi.org


#7

Hi,

Hey Francois,

As I'm not an XMPP expert: when you say that it is possible to run thousands of Videobridge instances, they would have to be referenced by the XMPP server, wouldn't they?

That would be one way of doing it (for the record, that specific way wouldn't work right now because we always use the same jitsi-videobridge subdomain, but this should be improved today or tomorrow).

Just wanted to mention that it's now possible to specify the JVB subdomain[1] via the "--subdomain" command-line argument. You can connect multiple instances to the same server under different subdomains. Note that each one must be configured in the Prosody config before it will be able to connect; otherwise a "host-unknown" error is usually reported (which might be a bit confusing).
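As a sketch, starting two instances against the same server might look like the lines below, assuming the jvb.sh launcher and its --host/--domain/--port/--secret arguments, with placeholder values throughout:

    # Placeholder values; each subdomain needs a matching component entry
    # (and secret) in the Prosody configuration.
    ./jvb.sh --host=localhost --domain=example.com --port=5347 --secret=secret1 --subdomain=jvb1
    ./jvb.sh --host=localhost --domain=example.com --port=5347 --secret=secret2 --subdomain=jvb2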

Regards,
Pawel

[1]: https://github.com/jitsi/jitsi-videobridge/commit/9500a2a1adf4a5005edbc700f00885ae0df044c1



#8

Just wanted to mention that it's now possible to specify the JVB subdomain[1] via the "--subdomain" command-line argument. You can connect

awesome!

multiple instances to the same server under different subdomains. Note that each one must be configured in the Prosody config before it will be able to connect; otherwise a "host-unknown" error is usually reported (which might be a bit confusing).

It would probably be helpful if the bridge clearly stated which IP address/port it attempts to connect to and which component domain it uses.