Video Bitrate Maxes Out at 1800kbps on a P2P Call

Setup:
Latest Jitsi Meet, self-hosted on a DO droplet with a dedicated IP.
resolution: 1080,
disableSimulcast: true,
startAudioOnly: true,
Constraints are set to prefer 1080p, but they don’t work on mobile yet anyway.

Conditions:
Two participants join a call:
A) Chrome browser on a MacBook Pro connected via 802.11ac, placed right next to the access point. This client doesn’t send any video, only receives.
B) Latest Jitsi Meet from the Play Store installed on a smartphone (tested with HTC One M7, Redmi 4A, Huawei Y7), connected either via LTE (10 Mbps tested uplink) or via 802.11n (50 Mbps tested uplink). This client sends video, but doesn’t receive any.

Expected behavior:
The Jitsi Meet client on Android utilizes all available uplink bandwidth to provide the best possible video quality the hardware supports.

Actual behavior:
When the camera on the HTC One M7 smartphone is turned on, streaming starts at ≈2600kbps, stays at that level for approximately ten seconds, then drops to ≈1800kbps (720x408@30fps) and never increases after that. The two other smartphones perform even worse, with the video bitrate sitting at ≈800kbps.

Question:
What am I doing wrong, and what should I change to get an 8 or 10 Mbit/s video bitrate (i.e. decent 1080p)?

Thanks!

P.S. I have found a reference to const MAX_TARGET_BITRATE = 2500; (ConnectionQuality.js) while researching older topics on here. Is this constant related to my issue? I don’t really understand what it does from reading the comments in the source code.

If you are testing using mobile apps the reduction in bitrate could be due to CPU overuse detection. It would be interesting to test using the hardware encoder (we currently use the software one so we can use simulcast and avoid crashes on crappy hardware encoders) but we don’t have a good way to test that as you need to recompile the app yourself. If you’re up for it, I can provide some guidance on the necessary change to enable hw encoding.

That makes sense, because the devices I tested are all either crappy or outdated and thus crappy.

Can it cause underperformance on more powerful devices?

In production I’m planning to stream 1080p continuously for 5 hours via Jitsi using a more capable smartphone. Would a more powerful device like Galaxy S8/9/10 or iPhone 8/10/11 be able to cope with this task?

If it helps advance the Jitsi project, I can assist.

Using the software encoder will always have a CPU impact.

Yeah, a more powerful device may have enough headroom for the CPU overuse detection not to kick in.

Thanks! So here: https://github.com/jitsi/jitsi-meet/blob/3e40bb19cdb79d685fd81cf494f42a5c3f5916ba/android/sdk/src/main/java/org/jitsi/meet/sdk/ReactInstanceManagerHolder.java#L101 you’d need to add a new line like so: videoEncoderFactory = new DefaultVideoEncoderFactory(eglContext);

Then rebuild the app. You can uncomment this line https://github.com/jitsi/jitsi-meet/blob/3e40bb19cdb79d685fd81cf494f42a5c3f5916ba/android/app/build.gradle#L51 to make a release build with a test signature.

Ok, I will recompile the app, but I’ve just found something that might indicate the CPU is not the bottleneck here after all; I’d like to hear your thoughts on this.

First, I’ve tested the same arrangement with a Galaxy S10 connected over WiFi, and the result is the same: we peaked slightly above 2 Mbps, then the bitrate decreased to ≈1700kbps.

More importantly, here are some links mentioning that by default the platforms themselves limit sending bandwidth to the levels I observed, and that the way to increase this limit is by modifying the SDP (as described in link #1):

I’d be grateful if you could comment on these questions in particular:

  1. Can you confirm or deny whether this limit is addressed in the Jitsi SDK? If not, can it be addressed at all in the case of mobile clients, or is it a limit at the OS level that would require patching the kernel to circumvent?
  2. Further, if your objective were to do one-way-video P2P calls in full HD (streaming video from mobile to a web client, audio both ways), what would you do to make it work?

Thanks

This is literally my first time doing anything with Java or Android, so I’m poking around blindly, but here’s what I get:
error: cannot find symbol class DefaultVideoEncoderFactory


Can anyone help me out here?

Good points, I had forgotten about the 2.5 Mbps limit WebRTC seemed to impose. I thought they had removed it, since otherwise it wouldn’t be possible to do 4K, for example, and I have tested 4K successfully (albeit on desktop only).

Not at the moment.

It could be addressed, I think; it’s not a limitation in the OS but (used to be?) a limitation in WebRTC itself.

You need to import it. Click on the red word and a red lightbulb will show up, then click on it and it will offer to import the symbol.

Thanks for your efforts here!

Yes, I tried importing it, but what I get is:
error: incompatible types: Context cannot be converted to VideoEncoderFactory

Hm, but still, ConnectionQuality.js in lib-jitsi-meet seems to limit the bitrate to < 2500? Or is it something else?

I don’t even think it’s accurate to say “used to be”. It seems “SDP munging” (or RTCLocalSdpModification) is widely used to arbitrarily override the min/max bitrate defaults that are set by client platforms (e.g. Chromium with GetMaxDefaultVideoBitrateKbps*). The override value can be set arbitrarily, from what I read*, so it was never really a hard limitation, would you agree?
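
For illustration, here is roughly what I understand such a munge to look like on the Android side, using org.webrtc.SessionDescription. Treat it as an unverified sketch: the helper class name, the idea of injecting a b=AS line, and the 8000 kbps figure below are my own reading of those links, not anything taken from the Jitsi codebase.

```java
import org.webrtc.SessionDescription;

// Hypothetical helper (not part of any Jitsi SDK): injects a bandwidth hint
// into the video media section of an SDP blob before the description is
// applied with setLocalDescription()/setRemoteDescription().
public final class SdpBitrateMunger {

    private SdpBitrateMunger() {}

    // Returns a copy of the description with "b=AS:<maxKbps>" added to the
    // video section; b=AS is expressed in kbps.
    public static SessionDescription raiseVideoBitrateCap(SessionDescription sdp,
                                                          int maxKbps) {
        String[] lines = sdp.description.split("\r\n");
        StringBuilder out = new StringBuilder();
        boolean inVideoSection = false;

        for (String line : lines) {
            out.append(line).append("\r\n");

            // Track which media section we are currently in.
            if (line.startsWith("m=")) {
                inVideoSection = line.startsWith("m=video");
            }

            // Per RFC 4566 the b= line belongs after the c= line of the
            // media section. A pre-existing b= line is not handled here.
            if (inVideoSection && line.startsWith("c=")) {
                out.append("b=AS:").append(maxKbps).append("\r\n");
            }
        }

        return new SessionDescription(sdp.type, out.toString());
    }
}
```

Something like SdpBitrateMunger.raiseVideoBitrateCap(desc, 8000) applied to the description before it is set is what I have in mind, but which of the two descriptions (local or remote) actually needs it, and whether b=AS or the x-google-max-bitrate fmtp parameter is what Chromium’s sender honours, is exactly the part I’m unsure about.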

Btw, have you ever seen a stable bitrate of > 2000kbps (per stream) being sent from a mobile Jitsi client?

try this: new DefaultVideoEncoderFactory(eglContext, true, false);
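
For context, the change would look roughly like the sketch below. The class and method names are just placeholders of mine, and in the real SDK the EGL context should come from wherever WebRTC is already initialized rather than from a fresh EglBase as here.

```java
import org.webrtc.DefaultVideoEncoderFactory;
import org.webrtc.EglBase;
import org.webrtc.VideoEncoderFactory;

final class HwEncoderSketch {

    // DefaultVideoEncoderFactory combines the hardware and software encoder
    // factories, preferring hardware codecs where available.
    static VideoEncoderFactory createEncoderFactory() {
        // Illustrative only: reuse the SDK's shared EGL context in real code
        // instead of creating a new one here.
        EglBase.Context eglContext = EglBase.create().getEglBaseContext();

        // Second argument: enableIntelVp8Encoder, third: enableH264HighProfile,
        // matching the constructor call suggested above.
        return new DefaultVideoEncoderFactory(eglContext, true, false);
    }
}
```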

We don’t use that value to munge the SDP

Ah, I thought that maybe they had lifted the limit in https://chromium.googlesource.com/external/webrtc/+/45c8b8940042bd2574c39920804ade8343cefdba/webrtc/media/engine/webrtcvideoengine2.cc#255, but it seems they haven’t.

We are not currently doing anything (that I’m aware of) to lift that limit in Jitsi Meet, so chances are you will reach the 2.5 Mbps cap and that will be it.

We might need to do internal adjustments based on the target resolution.

Looks correct now. I pushed “Build APK” and installed it on the device manually after uninstalling the Jitsi app from the Play Store. But I’m clearly doing something wrong somewhere else. I tried with and without the test signature.

Let’s get you a release build. Uncomment this line: https://github.com/jitsi/jitsi-meet/blob/86130c14784709a2cfe793ce4a434674db328613/android/app/build.gradle#L51

Then in the Build Variants tab in Android Studio select “release” for the “app” target. Rebuild.

Ok, that helped, thanks. So I conducted some dumb manual testing and will describe what I saw.

I did multiple calls with the HTC One M7 and a MacBook Pro (tried Chrome, latest Microsoft Edge and Firefox). I had both devices on the same 802.11ac hotspot, in line of sight across the room.

TLDR: Weird shit; most tests maxed out at 1750kbps, two tests got a stable 2500kbps, one test got stuck at 2000kbps. Diagonal artifacts sometimes appeared when I changed video orientation, and sometimes arbitrarily, independent of what I did. Most of the time they didn’t appear at all.

The first test was the most interesting one. The device was charging, I turned on the video in horizontal orientation, and the initial bitrate was 500-800kbps; it stayed like that for maybe 30 or 45 seconds. The connection indicator even switched to yellow for some time. Then I rotated the device and got the artifact screen below, which disappeared after 1-2 seconds.

Meanwhile the bitrate shot up to 1750kbps.


I rotated the phone for some time and it triggered the effect several more times, although there wasn’t a 100% correlation between the picture deterioration and the orientation switches. Then these artifacts stopped appearing altogether.

I switched to the environment-facing camera and at that point noticed the bitrate going up to ≈2500kbps, where it stayed for 5-7 minutes, until I stopped the conference. That said, even at that bitrate I never got above 960x544, even though my config.js declares resolution: 1080.


Then I did three more calls. I wasn’t able to trigger the artifact screen again. The bitrate stayed at 1750kbps every time.

I thought maybe the CPU got hot and throttled. The device did get hot, but not nearly as hot as with software encoding, and it took much longer to warm up. I disconnected the charging cable, cooled the CPU, then re-joined the conference (on battery power), but only saw 1750kbps again. The device crashed when I changed the orientation, but I’m not sure this can be 100% attributed to encoding, because it runs an unofficial LineageOS 16 (Android 9 port) on a 2013 device, which may have contributed to the instability.

After the crash I created another conference. This time the orientation change caused the artifact screen to appear, but only once, not systematically. Then after a few minutes I saw the bitrate go up to ≈2000kbps, and it stayed there consistently.

I disconnected after 5 minutes, and when I rejoined I saw the video parameters at 2500kbps, 1440x816@16fps. The battery got fully discharged after 20 minutes, and the device switched off abruptly.

I started charging it, turned it on, and joined the conference once again. This time the diagonal artifact lines were present almost all the time; I only saw a normal image for short periods of 3-5 seconds.

After 45 minutes I did another test and once again got a lot of artifact lines. I decided to test whether the artifacts only appear when the device is charging, so I disconnected the power cable and re-connected to the room. This time there were no artifacts, but a higher bitrate of 2500kbps. I connected the power cable back and re-joined: still no artifacts, and the bitrate dropped back to 1750kbps.

I can’t find any consistent pattern in what I saw. I also tried joining the room in varying orders (mobile client being first or second to join), but that didn’t correlate with anything either.

One thing I noticed for the first time is that the video quality of the preview differs depending on whether you’re alone in the room or joined by someone; the latter is noticeably degraded, as shown in the GIF below. Is this ok?

This is interesting. My guess is that the de-noise filter is only enabled once the stream starts being encoded, and it looks distorted with the image you use because of its patterns. I don’t see the effect with more typical input:

Not only does it look distorted, it is actually distorted. See the pic below: these are neighboring fragments of the two screenshots glued together and upscaled. Original screenshot files here: https://imgur.com/a/AlA20dO

Yes, but if it affects the quality of the outgoing stream in some way it’s still a serious shortcoming, I would argue.

Thanks for the thorough testing!

From your findings (and some prior experience we have) here is what I can tell you:

  • Artifacts are often caused by not-so-great HW encoders; am I correct to assume they are gone if you switch back to the software encoder?

  • We need to do more testing to see if we can push 1080p on mobile, at least when in P2P mode. We may need SDP munging to accommodate that.

This was a really useful exercise, thank you!

Correct, with software there were no artifacts.

But with software encoding my old phone gets burning hot in just under 10 minutes, which isn’t ideal either: I literally can’t hold the phone in my hand, and I’m sure that isn’t great for its internals. Maybe the solution is a manual switch in the app’s settings rather than a one-size-fits-all approach.

You mean there might be other potential bottlenecks in Jitsi’s architecture that could prevent it from outputting a 5, 8 or 10 Mbps bitrate, even if the SDP is munged?

P.S. I’m considering posting a feature request to the “Paid Work” subforum. What would your estimate be for the amount of work required here, i.e. SDP munging and whatever optimization is needed beyond it?

UPD: Posted the bounty, but I would still appreciate a ballpark estimate on this.

Yeah, makes sense. The problem is that the app needs to be restarted for the change to work. No big deal but something to consider.

Right now I don’t think there is anything else preventing it, no.

I think the initial assessment could take as little as a couple of days, and the SDP munging and testing another 3 or so. Now, I cannot take a bounty because I am employed full time to work on Jitsi. I’ll try to see if we can spend some time on this in the short term though.

Sure, I understand that.

Thank you very much, Saúl!

I’ve just spoken to someone else about this whole issue, and I was told that the bitrate limiting only happens on the transmitting side, and that since SDP munging would be an ugly crutch, I’d be better off just removing the hardcoded upper limit from the WebRTC implementation that react-native-webrtc imports and recompiling the whole dependency chain.

BTW, that way I could also hardcode my preferred constraints (height & frameRate) into react-native-webrtc, since the ones from config.js aren’t supported yet.

Do you think it’ll work? If so, may I ask you for guidance along the way?

As I understand it, the line below is the one responsible for my inability to utilize the full uplink bandwidth?

IMHO that is not a great idea, or at least not a future-proof one. You’d have to keep patching and rebuilding WebRTC yourself ad aeternum, because I won’t apply that patch to the WebRTC tree we use to build react-native-webrtc.

As ugly as SDP munging is, it is the way to have WebRTC do some things there is no API for.

@jallamsetty do you know what SDP incantation we can use to raise this cap? (Note that on RN we don’t have RTPSender yet.)