Virtual Background on Mobile

Dear Jitsi Team,

We loved the Jitsi openSUSE keynote video on YouTube. Thanks a lot for sharing it!

I have a question about the mobile virtual backgrounds you announced in the video.

We have a Jitsi-based product, and adding mobile virtual backgrounds became a requirement for us about 2 months ago. In the keynote, you mention that the mobile virtual background changes are being built in the WebRTC layer. Our team has also made some WebRTC code changes to make selfie segmentation work on iOS. Right now we are not yet able to place an image in the background, but we can see the person segmented out against a blue background, using the MediaPipe Selfie Segmentation model.
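To make the current state concrete, here is a minimal sketch of that kind of mask compositing, assuming the segmentation model has already produced a person mask (white where the person is); the Core Image approach and the helper name are illustrative, not our exact code:

```swift
import CoreImage

// Illustrative only: composite the camera frame over a solid blue background
// using a person mask produced by the segmentation model (white = person).
func compositeOnBlue(frame: CIImage, mask: CIImage, context: CIContext) -> CGImage? {
    // Solid blue image cropped to the frame size.
    let blue = CIImage(color: CIColor(red: 0, green: 0, blue: 1)).cropped(to: frame.extent)

    // CIBlendWithMask keeps the input image where the mask is white
    // and the background image where the mask is black.
    guard let blend = CIFilter(name: "CIBlendWithMask") else { return nil }
    blend.setValue(frame, forKey: kCIInputImageKey)
    blend.setValue(blue, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)

    guard let output = blend.outputImage else { return nil }
    return context.createCGImage(output, from: frame.extent)
}
```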

We estimated that maintaining a separate WebRTC codebase would be too much work for us, so we moved the manipulations to the react-native-webrtc level.
With these changes to react-native-webrtc, we can again run selfie segmentation in our iOS client app.

Adding an image to the background or implementing blur is our next step.
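For the blur case, one possible approach (again just a sketch, not our final implementation) is to use a blurred copy of the frame as the background image in the same mask blend:

```swift
import CoreImage

// Illustrative only: a blurred copy of the frame, usable as the background
// image in the same mask blend as above. The blur radius is arbitrary.
func blurredBackground(for frame: CIImage) -> CIImage {
    frame
        .clampedToExtent()                  // avoid darkened edges from the blur
        .applyingGaussianBlur(sigma: 12)    // plain Gaussian blur of the whole frame
        .cropped(to: frame.extent)
}
```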

The remaining steps are bridging these changes to the Jitsi Meet layer and controlling the background on/off behavior from there, modifying our build script to integrate the changes, and doing the same for Android.

My question is: could you please share your timeline for bringing mobile virtual backgrounds? Some of our effort may become unnecessary once you ship your solution. Given the current situation, do you think it is better to collaborate, perhaps by working on a commit to react-native-webrtc with our changes, or should we just wait for your solution if it is coming soon? We would rather focus on another contribution to avoid duplicated work if this project is already well ahead of us.

Thanks a lot in advance for your advice.
Thank you for the great product!

@yavuz @saghul

Hi,

We would like to share the screenshots of the demo we performed with @Sidal. We would be thrilled to contribute as we follow the development of this great project.

Thanks.

This is really nice!

As we mentioned, the virtual backgrounds work is part of a Google Summer of Code student project, so it's hard to predict the outcome.

Our intention is not to modify WebRTC but to implement a custom capturer, which would live in RN WebRTC and would pull frames from the camera, pass them through a transforming plugin, and then let them through.

If you have also gone in this direction and are willing to share your current progress, that would likely help our student quite a bit.

We implemented an interceptor between the video capturer and the video source. Video frames are gathered from the video capturer and passed to the Selfie Segmentator framework. After processing is completed there, they are passed back to the video source.
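To make the idea concrete, here is a minimal sketch of such an interceptor, using the standard WebRTC iOS objects (RTCVideoCapturer, RTCVideoSource, RTCVideoCapturerDelegate); the processor protocol and the class name are placeholders for the segmentation step, not our actual code:

```swift
import WebRTC

// Placeholder for the processing step (e.g. selfie segmentation); the protocol
// name is illustrative, not part of WebRTC or react-native-webrtc.
protocol VideoFrameProcessor {
    func process(_ frame: RTCVideoFrame) -> RTCVideoFrame
}

// Sits between the capturer and the video source: receives frames from the
// capturer, runs them through the processor, and forwards the result to the
// real RTCVideoSource (which itself conforms to RTCVideoCapturerDelegate).
final class VideoFrameInterceptor: NSObject, RTCVideoCapturerDelegate {
    private let source: RTCVideoSource
    private let processor: VideoFrameProcessor

    init(source: RTCVideoSource, processor: VideoFrameProcessor) {
        self.source = source
        self.processor = processor
    }

    func capturer(_ capturer: RTCVideoCapturer, didCapture frame: RTCVideoFrame) {
        let processed = processor.process(frame)
        source.capturer(capturer, didCapture: processed)
    }
}

// Wiring sketch: the camera capturer delivers frames to the interceptor
// instead of directly to the video source.
// let interceptor = VideoFrameInterceptor(source: videoSource, processor: segmentationProcessor)
// let capturer = RTCCameraVideoCapturer(delegate: interceptor)
```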

Our changes to RN WebRTC are available on GitHub. All of them are committed to the virtual-background-ios branch.

It seems our thoughts about the solution are similar. If you like our approach, we can open a PR to the RN WebRTC project.

@Sidal @saghul

That is actually really close to what we need! Our current plan is to add a generic interception mechanism to RN-WebRTC so that applications can plug in different effects, instead of bundling them with RN-WebRTC.
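Roughly something like the following on the native side; every name here is hypothetical and only illustrates the shape of the mechanism: RN-WebRTC would expose a hook point, and the application registers whatever effect it wants.

```swift
import WebRTC

// Hypothetical names, only to illustrate the shape of the mechanism:
// RN-WebRTC exposes a hook point; the application supplies the actual effect.
typealias VideoFrameTransform = (RTCVideoFrame) -> RTCVideoFrame

final class VideoEffectRegistry {
    static let shared = VideoEffectRegistry()
    private var transforms: [String: VideoFrameTransform] = [:]

    // The app registers an effect under a name...
    func register(_ transform: @escaping VideoFrameTransform, for name: String) {
        transforms[name] = transform
    }

    // ...and the capture pipeline looks it up when the effect is enabled.
    func transform(named name: String) -> VideoFrameTransform? {
        transforms[name]
    }
}

// Application side: register a segmentation-based virtual background effect
// without RN-WebRTC bundling it. `applyVirtualBackground` stands for app code.
// VideoEffectRegistry.shared.register(applyVirtualBackground, for: "virtual-background")
```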

Could you perhaps create a draft PR with something like what I mention above? We can continue the discussion there and bring in our GSoC contributor too.

I tried to open a PR but got a permission error. Could you add my GitHub user (yavuzcakir) as a contributor to the react-native-webrtc project?

You don’t need to be part of the project for that. Create a branch on your own fork and send the PR, that’s the usual flow.

PR steps and related Git commands


The PR is created. Your comments are welcome :slightly_smiling_face: