Streaming realtime 3D virtual environment

Now that my wife is happily doing online ballet classes using Jitsi (thanks for that!), I was thinking of a way to make them a bit more “entertaining”.

As a professional ballet teacher she didn’t like the idea of “entertaining”, but the problem is that a live conference is not a real class: there is more distance and less group effect. So I’m wondering how well the classes will hold up if the lockdown persists for months.

So I was wondering if/how it is possible to:

  • transmit the moderator’s video to the server
  • chroma-key it on the server to extract his silhouette
  • extract a depth map / perform motion capture on the server to recover his 3D pose
  • composite this into a virtual 3D environment on the server
  • push this combined virtual+real feed out as the stream sent to all the participants
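The chroma-key and compositing steps above can be sketched quite compactly. This is a minimal NumPy sketch, not a real pipeline: the thresholds and the “green channel dominates” rule are illustrative assumptions, and the rendered 3D scene is stubbed with a blank frame.

```python
import numpy as np

def chroma_key_mask(frame, green_thresh=40):
    """Return a boolean mask of foreground (non-green-screen) pixels.

    A pixel counts as green-screen background when its green channel
    dominates both red and blue by more than `green_thresh`.
    """
    r = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    b = frame[..., 2].astype(np.int16)
    background = (g - np.maximum(r, b)) > green_thresh
    return ~background

def composite(frame, scene, mask):
    """Paste the keyed foreground over a rendered background frame."""
    out = scene.copy()
    out[mask] = frame[mask]
    return out

# Tiny synthetic example: a 2x2 RGB frame with two pure-green pixels.
frame = np.array([[[200, 30, 40], [10, 250, 20]],
                  [[90, 80, 70], [0, 255, 0]]], dtype=np.uint8)
scene = np.zeros_like(frame)   # stand-in for the rendered 3D environment
mask = chroma_key_mask(frame)
result = composite(frame, scene, mask)
```

In a real setup the `frame` would come from the moderator’s camera and `scene` from the renderer (e.g. TouchDesigner or Unreal Engine); the hard part is not this per-frame math but getting the decoded video in and out of the conferencing pipeline.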

I roughly know how to do the individual steps (probably using a combination of software such as TouchDesigner, OpenCV and Unreal Engine), but I cannot figure out whether Jitsi Meet can be integrated into such a workflow.

These steps should be done on the client side, not on the server… After processing the video, you can stream it to a virtual camera and select that as the video device in Jitsi. You’ll need a fast machine to keep the latency down.
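The virtual-camera route could look roughly like this. This is a sketch assuming the `pyvirtualcam` package plus a virtual camera driver (OBS Virtual Camera on Windows/macOS, v4l2loopback on Linux) are installed; the processing step is a trivial placeholder, not the real chroma-key/3D composite.

```python
import numpy as np

def process_frame(frame):
    """Placeholder for the real chroma-key + 3D-composite step.

    Here it just mirrors the image horizontally so the loop has
    something visible to do.
    """
    return frame[:, ::-1].copy()

def stream_to_virtual_camera(get_frame, width=1280, height=720, fps=30):
    """Push processed RGB frames to a virtual camera device.

    Jitsi can then be pointed at that device as its webcam.
    """
    import pyvirtualcam  # imported lazily; requires a virtual cam driver
    with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
        while True:
            cam.send(process_frame(get_frame()))
            cam.sleep_until_next_frame()
```

`get_frame` would typically wrap `cv2.VideoCapture(0).read()`; everything here runs on the client, which is exactly the latency constraint mentioned above.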

The problem is that the server is much more powerful than the client.
Would it be possible on the server side?

AFAIK, JVB (the component which manages the audio/video traffic) does not decode/encode the packets; it effectively only relays them. Therefore you don’t have suitable video data on the server.
