Now that my wife is happily doing online ballet classes using Jitsi (thanks for that!) I was thinking of a way to make them a bit more “entertaining”.
As a professional ballet teacher she didn’t like the idea of “entertaining”, but the problem is that a live video conference is not a real class: there is more distance and less group effect, so I’m wondering how well the classes will hold up if the lockdown persists for months.
So I was wondering if/how it is possible to:
- transmit the moderator’s video to the server
- on the server, apply chroma keying to extract her silhouette
- on the server, extract a depth map / perform motion capture to recover her 3D pose
- on the server, composite this into a virtual 3D environment
- push this combined virtual+real feed out as the video stream sent to all the participants
I probably know how to do each of the steps on its own (likely using a combination of software such as TouchDesigner, OpenCV and Unreal Engine), but I can’t figure out whether Jitsi Meet can be integrated into such a workflow.
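One common way to get an externally processed feed into Jitsi Meet without modifying Jitsi at all is a virtual camera on the moderator’s machine: the composited output is exposed as a fake webcam, and the browser simply selects it as its video source. A sketch for Linux using the v4l2loopback module and ffmpeg (device number, card label, and `composite.mp4` as a stand-in for the live render are all assumptions):

```shell
# Load the v4l2loopback kernel module to create a fake webcam device
# (/dev/video10 is an assumption; pick any free device number).
sudo modprobe v4l2loopback video_nr=10 card_label="Ballet Composite"

# Pipe the composited output (here a placeholder file, composite.mp4,
# standing in for the live TouchDesigner/Unreal render) into the virtual camera.
ffmpeg -re -i composite.mp4 -f v4l2 -pix_fmt yuv420p /dev/video10
```

With this approach the heavy compositing happens outside Jitsi entirely, and Jitsi Meet just streams the virtual camera like any other webcam, so no server-side integration is needed.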