How to get current video frame in Jitsi?

I started this discussion on GitHub:

https://github.com/jitsi/jitsi-meet/issues/5269#issuecomment-622655407

Referring to Hu Ningxin's OpenCV WASM demo:

vc = new cv.VideoCapture(video);

function processVideo() {
  stats.begin();
  vc.read(src);   // copy the current video frame into the cv.Mat `src`
  // ...
}

The object vc wraps the video element; vc.read(src) stores the current video frame in the cv.Mat src for further processing.
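For completeness, the surrounding pieces of that demo look roughly like this (a sketch from memory, so treat the details as assumptions; video and canvasOutput are placeholder element names):

const vc = new cv.VideoCapture(video);                          // wrap the <video> element
const src = new cv.Mat(video.height, video.width, cv.CV_8UC4);  // RGBA buffer for one frame

function processVideo() {
  vc.read(src);                    // copy the current frame into `src`
  // ... run whatever OpenCV processing is needed on `src` here ...
  cv.imshow('canvasOutput', src);  // draw the (processed) frame onto a canvas
  requestAnimationFrame(processVideo);
}
requestAnimationFrame(processVideo);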

What is the easiest way to do this in Jitsi?
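(Outside Jitsi, the usual way to grab the current frame of a <video> element is with a canvas, roughly as in the sketch below; the part I am missing is how to get hold of the right video element or MediaStream inside Jitsi Meet.)

// Generic browser sketch, not Jitsi-specific: `videoEl` is any playing <video> element.
const canvas = document.createElement('canvas');
canvas.width = videoEl.videoWidth;
canvas.height = videoEl.videoHeight;
const ctx = canvas.getContext('2d');
ctx.drawImage(videoEl, 0, 0);                                      // copy the current frame
const frame = ctx.getImageData(0, 0, canvas.width, canvas.height); // raw RGBA pixels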

My intention is to use Jitsi foreground images (face, body, objects, etc.) as an avatar in an Augmented Reality app I am developing.

Thank you very much!!

I found a thread on a similar issue:

https://news.ycombinator.com/item?id=22823070

Update: Following up with the search keyword “blur”, I found:

How can I write a simple test to extract the body of the person?

I had a look at the package used in the code above. Very impressive demo. It even works in the Firefox mobile browser!

https://storage.googleapis.com/tfjs-models/demos/body-pix/index.html
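As a first attempt at the "simple test" above, something like this should extract the person with BodyPix (a sketch only; it assumes the tfjs and body-pix scripts from that demo page are loaded so that bodyPix is a global, and that video and canvas are existing elements on the page):

async function testBodyPix() {
  const net = await bodyPix.load();                      // load the model once
  const segmentation = await net.segmentPerson(video);   // per-pixel person segmentation
  // Turn the segmentation into an ImageData mask (person transparent,
  // background opaque black by default) and draw it over the current frame.
  const mask = bodyPix.toMask(segmentation);
  bodyPix.drawMask(canvas, video, mask, /* opacity */ 0.7, /* blur */ 3, /* flip */ false);
}
testBodyPix();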

Update 3 (8 May 2020)

After several days (much longer than I initially estimated) of combing through the Jitsi Meet code, without any help from anyone whatsoever, I found this command:

APP.store.getState()['features/base/tracks']

So Jitsi Meet allows lots of internal variables to be accessed globally (via the console or otherwise) through the variable APP.
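For example, from the browser console the underlying MediaStream can be pulled out of that state along these lines (a sketch; the jitsiTrack field and getOriginalStream() come from my reading of the code and may differ between Jitsi Meet versions):

const tracks = APP.store.getState()['features/base/tracks'];
const localVideo = tracks.find(t => t.local && t.mediaType === 'video');
const stream = localVideo.jitsiTrack.getOriginalStream();  // the underlying MediaStream

// Feed the stream into a (hidden) <video> element so its frames can be read,
// e.g. by OpenCV.js or BodyPix as above.
const video = document.createElement('video');
video.srcObject = stream;
video.play();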

Next step:

  • Check whether the BodyPix algorithm can access the above via wrapper functions.

Update 4 (10 May 2020)

Found code from TFJS BodyPix Mask and modified JitsiStreamBlurEffect.js:
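The core of the change is roughly the following (a sketch only, not the exact code in the file; the function and variable names here are illustrative): where the blur effect draws its blurred output onto its output canvas, draw a BodyPix person mask instead, and keep exposing that canvas as the effect's video stream the way the blur effect does.

import * as bodyPix from '@tensorflow-models/body-pix';

// Illustrative only: `net` is a loaded BodyPix model, `inputVideo` the source
// <video> element, `outputCanvas` the canvas whose stream becomes the effect's output.
async function renderMask(net, inputVideo, outputCanvas) {
  const segmentation = await net.segmentPerson(inputVideo);
  const mask = bodyPix.toMask(
      segmentation,
      { r: 0, g: 0, b: 0, a: 0 },     // person pixels: fully transparent
      { r: 0, g: 0, b: 0, a: 255 });  // background pixels: opaque black
  bodyPix.drawMask(outputCanvas, inputVideo, mask, 1, 0, false);
}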
