Video Streaming Access


How do I pipe the video stream out of jvb?

What is your goal, and what do you want to do?

I want to redirect each participant's video stream into the OpenCV library and do some analysis.

The bridge does not have encoding/decoding capabilities; all it does is forward RTP packets. If you want to do video analysis, you need to do that on the client side.
Check out the virtual background, face expressions, and face-centering features in jitsi-meet. Those already do that by processing video frames.

Thank you @damencho.

This analysis has to be done for all participants' videos. So, considering network bandwidth and participant CPU/GPU, I'm thinking of doing it on the backend.

Would it be easy to get the RTP packets out of the bridge (like proxying) to another server that does the encoding/decoding and analysis?

You can do it based on the jibri approach: a participant enters the conference and receives all streams. GitHub - jitsi/jibri: Jitsi BRoadcasting Infrastructure
Or you could use gst-meet (Gstreamer IO - #8 by jbg) to achieve it.

This approach will keep your modifications external to the system so you can still update it, which is one important step to keep it in good shape as browsers change every four weeks…
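To make the second approach concrete, here is a minimal sketch of the analysis loop that would run on such an external server. It assumes the frames have already been received and decoded (e.g. by gst-meet routing each participant's video into a GStreamer appsink); the decode step itself is outside this snippet, and `mean_luminance` is just a stand-in for a real OpenCV call. All names here are hypothetical, not part of jitsi or gst-meet.

```python
# Sketch: per-participant frame analysis on a backend analysis server.
# ASSUMPTION: frames arrive already decoded (e.g. via gst-meet feeding a
# GStreamer appsink). mean_luminance() is a placeholder for an actual
# OpenCV analysis such as face detection.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class Frame:
    participant_id: str
    width: int
    height: int
    gray: bytes  # decoded 8-bit grayscale pixels, row-major

def mean_luminance(frame: Frame) -> float:
    """Toy analysis: average brightness of the frame."""
    return sum(frame.gray) / len(frame.gray)

def analyze(frames: Iterable[Frame], sample_every: int = 5) -> dict:
    """Analyze every Nth frame per participant to bound CPU cost."""
    counts: dict = {}
    results: dict = {}
    for f in frames:
        n = counts.get(f.participant_id, 0)
        counts[f.participant_id] = n + 1
        if n % sample_every == 0:
            results[f.participant_id] = mean_luminance(f)
    return results

# Example: two participants, tiny 2x2 grayscale frames.
frames = [Frame("alice", 2, 2, bytes([10, 20, 30, 40])),
          Frame("bob", 2, 2, bytes([200, 200, 200, 200]))]
print(analyze(frames))  # {'alice': 25.0, 'bob': 200.0}
```

The per-participant sampling (`sample_every`) matters here: decoding plus analyzing every frame of every stream is exactly the backend cost the next reply warns about, so a real deployment would likely sub-sample frames this way.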

It’s more scalable to do this on the client side if you can. Each client only has to process its own video, and with an efficient model they likely have the spare processing power. They also already have access to the uncompressed video by definition. By contrast, you will spend a lot on backend resources to process all streams of all clients in real time, even just the decompression, before you consider your ML model. This is similar to the reason why an SFU scales better than server-side compositing.


Thanks @damencho and @jbg,
I much appreciate your support.

Let me try both approaches to achieve this.