Using MediaStream Recording API instead of x11grab on Jibri

Hello,

We noticed that Jibri depends on virtual audio and video devices so that ffmpeg can capture the desktop audio and video, which seems to make Jibri resource-intensive. I’m wondering if anyone has tested a headless implementation of Jibri that uses the MediaStream Recording API instead, which seems to be a simpler approach.

The MediaStream Recording API makes it possible to capture the data generated by a MediaStream or HTMLMediaElement object.
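For reference, here is a minimal in-browser sketch of what that capture could look like. The choice of element, mime type, and chunk interval are illustrative assumptions, not anything Jibri does today:

```javascript
// Sketch: record a <video> element's MediaStream with MediaRecorder.
// The first <video> element and the webm/vp8+opus mime type are assumptions.
const videoEl = document.querySelector('video');
const stream = videoEl.captureStream();            // MediaStream from the element
const recorder = new MediaRecorder(stream, {
  mimeType: 'video/webm;codecs=vp8,opus',
});

const chunks = [];
recorder.ondataavailable = (event) => {
  if (event.data.size > 0) chunks.push(event.data); // collect encoded WebM chunks
};
recorder.onstop = () => {
  const recording = new Blob(chunks, { type: 'video/webm' });
  // The Blob could be uploaded or handed to an automation layer here.
  console.log('recording size (bytes):', recording.size);
};

recorder.start(1000); // emit a chunk roughly every second
// ...later: recorder.stop();
```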

This could probably be implemented using headless Chrome instead, but my guess is that there are reasons for doing it the way it’s done now. Is it feasible to modify Jibri to work this way? Could anyone explain the advantages and disadvantages of the different approaches?


I think it would be possible to feed data from Selenium directly to ffmpeg by using two UNIX pipes, one for audio and one for video.
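As a rough illustration of that idea, ffmpeg can read from two named pipes and mux whatever raw audio and video gets written into them. The pipe paths, PCM/rawvideo parameters, and output file below are assumptions for the sketch:

```javascript
// Sketch: mux raw audio + raw video from two named pipes with ffmpeg.
const { execSync, spawn } = require('child_process');

execSync('mkfifo /tmp/audio.pipe /tmp/video.pipe'); // create the two UNIX pipes

const ffmpeg = spawn('ffmpeg', [
  // audio input: 16-bit stereo PCM at 48 kHz, read from the first pipe
  '-f', 's16le', '-ar', '48000', '-ac', '2', '-i', '/tmp/audio.pipe',
  // video input: raw RGB frames, 1280x720 at 30 fps, read from the second pipe
  '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-s', '1280x720', '-r', '30', '-i', '/tmp/video.pipe',
  // encode and mux both inputs into a single file
  '-c:v', 'libx264', '-c:a', 'aac', 'recording.mp4',
], { stdio: 'inherit' });

// ffmpeg blocks until both pipes have writers; the browser-automation side
// would open /tmp/audio.pipe and /tmp/video.pipe and stream captured data in.
```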


The biggest hurdle is checking whether you can make Selenium access the MediaStream at all.

I looked at some advanced examples of using Selenium, but I only found how to feed data to the browser, not in the other direction.
https://bonigarcia.github.io/selenium-jupiter/advanced

The reason Jibri uses the desktop audio and video is to get around the above hurdle.


Thanks for the input @xranby. This is interesting; I will experiment with it. It does seem as though Puppeteer might be better suited for this task than Selenium. But what we’re talking about is probably an overhaul of the entire Jibri architecture.

The Puppeteer API can capture single images; people have tried to record video that way with limited success.
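Those experiments usually look something like the following sketch (the URL, frame interval, and output handling are assumptions). The screenshot loop is the part that limits quality, and it captures no audio at all:

```javascript
// Sketch: approximate video capture with repeated Puppeteer screenshots.
// Assumes a frames/ directory exists; URL and interval are placeholders.
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://meet.example.com/SomeRoom');

  let frame = 0;
  const timer = setInterval(async () => {
    // Each screenshot is a still; stitching them with ffmpeg afterwards gives
    // low, uneven frame rates, which is why results have been limited.
    const png = await page.screenshot({ type: 'png' });
    fs.writeFileSync(`frames/frame-${String(frame++).padStart(6, '0')}.png`, png);
  }, 100); // ~10 fps at best, usually less in practice

  setTimeout(async () => { clearInterval(timer); await browser.close(); }, 10000);
})();
```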

It may be possible to transfer data from the Chrome browser JavaScript to the Node side using page.exposeFunction.
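If that works as hoped, a rough (untested) sketch might look like the following. The function name, room URL, and base64 hand-off are assumptions; one known caveat is that arguments to an exposed function must be serializable, which is why each Blob chunk is converted before crossing the boundary:

```javascript
// Sketch: ship MediaRecorder chunks from the page to Node via page.exposeFunction.
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const out = fs.createWriteStream('recording.webm');

  // Exposed functions only receive serializable arguments, so the page sends
  // each chunk as a base64 string and Node decodes it back into bytes.
  await page.exposeFunction('sendChunk', (base64) => {
    out.write(Buffer.from(base64, 'base64'));
  });

  await page.goto('https://meet.example.com/SomeRoom');

  await page.evaluate(() => {
    const stream = document.querySelector('video').captureStream();
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });
    recorder.ondataavailable = (event) => {
      const reader = new FileReader();
      // readAsDataURL yields "data:video/webm;base64,..." — keep only the payload
      reader.onload = () => window.sendChunk(reader.result.split(',')[1]);
      reader.readAsDataURL(event.data);
    };
    recorder.start(1000); // hand over a chunk roughly every second
  });
})();
```

The base64 hop adds copying overhead, so this is not obviously cheaper than the current x11grab path; that trade-off would need measuring.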