Yes, this is another question about running Jitsi on a Raspberry Pi. Sorry. I’ve read far too many posts about it tonight, but most seem to concentrate on running a Jitsi Meet server. I feel I’m only interested in the client-end, but maybe that’s not how it works? Is there a good architectural diagram that shows how WebRTC links to the video bridges, etc.?
Ultimately, I was hoping to run a Jitsi endpoint on a Raspberry Pi so that I can throw it at my parents to plug into their telly and get a nice big videoconference going with their grandkids. Peer-to-peer primarily, but 3-way would be a bonus. I care much less about where the server components go; they can happily sit in the cloud somewhere.
Am I confusing what Jitsi is all about? Is the focus more on the server, with the client just an ancillary? Maybe my concern should simply be getting Chromium running with sufficient oomph to deliver and display low-latency audio/video in a web browser, but that does seem resource-wasteful on a constrained device. I get the impression from previous posts that getting the hardware video decoding drivers working is key to solving performance and latency issues; there’s a mix of views, from “it won’t work at all” (some time ago) to more recent posts suggesting it does work (though without commenting on latency, positively or negatively).
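For what it’s worth, the “just run it in Chromium” route can be sketched as a kiosk-mode launch straight into a meet.jit.si room. This is only a sketch, not something I’ve validated on every Pi OS build: the room name is a placeholder, and the VA-API feature flag only helps where the OS ships a Chromium build wired up to the hardware decoder.

```shell
#!/bin/sh
# Sketch: boot a Raspberry Pi straight into a Jitsi Meet room in Chromium.
# "FamilyCall" is a placeholder room name, not a real room.
ROOM_URL="https://meet.jit.si/FamilyCall"

chromium-browser \
  --kiosk \
  --enable-features=VaapiVideoDecoder \
  --autoplay-policy=no-user-gesture-required \
  "$ROOM_URL"
```

Dropping this into an autostart entry would give the plug-into-the-telly behaviour, assuming the hardware decode question pans out.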
There seem to have been attempts to build a native ARM client in the past, but the thread is from 2016. I presume none of these have made it into production?
It looks like this Gist is the best walkthrough and has been updated recently, but it still seems to focus on implementing a server to be accessed remotely from a beefier device, rather than on getting the UI up and running on the device itself. Is this the best stab to follow?
Am I taking myself down a blind alley? Or is it likely I can get reasonable performance and sufficiently low latency from some form of Jitsi running on a Raspberry Pi 4?