Let me paint a picture. You are in a chat with another person at some remote location. Sitting next to that person is a robotic arm (or other device; in this case it is Dexter, an open-source industrial robotic arm from Haddington Dynamics, Inc.). The arm is connected to that person's computer over the local network, via CAT5 because low latency is critical. The remote person gives you permission to move their robotic arm and goes off to do other things. Sitting next to you is another robotic arm. You put the local arm into "follow me" mode and take hold of it. Your local robot arm follows your movements. Those movements are transmitted to your local computer, then out through the chat of the Jitsi meeting (or some other channel if available) and into the computer on the remote end, where they are routed to the remote robot arm, making it move. From your local position, you move things around and do useful work on the remote end. If the robot on the remote end runs into something, it sends back data about the forces it is experiencing, and those are used to drive the motors in the local arm, so that you can feel the responses.
The missing bit is routing the data from the robot arm into the chat at this end, and out of the chat into the robot arm at the other end. Everything else in that paragraph already exists, except the sentences that are in italics. We can already control one (or many) robot arms with another on the local, very-low-latency network, including stunningly realistic haptic feedback; we are just missing the ability to go over the internet.
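Whatever transport ends up carrying the data, both directions reduce to serializing small state packets as text. Here is a minimal sketch of what that framing could look like; the field names and the number of joints are illustrative assumptions, not Dexter's actual wire format:

```javascript
// Frame a joint-state update as a compact JSON string suitable for a
// text chat channel, and parse it back on the receiving side.
// Field names ("t" = timestamp, "j" = joint angles, "f" = sensed forces)
// are illustrative only.

function encodeState(jointAngles, forces, timestampMs) {
  return JSON.stringify({ t: timestampMs, j: jointAngles, f: forces });
}

function decodeState(text) {
  const msg = JSON.parse(text);
  if (!Array.isArray(msg.j)) throw new Error("malformed state packet");
  return { timestampMs: msg.t, jointAngles: msg.j, forces: msg.f };
}
```

On the operator's side, the output of `encodeState` goes into the chat; on the remote side, `decodeState` turns the received text back into motor targets, and the same pair carries the force-feedback data in the opposite direction.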
There are two ways I can imagine this being done:
We host Jitsi ourselves and modify the web pages it serves so that the browser reaches out to the robots via WebSockets. We already have a browser/WebSocket interface. However, there are cross-origin scripting issues, and a bigger one: this limits the computational work on the robot's motion to what can be accomplished in the browser.
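For the browser route, the glue itself is thin: a handler that forwards incoming chat text to the robot's WebSocket and the robot's replies back into the chat. A sketch under stated assumptions — the endpoint URL and the `chat`/`socket` shapes are placeholders, and in a real page `socket` would be a `new WebSocket(...)` to the robot and `chat` the conference's message API:

```javascript
// Wire a chat channel to a robot socket in both directions. Both objects
// only need a send() method and an onmessage hook, so the same wiring
// works for a real browser WebSocket (e.g. to ws://dexter.local:3000 --
// that URL is an assumption) or for test doubles.
function bridge(chat, socket) {
  chat.onmessage = (text) => socket.send(text);   // chat -> robot commands
  socket.onmessage = (text) => chat.send(text);   // robot -> chat (force feedback)
}
```

The cross-origin concern mentioned above lives exactly here: the page served by Jitsi would be opening a WebSocket to a different origin (the robot), which the deployment has to permit.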
We already have a VERY advanced IDE, which installs as an Electron app and contains gobs of kinematics code that can help predict robot movement, reducing latency and providing a better experience. If that application could connect to the room as if it were a separate user, and send PMs to its remote counterpart, then no change to Jitsi would be required. The human operator sees and hears the remote end via video/audio, and moves the arm via text-chat PMs between the two applications.
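One caveat with the PM approach: a chat channel gives no ordering or latency guarantees, so each update should carry a sequence number, and the receiver should drop stale arrivals rather than replay them (replaying an old pose after a newer one would make the remote arm jerk backwards). A small sketch; the class and field names are my own, not part of the existing IDE:

```javascript
// Keep only the newest state update from an unordered text channel.
// Each packet is assumed to be JSON with a monotonically increasing
// "seq" field added by the sender (an illustrative convention).
class LatestStateReceiver {
  constructor() { this.lastSeq = -1; }

  // Returns the parsed update if it is newer than anything seen so far,
  // or null if it is stale or a duplicate and should be ignored.
  accept(packet) {
    const msg = JSON.parse(packet);
    if (msg.seq <= this.lastSeq) return null;
    this.lastSeq = msg.seq;
    return msg;
  }
}
```

The IDE's kinematics code can then interpolate or extrapolate between the accepted updates to smooth over the gaps that dropped or delayed messages leave behind.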
Can you advise me on how to allow an application that is not a web browser to join a meeting and send PM texts to another participant? Or, more generally, how would you implement transfer of data other than video/audio between devices attached to the PCs?