There’s an ongoing effort to add support for multiple video tracks per endpoint, which will change the way the presenter mode works. The goal is to send camera and screen as separate video streams.
For starters, only multiple video tracks (not audio) will be supported, which keeps the scope smaller. It should be easy to extend the support to audio once the appropriate mechanics are in place.
Different parts of the code are more or less prepared to handle multiple streams, but almost all parts of the system will require some modifications. Because of the assumption that an endpoint can send only one stream per media type, the signaling operates on endpoint IDs; it will have to be changed to use stream identifiers instead. The first phase will add source names to the signaling, along with support for receiving multiple streams. The second phase will add sending of multiple video tracks, which will only be available in clients that support Unified Plan. Receiving should work without any issues on Plan B clients, and those can continue to use the legacy presenter mode, where camera and screen are composed on a canvas into a single video stream.
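To make the endpoint-ID vs. stream-identifier distinction concrete, here's a minimal sketch of what source-name-based signaling could look like. All names here (SourceDescriptor, buildSourceName, the "-v0" suffix scheme) are hypothetical illustrations, not the actual API or wire format:

```typescript
// Hypothetical sketch: deriving per-track source names so signaling can
// address individual video streams instead of assuming one stream per
// endpoint. The naming scheme below is illustrative only.

type VideoType = "camera" | "desktop";

interface SourceDescriptor {
  sourceName: string;   // unique per track, e.g. "abcd1234-v0"
  endpointId: string;   // the owning endpoint
  videoType: VideoType; // camera vs. screen share
}

// Build a stable per-track identifier from the endpoint ID and a track index.
function buildSourceName(endpointId: string, trackIndex: number): string {
  return `${endpointId}-v${trackIndex}`;
}

// An endpoint sending camera + screen would then advertise two sources
// instead of being identified by its endpoint ID alone:
const endpointId = "abcd1234";
const sources: SourceDescriptor[] = [
  { sourceName: buildSourceName(endpointId, 0), endpointId, videoType: "camera" },
  { sourceName: buildSourceName(endpointId, 1), endpointId, videoType: "desktop" },
];
```

The point is that once each track has its own identifier, receivers and the bridge can subscribe to, prioritize, or drop individual streams rather than everything an endpoint sends.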
Here’s the link to the document with a more detailed description of the plan:
This post is being published in response to a request from the last community call. Please DM me and/or post here if you want to help with the implementation.