[jitsi-dev] Desktop Audio Sharing questions.


#1

I am attempting to add functionality to the screen sharing that would allow
desktop audio to be forwarded to the opposite endpoint.

Ideally per window/client audio would be supported.

The simplest solution is to modify the plugin to attach an audio track for
the desktop.
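
Concretely, something along these lines (just a sketch; the
chromeMediaSource constraints are the non-standard Chrome/NWJS ones, and
sourceId stands in for whatever id the screen picker returns):

    // Sketch: request desktop video plus system audio in one
    // getUserMedia call. The mandatory/chromeMediaSource constraints
    // are Chrome/NWJS-specific, hence the cast.
    declare const sourceId: string; // id from the screen picker

    const constraints = {
      audio: {
        mandatory: { chromeMediaSource: 'desktop' },
      },
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: sourceId,
        },
      },
    } as any;

    navigator.mediaDevices.getUserMedia(constraints)
      .then((stream: MediaStream) => {
        // stream now carries the desktop video track plus a system
        // audio track; both get attached to the peer connection.
      });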

This does in fact work, and the desktop audio is played out at the remote
endpoint; however, the audio from the browser/NWJS instance running the
conference is included as well.

The result is an unpleasant echo: a remote user speaks, their audio is
played out of the desktop, and that voice is then forwarded back to them.

I'm wondering if this is already handled somehow in WebRTC; it seems like a
common use case, and since Jitsi handles the audio and video tracks
separately, maybe it isn't being triggered.
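
(The only built-in knob I know of is the standard echoCancellation
constraint, but my understanding is that it targets microphone capture, so
I'm not sure it would ever apply to a desktop source; roughly what I'd
try:)

    // Sketch: the standard echoCancellation constraint, as used for a
    // microphone. Whether it can be combined with a desktop source
    // (and whether Chrome would honor it there) is the open question.
    navigator.mediaDevices.getUserMedia({
      audio: { echoCancellation: true },
    }).then((stream: MediaStream) => {
      // For a mic, AEC removes the remote participants' voices before
      // the track is sent; I'd want the same for the desktop track.
    });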

I'm now looking at solutions that involve getting the audio track directly
from an individual window, using an AudioContext on the client side, or
doing some processing on the server side to subtract the remote audio
tracks from the desktop audio track.
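
To make the AudioContext idea concrete, here is a naive sketch: mix the
desktop capture with phase-inverted copies of the remote tracks. (I
suspect the playout path adds enough delay that this won't cancel cleanly
without aligning the signals first, so treat it as a starting point only.)

    // Naive sketch: subtract remote audio from the desktop capture by
    // mixing in phase-inverted (gain = -1) copies of each remote track.
    declare const desktopStream: MediaStream;   // getUserMedia (desktop)
    declare const remoteStreams: MediaStream[]; // remote participants

    const ctx = new AudioContext();
    const destination = ctx.createMediaStreamDestination();

    // Desktop audio passes through unchanged.
    ctx.createMediaStreamSource(desktopStream).connect(destination);

    // Each remote track is inverted and mixed in.
    for (const remote of remoteStreams) {
      const invert = ctx.createGain();
      invert.gain.value = -1;
      ctx.createMediaStreamSource(remote).connect(invert);
      invert.connect(destination);
    }

    // destination.stream is the cleaned-up stream to send out.
    const outgoing: MediaStream = destination.stream;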

Any thoughts, ideas, or experience involving this, would be greatly
appreciated.


--
- Jason Thomas


#2

My understanding is that the way it's supposed to work is by requesting
audio in the getUserMedia call that we use for obtaining the screen-sharing
stream (jitsi-meet only requests video right now).

This is supposed to work for tab sharing in the new screen-share picker in Chrome >= 54 (see the release notes here: https://groups.google.com/forum/#!topic/discuss-webrtc/S5yex8rNIjA).
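
In an extension that would look roughly like this (a sketch; note 'audio'
in the picker's source list, and that the mandatory/chromeMediaSource
constraints are Chrome-specific):

    // Sketch: include 'audio' among the picker's source types
    // (supported for tabs in Chrome >= 54), then request audio and
    // video with the stream id the picker returns.
    declare const chrome: any; // extension API, untyped here

    chrome.desktopCapture.chooseDesktopMedia(
      ['screen', 'window', 'tab', 'audio'],
      (streamId: string) => {
        const constraints = {
          audio: {
            mandatory: {
              chromeMediaSource: 'desktop',
              chromeMediaSourceId: streamId,
            },
          },
          video: {
            mandatory: {
              chromeMediaSource: 'desktop',
              chromeMediaSourceId: streamId,
            },
          },
        } as any;
        navigator.mediaDevices.getUserMedia(constraints)
          .then((stream: MediaStream) => {
            // stream carries the tab's video and, for tab sharing,
            // its isolated audio.
          });
      });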

Regards,
Boris



#3

Hi Boris,

So, yes, requesting audio does indeed work with getUserMedia when the source is the desktop.

It appears that Windows doesn't provide any API to isolate the audio of a single window; the APIs WebRTC uses simply capture the audio for the entire desktop.

The issue is that it includes the audio from the Jitsi window itself, which makes it pretty useless for sending to the remote end of a conference.

Single-tab sharing makes sense, since the browser is able to isolate the tab's audio stream before it is mixed with the audio for the entire system.

I'm wondering if it would be feasible to grab all of the audio tracks being played by Meet and use the WebAudio API to subtract them from the outgoing desktop audio stream that I obtained via getUserMedia.
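
For the "grab all of the audio tracks being played by Meet" part, I was
picturing something like this (a sketch; it assumes the remote streams are
attached to <audio> elements via srcObject, which may not match how Meet
actually wires them up):

    // Sketch: collect the MediaStreams currently playing by walking
    // the page's <audio> elements, to feed into a WebAudio graph that
    // inverts and mixes them against the desktop capture.
    const remoteStreams: MediaStream[] = Array.from(
      document.querySelectorAll('audio'),
    ).map((el) => el.srcObject)
     .filter((s): s is MediaStream => s instanceof MediaStream);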

I was thinking you might have thoughts or ideas about this that would save me some time going down the totally wrong path.

- Jason.

