I am fairly sure how the translator handles video: it forwards each participant's video (or the last-n streams) to every other participant on the SSRC it arrives on, excluding the participant who sent it.
I remain, however, a bit confused about audio. I thought the audio went through some form of mixing process so that it is sent to each participant as a single stream (with one SSRC). Initially I thought the getAudioMixer() method on Content was responsible for this, but it clearly seems to be something related to the RTPTranslator instead.
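To make the distinction I have in mind concrete, here is a minimal sketch of what I understand "mixing" to mean at the sample level: decoded PCM from all senders is summed (with clipping) into one output frame, so only a single stream needs to be sent. This is purely illustrative and assumes 16-bit PCM; it is not libjitsi's actual mixer code, and all names here are hypothetical.

```java
// Hypothetical illustration, NOT libjitsi code. A translator forwards RTP
// packets unchanged (one SSRC per sender); a mixer instead sums decoded
// PCM from all senders into a single stream (one SSRC total).
public class AudioMixSketch {
    // Sum one frame of 16-bit PCM from several participants, clipping to
    // the valid sample range -- the core operation of an audio mixer.
    public static short[] mix(short[][] inputs) {
        int frameLen = inputs[0].length;
        short[] out = new short[frameLen];
        for (int i = 0; i < frameLen; i++) {
            int sum = 0;
            for (short[] in : inputs) {
                sum += in[i];
            }
            // Clip instead of letting the sum wrap around on overflow.
            out[i] = (short) Math.max(Short.MIN_VALUE,
                                      Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }

    public static void main(String[] args) {
        short[] alice = {1000, -2000, 30000};
        short[] bob   = {500,  -500,  10000};
        short[] mixed = mix(new short[][] {alice, bob});
        // 30000 + 10000 exceeds Short.MAX_VALUE and is clipped to 32767.
        System.out.println(mixed[0] + " " + mixed[1] + " " + mixed[2]);
    }
}
```

If the bridge really only translates (forwards) audio rather than doing anything like the above, that would explain why each participant still sees multiple audio SSRCs.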
Can anyone please tell me a bit about how the RTPTranslator handles audio, and how audio mixing fits in?