Thanks for the information.
Is there any reason, other than the extra CPU power required by the mixer,
that you guys recommend translator mode?
That's the main reason, but the tradeoff is huge: you need to decode all incoming streams, create a separate mix for each receiver, and encode each mix, all to save something like 40kbps per (unmuted) stream.
There's also additional delay, because the server has to use a jitter buffer, and clients can't distinguish between the different speakers (without RFC6465 support in the browser or client).
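To illustrate the RFC6465 point: the mixer-to-client audio level header extension carries one byte per contributing source, where the low 7 bits are the level in -dBov (0 = loudest, 127 = silence) and the bytes line up with the CSRC list in the RTP header. A minimal parsing sketch (this is illustrative, not code from the bridge):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of parsing the RFC6465 mixer-to-client audio level
 * header extension. Each byte carries a 7-bit audio level for the
 * corresponding CSRC in the RTP header, which is what lets a client
 * tell the speakers in a mixed stream apart.
 */
public class Rfc6465Levels {
    /**
     * @param csrcs the CSRC list from the RTP header, in order
     * @param ext   the payload bytes of the audio-level header extension
     * @return a map from CSRC to level in -dBov (0 = loudest, 127 = silence)
     */
    public static Map<Long, Integer> parse(long[] csrcs, byte[] ext) {
        Map<Long, Integer> levels = new LinkedHashMap<>();
        int n = Math.min(csrcs.length, ext.length);
        for (int i = 0; i < n; i++) {
            // The MSB of each byte is reserved; the low 7 bits are the level.
            levels.put(csrcs[i], ext[i] & 0x7F);
        }
        return levels;
    }
}
```

Without this extension the mix arrives as a single stream with one SSRC, so the receiver has no per-speaker information at all.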
I also need recording of separate audio streams in mixer mode. Can
you point me in the right direction for where to change this?
The recorder which we use in the bridge requires an RTPTranslator. It's initialized here: https://github.com/jitsi/jitsi-videobridge/blob/master/src/main/java/org/jitsi/videobridge/Content.java#L758
If I remember correctly, a libjitsi MediaStream can't have both a MediaDevice and an RTPTranslator. You may be able to work around this by keeping the RTPTranslator separate from the MediaStream-s, and feeding it packets in another way (e.g. from RtpChannel#acceptDataInputStreamDatagramPacket).
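One way to picture that workaround: hook the point where incoming packets arrive (something like RtpChannel#acceptDataInputStreamDatagramPacket) and mirror each packet to a side consumer that feeds the standalone RTPTranslator/recorder, while the original continues into the mixing path. A hypothetical sketch, with all names invented for illustration (this is not the libjitsi API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Hypothetical packet "tee": mirrors every incoming RTP datagram to
 * registered side consumers (e.g. a recorder fed by a standalone
 * RTPTranslator) and then returns the packet unchanged so the normal
 * mixing path still sees it. Purely a sketch of the idea.
 */
public class PacketTee {
    private final List<Consumer<byte[]>> taps = new ArrayList<>();

    /** Register a side consumer, e.g. the recorder's translator feed. */
    public void addTap(Consumer<byte[]> tap) {
        taps.add(tap);
    }

    /** Called for each incoming packet; hands copies to the taps. */
    public byte[] accept(byte[] packet) {
        // Give each tap its own copy so the mixer and the recorder
        // can't mutate each other's buffers.
        for (Consumer<byte[]> tap : taps) {
            tap.accept(packet.clone());
        }
        return packet;
    }
}
```

The design point is just that the recorder's translator never has to be attached to the same MediaStream as the MediaDevice; it only needs to see the raw packets.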
On 29/03/16 01:32, Somil Bansal wrote: