Get audioLevel of SIP peer


Hello, I have set up a conference between a web-based client and a SIP client who is called via Jigasi. The conference works as intended and the web-based party can hear the remote SIP party. However, I am trying to automate this into a test and I am running into some trouble. For the test, I have set up Jigasi to call a number which just plays a pre-recorded message. Then, in the web-based client side I am measuring the audioLevel to detect fluctuations in volume, which would indicate that the track is indeed audible. The problem is that the audioLevel for the remote audio track is always -1.

The code which I am executing (in the console, against the app's state) is the following:

    ['features/base/tracks'].find(track => track.mediaType === 'audio' && !track.local).jitsiTrack.audioLevel
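To automate the fluctuation check in the test, I poll that value over a short window; roughly like this (a sketch: `getTrack` stands in for the lookup above, and I treat `-1` as "no measurement yet" rather than silence):

```javascript
// Sketch: poll a track's audioLevel and collect samples, so a test can
// assert that the level actually fluctuates (i.e. the track is audible).
// `getTrack` is any function returning an object with an `audioLevel`
// property, e.g. the jitsiTrack lookup from the console snippet above.
function sampleAudioLevels(getTrack, { samples = 10, intervalMs = 500 } = {}) {
  return new Promise(resolve => {
    const levels = [];
    const timer = setInterval(() => {
      const track = getTrack();
      // -1 means "no measurement yet" rather than silence.
      levels.push(track ? track.audioLevel : -1);
      if (levels.length >= samples) {
        clearInterval(timer);
        resolve(levels);
      }
    }, intervalMs);
  });
}

// A track is plausibly audible if we saw at least two distinct
// non-negative levels during the sampling window.
function isAudible(levels) {
  const valid = levels.filter(level => level >= 0);
  return new Set(valid).size >= 2;
}
```

The `isAudible` heuristic and its threshold are my own choice for the test, not anything jitsi provides.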

With normal clients this value shows the audioLevel, which varies with the loudness of the voice. However, this doesn’t seem to work with SIP client.

I would appreciate any help with the following questions:

  1. How is it possible to detect the audioLevel in this case?
  2. Chrome webrtc-internals provides the value audioOutputLevel which could be suitable for this scenario. Do you know how to read this value from the developer console?
  3. Is it possible to get access to the RTCPeerConnection that jitsi establishes? I could call RTCPeerConnection.getStats() and try to get the value from there.
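For question 3, what I have in mind is something like the helper below: once a stats report is obtained from `getStats()`, scan it for audio levels. This is only a sketch (the function name is mine); it tries to handle both the standard `audioLevel` field (0..1, on `track`/`inbound-rtp` entries) and the legacy `audioOutputLevel` that webrtc-internals shows, which as far as I know is a 16-bit amplitude:

```javascript
// Sketch: pull audio levels out of an RTCStatsReport-like iterable.
// Works with the Map returned by pc.getStats(), but is typed loosely so
// it can also be exercised with plain stub objects.
function extractAudioLevels(statsReport) {
  const levels = [];
  statsReport.forEach(stat => {
    const isAudio = stat.kind === 'audio' || stat.mediaType === 'audio';
    if (!isAudio && stat.type !== 'track') {
      return;
    }
    if (typeof stat.audioLevel === 'number') {
      // Spec-compliant value, already in the range 0..1.
      levels.push(stat.audioLevel);
    } else if (typeof stat.audioOutputLevel === 'number') {
      // Legacy Chrome value (the one webrtc-internals displays) is a
      // 16-bit amplitude; normalise it to 0..1.
      levels.push(Number(stat.audioOutputLevel) / 32767);
    }
  });
  return levels;
}

// Browser usage, assuming a RTCPeerConnection `pc` can be obtained:
//   const report = await pc.getStats();
//   console.log(extractAudioLevels(report));
```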


I just tested that: calling a SIP number from jitsi-meet through Jigasi, and that code works; I see the audio levels on the Jigasi remote track.
Is Jigasi the only other participant in the call? I mean, could it be that you have another peer which is silent, and your code is reading the audio level of that other participant?


Hi Damencho, thanks for testing. That is indeed different behaviour from what I am seeing. I am not running upstream, though, but a version I installed from the apt repositories a couple of months ago.

There are only 2 participants in the room, and there are 3 tracks: local audio, local video, and remote audio (jigasi). I just realised that the audioLevel for the local audio track is also always -1. Any idea how I could identify the source of this error?
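To rule out the silent-peer possibility, I can dump every track with its origin and current level. A sketch (only `mediaType`, `local`, and `jitsiTrack.audioLevel` come from the snippet earlier in the thread; the helper name and output shape are my own):

```javascript
// Sketch: summarise all tracks in the conference so the -1 readings can
// be matched to a concrete track (local vs. remote, audio vs. video).
function summariseTracks(tracks) {
  return tracks.map(track => ({
    mediaType: track.mediaType,
    local: Boolean(track.local),
    audioLevel: track.jitsiTrack ? track.jitsiTrack.audioLevel : undefined
  }));
}

// In the console this would be fed the array behind
// ['features/base/tracks'] and the result logged with console.table.
```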


What is your browser version?


I am using Chrome Version 69.0.3497.92 (Official Build) (64-bit) on Mac.

The logger prints a warning to the console. I don't know whether it is related to the problem:

[modules/statistics/AnalyticsAdapter.js] <e.value>:  Not sending an event, disposed.
n @ Logger.js:125
value @ AnalyticsAdapter.js:178
i.sendAnalytics @ statistics.js:751
value @ AvgRTPStatsReporter.js:725
n._onLocalStatsUpdated @ AvgRTPStatsReporter.js:526
n.emit @ events.js:96
value @ ConnectionQuality.js:478
n.emit @ events.js:96
o._processAndEmitReport @ RTPStatsCollector.js:870
o.processStatsReport @ RTPStatsCollector.js:755
(anonymous) @ RTPStatsCollector.js:372

Dominant speaker detection seems to work, though: if I mute either participant (in the browser, or directly on the phone), the other one becomes the dominant speaker. I don't know whether that is calculated from audio levels as well. Any clues?