I am trying to do some audio filtering in my custom Jitsi project, which I have written using React. I'm having trouble implementing the track's setEffect method that is mentioned in the documentation. I have pasted the relevant functions I'm trying to use here. If someone could help me out with the syntax for applying some sort of panning to an audio track, that would be great! Thanks.
In my experience with WebAudio, I would think the best way to do this is to create a source node via createMediaElementSource (which yields a MediaElementAudioSourceNode) or createMediaStreamSource (which yields a MediaStreamAudioSourceNode). This can be tricky, since Firefox and Chrome each behave a bit differently here.
I've found createMediaStreamSource works well in Chrome for piping the audio into the WebAudio context. From there, it's just a matter of adding the effects you like and calling .connect() to wire the effect node (e.g. a StereoPannerNode) into the audio graph.
I was able to do most of this in the AudioTrack component. My guess is that to stop the effect you would need to tear down the graph with a disconnect and check whether the underlying audio track starts playing again.
However, if anyone else has any experience piping the audio tracks into AudioNodes in a browser compatible way, it would be great to hear from you!
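For anyone wanting a concrete starting point, here is a minimal sketch of the approach described above. The function names (`applyPan`, `clampPan`) are my own, and the WebAudio calls (`AudioContext`, `createMediaStreamSource`, `createStereoPanner`) are browser-only APIs, so this is an in-browser sketch rather than a drop-in solution for any particular Jitsi codebase:

```javascript
// Clamp an arbitrary value into the [-1, 1] range that
// StereoPannerNode.pan accepts (-1 = full left, 1 = full right).
function clampPan(value) {
  return Math.min(1, Math.max(-1, value));
}

// Pipe a MediaStream through a StereoPannerNode and return the
// panner so it can be adjusted (or disconnected) later.
// Hypothetical helper; browser-only, will not run under Node.
function applyPan(mediaStream, panValue) {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(mediaStream);
  const panner = ctx.createStereoPanner();
  panner.pan.value = clampPan(panValue);

  // source -> panner -> speakers
  source.connect(panner);
  panner.connect(ctx.destination);
  return panner;
}
```

To undo the effect, calling `panner.disconnect()` (and reconnecting the source directly to the destination, or letting the original element play) is the usual pattern, as suggested above.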
Thanks for the reply @jacksongoode, I appreciate the feedback. I have been trying exactly what you described, but I'm not achieving the desired effect. I'll attach the snippet of my code below. I am mapping all the audio tracks in the conference into this object and attempting to do the panning there.
Have a look at this old commit of mine where I was testing out the StereoPannerNode. In my case, I duplicated the volume slider as a pan slider. It might give you some insight into where everything should go in the AudioTrack component.
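To give a rough idea of how a slider like that could drive the panner, here is a small sketch. I'm assuming a 0–100 slider like the volume one; `sliderToPan` and `onPanSliderChange` are hypothetical names, and `panner` is the StereoPannerNode created when the track was wired into the audio graph:

```javascript
// Map a 0-100 slider position onto the [-1, 1] pan range,
// with 50 meaning center.
function sliderToPan(sliderValue) {
  return (sliderValue - 50) / 50;
}

// Hypothetical change handler for the slider. Using the
// AudioParam automation method setTargetAtTime (rather than
// assigning pan.value directly) smooths the change and avoids
// audible clicks on abrupt jumps. Browser-only code.
function onPanSliderChange(panner, sliderValue) {
  panner.pan.setTargetAtTime(sliderToPan(sliderValue), 0, 0.02);
}
```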