I solved the first problem (I wasn't sending videoType tags in my presence
messages). The new problem is that my video track is coming in as muted:
the MediaStreamTrack object has its read-only "muted" property set to true.
The WebRTC spec says
that this means "the source is temporarily unable to provide the track with
data". I can tell that the data from the Pi camera is getting through to
the browser because Chrome's webrtc-internals tool shows an RTC connection
called "Conn-audio-1-0" (I assume that this single connection carries both
audio and video because of bundling), and the bitsReceivedPerSecond graph
for it spikes when I wiggle my fingers over the Pi camera. It also shows
two sources coming from the Pi, and the video source shows about 5k bytes
received when the connection starts, and nothing after that. I don't trust
those 100% though since the audio source doesn't show any bits received
even though I am able to hear audio from the Pi's microphone.
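For the record, here's the kind of probe I'm planning to attach to the incoming track so I can see exactly when it flips. The `muted` property and the `mute`/`unmute` events come straight from the spec's MediaStreamTrack interface; I haven't confirmed Chrome fires them in this situation, so treat it as a sketch:

```javascript
// Watch a MediaStreamTrack-like object for mute state transitions.
// `track` is expected to expose a read-only `muted` property plus
// "mute"/"unmute" events (per the spec); the helper itself is plain JS,
// so it can be exercised with a stand-in object outside the browser.
function watchTrackMuted(track, log) {
  log(`initial muted state: ${track.muted}`);
  track.addEventListener('mute', () =>
    log('track muted (source stopped providing data)'));
  track.addEventListener('unmute', () =>
    log('track unmuted (data flowing again)'));
}
```

In the page I'd call it as `watchTrackMuted(remoteStream.getVideoTracks()[0], console.log)`.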
Any ideas or guidance here would be helpful!
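In case it helps anyone reproduce my reading of the graphs, here's roughly how I'd diff two getStats() snapshots from script instead of eyeballing webrtc-internals. The report shape assumed here (an array of `{ type, mediaType, bytesReceived }` objects following the spec's inbound-rtp stats) may not match what Chrome actually exposes in its legacy stats, so this is only a sketch:

```javascript
// Diff two getStats() snapshots and return bytes received per media type.
// A video delta stuck at 0 while audio keeps climbing would confirm what
// the webrtc-internals graphs suggest. Report fields are assumed from the
// spec, not verified against Chrome's current output.
function inboundByteDeltas(prevReports, currReports) {
  const prevByMedia = {};
  for (const r of prevReports) {
    if (r.type === 'inbound-rtp') prevByMedia[r.mediaType] = r.bytesReceived;
  }
  const deltas = {};
  for (const r of currReports) {
    if (r.type === 'inbound-rtp') {
      deltas[r.mediaType] = r.bytesReceived - (prevByMedia[r.mediaType] || 0);
    }
  }
  return deltas;
}
```

The idea would be to poll the RTCPeerConnection's stats once a second and feed successive snapshots through this.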
PS, it would be really helpful if someone could send me any documentation
on jitsi's custom xmlns, even if it's unpolished or internal. I mean
namespaces like http://jitsi.org/jitmeet/video and
http://jitsi.org/jitmeet/user-agent. Right now I'm just combing the prosody
logs and making educated guesses, and it'd be nice to be more methodical
about it.
On Tue, Jan 3, 2017 at 4:05 PM, V User <firstname.lastname@example.org> wrote:
I'm pretty close to getting my whole jitsi-for-pi thing working, but I'm
having a couple of issues with the browser end now. Specifically, when it
initially sets the audio level here, it
throws an error because the LargeVideo object's state is undefined. Do you
have any pointers on where to start investigating this, like how the
LargeVideo's state ought to be getting set? I'm going to dive in myself,
but I'm not super familiar with React/Redux and I was hoping someone could
point me in the right direction.