Update and Questions!
Update from Last time: Video call worked outside school network.
Research update on bitrate adaptation (task 3):
I have read and digested two papers, "Rate Adaptation for Adaptive HTTP Streaming" (http://dl.acm.org/citation.cfm?id=1943575) and "SARA" (ieeexplore.ieee.org/iel7/7226899/7247062/07247436.pdf), and I have a question regarding implementing adaptive bitrate streaming in Jitsi.
In the first paper, they identify two types of rate adaptation techniques: sender-driven rate adaptation and receiver-driven rate adaptation.
The basic difference is that in sender-driven rate adaptation, the sender (the client generating the video stream, or a server) estimates the network condition and determines which bitrate to use when encoding the video stream.
In receiver-driven rate adaptation, the receiver (the client requesting the video stream) computes the required bitrate and sends it to the server or client generating the video stream. This model assumes that the sender encodes the video segments at multiple bitrates, so that a requested bitrate can be matched to the closest available video segment. This model also has the advantage of scalability; I am thinking of conference calls where different callers have different network conditions.
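To make the receiver-driven model concrete, here is a minimal sketch of the selection step: the receiver picks the highest available rendition at or below its throughput estimate. The class and names below are illustrative, not Jitsi API.

```java
import java.util.Arrays;

public class RenditionPicker {
    /**
     * Returns the highest bitrate in availableKbps that does not exceed
     * estimatedKbps; falls back to the lowest rendition if none fits.
     */
    public static int pick(int[] availableKbps, int estimatedKbps) {
        int[] sorted = availableKbps.clone();
        Arrays.sort(sorted);
        int choice = sorted[0]; // lowest rendition as a safety net
        for (int kbps : sorted) {
            if (kbps <= estimatedKbps) {
                choice = kbps;
            }
        }
        return choice;
    }
}
```

So with renditions of 300/600/1200 kbps and an estimate of 800 kbps, the receiver would request the 600 kbps segment.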
I will save the maths for now. For task 3 I will be adapting and implementing the algorithm described in the "Rate Adaptation for Adaptive HTTP Streaming" paper. SARA uses video segment size, buffer occupancy and throughput to estimate the bitrate, which I think will be too heavyweight for live streaming.
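As a rough illustration of the sender-side idea (a simple exponentially weighted moving average, not the paper's exact algorithm): the sender keeps a smoothed throughput estimate and derives a target encoder bitrate from it with a safety margin. All names and the 10% margin are my assumptions.

```java
public class ThroughputSmoother {
    private final double alpha;       // smoothing factor, 0 < alpha <= 1
    private double smoothedKbps = -1; // -1 means "no sample yet"

    public ThroughputSmoother(double alpha) {
        this.alpha = alpha;
    }

    /** Feeds a new raw throughput sample and returns the smoothed value. */
    public double addSample(double rawKbps) {
        if (smoothedKbps < 0) {
            smoothedKbps = rawKbps; // first sample initializes the average
        } else {
            smoothedKbps = alpha * rawKbps + (1 - alpha) * smoothedKbps;
        }
        return smoothedKbps;
    }

    /** Target bitrate: smoothed throughput minus a 10% safety margin. */
    public double targetKbps() {
        return smoothedKbps < 0 ? 0 : 0.9 * smoothedKbps;
    }
}
```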
From my understanding so far, I think I am expected to implement the rate adaptation logic on the sending side, i.e. sender-driven rate adaptation. Is this correct?
Is it possible that during a call both clients encode at multiple bitrates, to enable receiver-driven rate adaptation? I know this would consume memory and resources, but is it an option?
I have also been going through the libjitsi source and noticed that for every implementation in the impl folder there seems to be an abstract class or interface in the "service" folder. I am not sure why this is so and would like to know.
Thanks for your response. I used regular XMPP accounts and it worked; video calls and desktop sharing became enabled. However, if I try to initiate a call (voice/video/desktop sharing) from either of the two ends (Windows running the downloaded Jitsi MSI, and Ubuntu running a compiled Jitsi version), it fails.
The caller displays:
Call ended by remote side. Reason: failed-application. Error: Could not establish connection (ICE failed and no relay found)
and callee displays - Error: Could not establish connection (ICE failed and no relay found).
As for Google Talk, voice calls work. I don't know what might cause the error, though I don't think it's the Jitsi client. I will retry outside the school network.
With regard to this:
- "The Jitsi client should just tell libjitsi (i.e. the MediaStream instance) to use adaptive rate
control, and the library should handle it from there."
This makes sense. I will look more into this. Thanks.
Thanks, got it. mvn install worked.
For this task:
* 3. Enable the use of bandwidth estimation and set the encoder's target
I think I figured out where changes need to be made:
in the createVideoAdvancedSettings() method.
First, if I implement the Listener interface from the Jitsi project, I can register a listener through the BandwidthEstimator's addListener(Listener listener), which delivers the current estimated bandwidth.
Then I can use some bitrate estimation equation/formula, which I am currently looking into, to set the new bitrate in the videoBitrate JSpinner. I see that a ChangeListener has already been added to the JSpinner to pick up any bitrate changes.
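A sketch of the wiring I have in mind is below. The BandwidthEstimator and Listener types here are local stand-ins mirroring the interface named above; only addListener(Listener) comes from the discussion, the rest (method names, the 90% policy) is assumed for illustration.

```java
public class AdaptiveBitrateWiring {
    /** Stand-in mirroring the estimator interface mentioned above. */
    public interface BandwidthEstimator {
        interface Listener {
            /** Called with a new bandwidth estimate in bits per second. */
            void bandwidthEstimationChanged(long newBps);
        }
        void addListener(Listener listener);
    }

    /** Placeholder policy: target 90% of the estimate, leaving headroom. */
    public static long targetFromEstimate(long estimatedBps) {
        return Math.round(0.9 * estimatedBps);
    }

    /** Registers a listener that maps each new estimate to a target rate. */
    public static void wire(BandwidthEstimator estimator) {
        estimator.addListener(newBps -> {
            long targetBps = targetFromEstimate(newBps);
            // In the Jitsi UI this is where the videoBitrate JSpinner would
            // be updated; its existing ChangeListener then applies the value.
            System.out.println("New target bitrate: " + targetBps + " bps");
        });
    }
}
```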
Q1. Is the above reasoning sound?
Good job researching the problem! I think what you describe could work
(I'm a bit rusty with my knowledge of the jitsi client itself). However,
I would like to do this in a different way. The Jitsi client should
just tell libjitsi (i.e. the MediaStream instance) to use adaptive rate
control, and the library should handle it from there.
I think that right now we don't support setting a target bitrate on the
encoder dynamically, so that would need to be implemented.
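For what it's worth, the missing piece might look something like the sketch below: an encoder control that accepts a new target bitrate mid-stream, with the request clamped to the encoder's supported range. This is purely a guess at the shape of the API, not existing libjitsi code.

```java
public class DynamicEncoderTarget {
    /** Hypothetical control surface for a running video encoder. */
    public interface RateControllableEncoder {
        void setTargetBitrate(long bps);
    }

    /** Clamps a requested rate into [minBps, maxBps] before applying it. */
    public static long clamp(long requestedBps, long minBps, long maxBps) {
        return Math.max(minBps, Math.min(maxBps, requestedBps));
    }

    public static void apply(RateControllableEncoder encoder,
                             long requestedBps, long minBps, long maxBps) {
        encoder.setTargetBitrate(clamp(requestedBps, minBps, maxBps));
    }
}
```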
One other thing: I wanted to test a video call with another contact on Google Talk (Hangouts) using the Jitsi client, but the video call button wasn't enabled (both the libjitsi library I built and the old one worked, but video wasn't enabled). Using the Jitsi client for chatting works, and my webcam works when viewed under devices (i.e. from the Jitsi chat console >
Q2) A quick question: what happens when all encoding options are checked
(Tools > Options > Encoding)?
I'm not sure I understand the question. Is it about which encoding
ends up being used? When a call is initialized there is an offer-answer
procedure, in which the two sides exchange their lists of supported
formats. They then use the highest-priority format which is supported by both sides.
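The selection step above can be illustrated with a toy sketch (not SDP or Jitsi code): each side lists formats in priority order, and the offerer's highest-priority format that the answerer also supports wins.

```java
import java.util.Arrays;
import java.util.List;

public class FormatNegotiation {
    /**
     * Returns the first entry of the offer (highest priority) that also
     * appears in the answer, or null when there is no common format.
     */
    public static String select(List<String> offer, List<String> answer) {
        for (String fmt : offer) {
            if (answer.contains(fmt)) {
                return fmt;
            }
        }
        return null; // no common format: this media type cannot be used
    }
}
```

For example, if the offer lists VP8, H264, H263 and the answer only supports H264 and H263, the call uses H264.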
Q3) How do I enable the video button ("Call contact")?
I think this might have to do with Hangouts. Can you try with some
regular XMPP accounts? There is a list of public XMPP servers here: