I am just getting started in trying to understand how I might be able to
use libjitsi to write a server side webrtc peer. The server side peer
will need to use the soundcard device on the server machine (both input
and output - audio only)
I believe that libjitsi is fit for this purpose, but I am having a hard
time getting traction on understanding a few basic things. I come from
a J2EE line of work, so all these standards/protocols/APIs are all new
to me. And this is just a side, fun, learning project.
Looking at this stack: does libjitsi basically provide an API over the
SRTP and SCTP protocol layers?
Yes. It also provides an API to acquire access to local media (i.e. mic and camera).
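For example, acquiring the default audio capture device (the server's soundcard) looks roughly like this -- a sketch only, since exact package names have moved around between libjitsi versions:

```java
import org.jitsi.service.libjitsi.LibJitsi;
import org.jitsi.service.neomedia.MediaService;
import org.jitsi.service.neomedia.MediaType;
import org.jitsi.service.neomedia.MediaUseCase;
import org.jitsi.service.neomedia.device.MediaDevice;

public class LocalAudio {
    public static void main(String[] args) {
        // Initialize libjitsi (the non-OSGi entry point).
        LibJitsi.start();
        try {
            MediaService mediaService = LibJitsi.getMediaService();
            // The default audio capture device, i.e. the soundcard input.
            MediaDevice audioDevice =
                mediaService.getDefaultDevice(MediaType.AUDIO, MediaUseCase.CALL);
            System.out.println("Capture device: " + audioDevice);
        } finally {
            LibJitsi.stop();
        }
    }
}
```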
Does libjitsi provide everything needed in that stack, except the top
WebRTC layer? (SRTP, SCTP, DTLS, STUN, TURN, ICE)
Libjitsi can certainly be used to create a WebRTC endpoint (jitsi-videobridge is the canonical example), but it doesn't provide all the components. It provides an API (MediaStream, not to be confused with a WebRTC MediaStream) which can be used to send/receive SRTP with DTLS. However, it does not automatically provide:
1. Connection establishment (e.g. ICE)
2. Signalling (e.g. SDP-based offer/answer)
For point 1, you can use a library such as ice4j. We use it in jitsi-videobridge and it is WebRTC-compatible.
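Setting up an ice4j Agent for a single audio stream looks roughly like this (a sketch; the STUN server address is a placeholder, and method signatures may differ slightly between ice4j versions):

```java
import org.ice4j.Transport;
import org.ice4j.TransportAddress;
import org.ice4j.ice.Agent;
import org.ice4j.ice.IceMediaStream;
import org.ice4j.ice.harvest.StunCandidateHarvester;

// Create an ICE agent with one "audio" stream and one RTP component.
Agent agent = new Agent();
agent.setControlling(false); // we act as the answerer
agent.addCandidateHarvester(new StunCandidateHarvester(
        new TransportAddress("stun.example.com", 3478, Transport.UDP)));
IceMediaStream iceStream = agent.createMediaStream("audio");
// Harvest host/STUN candidates, binding to a port in [10000, 11000].
agent.createComponent(iceStream, Transport.UDP, 10000, 10000, 11000);
```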
For point 2, there are many options, and which one is most appropriate depends on your use case. WebRTC does signalling using SDP blobs. One of the simplest approaches would be to have your application use the same.
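To make the SDP part concrete, here is a self-contained example (plain string handling, no libjitsi involved; the offer and its attribute values are made up for illustration) that pulls the two pieces you need out of an offer -- the ICE credentials and the DTLS fingerprint:

```java
public class SdpAttributes {
    // A minimal WebRTC-style audio offer; all values are illustrative.
    static final String OFFER =
        "v=0\r\n" +
        "o=- 4611731400430051336 2 IN IP4 127.0.0.1\r\n" +
        "s=-\r\n" +
        "t=0 0\r\n" +
        "m=audio 9 UDP/TLS/RTP/SAVPF 111\r\n" +
        "c=IN IP4 0.0.0.0\r\n" +
        "a=rtpmap:111 opus/48000/2\r\n" +
        "a=ice-ufrag:4ZcD\r\n" +
        "a=ice-pwd:2/1muCWoOi3uLifh0NuRHlhw\r\n" +
        "a=fingerprint:sha-256 19:E2:1C:3B:4B:9F:81:E6:B8:5C:F4:A5:A8:D8" +
        ":73:04:BB:05:2F:70:9F:04:A9:0E:05:E9:26:33:E8:70:88:A2\r\n" +
        "a=setup:actpass\r\n";

    // Returns the value of the first "a=<name>:..." line, or null if absent.
    static String extractAttribute(String sdp, String name) {
        String prefix = "a=" + name + ":";
        for (String line : sdp.split("\r\n")) {
            if (line.startsWith(prefix)) {
                return line.substring(prefix.length());
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println("ufrag:       " + extractAttribute(OFFER, "ice-ufrag"));
        System.out.println("fingerprint: " + extractAttribute(OFFER, "fingerprint"));
    }
}
```

In practice you would parse the whole offer (m-lines, candidates, payload types), but the principle is the same: the SDP blob the browser sends carries everything the steps below consume.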
Simplifying somewhat, your application will have a libjitsi MediaStream, an ice4j Agent, and something to handle signalling (SDP). It will:
1. Receive an offer
2. Setup the Agent with the transport information from the offer
3. Setup the MediaStream with information from the offer (codec information, DTLS fingerprints, etc)
4. Create and send answer using information from the MediaStream and Agent
5. Start the Agent and wait until it completes -- this effectively takes care of STUN/TURN/ICE
6. Get sockets from the Agent and pass them to the MediaStream
7. Start the MediaStream -- this will establish DTLS and then start to send/receive SRTP
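Put together, the steps above look roughly like this. This is a hedged sketch, not a working program: the libjitsi/ice4j calls are real API names to the best of my knowledge (check the javadoc of the version you use), while Offer, parseOffer() and sendAnswer() are hypothetical signalling glue, and error handling and waiting logic are elided:

```java
// Assumes: mediaService (libjitsi MediaService) and agent (ice4j Agent with
// an "audio" stream) already exist; Offer/parseOffer()/sendAnswer() are
// hypothetical helpers for your signalling layer.

// 1-2. Receive the offer and give the Agent the remote transport info.
Offer offer = parseOffer(offerSdp);                         // hypothetical
IceMediaStream iceStream = agent.getStream("audio");
iceStream.setRemoteUfrag(offer.ufrag);
iceStream.setRemotePassword(offer.password);
// ... also add the remote candidates from the offer to the component ...

// 3. Set up the MediaStream with DTLS-SRTP and info from the offer.
DtlsControl dtls =
    (DtlsControl) mediaService.createSrtpControl(SrtpControlType.DTLS_SRTP);
dtls.setSetup(DtlsControl.Setup.ACTIVE);
dtls.setRemoteFingerprints(offer.fingerprints);  // hash function -> fingerprint
MediaStream mediaStream = mediaService.createMediaStream(
        null,
        mediaService.getDefaultDevice(MediaType.AUDIO, MediaUseCase.CALL),
        dtls);
mediaStream.setDirection(MediaDirection.SENDRECV);

// 4. Build and send the answer: local candidates, ufrag/pwd, our fingerprint.
sendAnswer(agent, dtls.getLocalFingerprintHashFunction(),    // hypothetical
        dtls.getLocalFingerprint());

// 5. Run ICE -- this performs the STUN/TURN connectivity checks.
agent.startConnectivityEstablishment();
// ... block until agent.getState() == IceProcessingState.COMPLETED ...

// 6-7. Wire the selected socket into the MediaStream and start it.
Component rtp = iceStream.getComponent(Component.RTP);
DatagramSocket socket =
        rtp.getSelectedPair().getLocalCandidate().getDatagramSocket();
TransportAddress remote =
        rtp.getSelectedPair().getRemoteCandidate().getTransportAddress();
mediaStream.setConnector(new DefaultStreamConnector(socket, null));
mediaStream.setTarget(new MediaStreamTarget(remote, remote)); // rtcp-mux
mediaStream.start(); // DTLS handshake, then SRTP starts to flow
```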
SRTP and SCTP are protocols, but who defines the API for them? Will my
server side client be using those APIs directly?
It will not use WebRTC APIs at all. Libjitsi and ice4j are independent implementations of the protocols and don't have the same API.
I recall reading that Jitsi is limited to a mono-only audio stream?
This is true -- currently libjitsi only supports mono.
I've seen posts from user "mondaine" that he got something like this
working. Any direction anyone could give would be great.
Jitsi-videobridge would probably serve as the best example. It does pretty much what I described above (and also SCTP), but uses COLIBRI for signalling. The code might be complicated in places, but it is mostly clearly written and documented.
I hope this helps more than it confuses.
On 12/03/15 18:31, Scott McClements wrote: