[jitsi-dev] VLC unencrypted streams and the Videobridge


#1

As I understand it, the Videobridge can cope with unencrypted streams as part of a conference. Hence, for example, something could be played by VLC and directed as a stream at an IP address and port, and then received by the conference. Similarly, VLC could respond to an unencrypted stream from a conference. I know how to set VLC up to send and receive streams. Hence a conference could have items played by VLC (or anything else) mixed with live video from WebRTC, all made available through those routes.

Question 1.

How is the configuration for the Videobridge different for encrypted and unencrypted streams? I am working on using the REST interface to patch conference members in and out, so it would help to know how to do this. I don't mind debugging the code and changing it if necessary (and posting it to GitHub on my own account and using the standard licence).

Question 2.

Is there somewhere, other than the source code, a specification of how the videobridge responds to various things? Alternatively, are there particular classes I should look at which have the details in them?


#2

Hi John,

As I understand it, the Videobridge can cope with unencrypted streams as part of a conference. Hence, for example, something could be played by VLC and directed as a stream at an IP address and port, and then received by the conference. Similarly, VLC could respond to an unencrypted stream from a conference. I know how to set VLC up to send and receive streams. Hence a conference could have items played by VLC (or anything else) mixed with live video from WebRTC, all made available through those routes.

Question 1.

How is the configuration for the Videobridge different for encrypted and unencrypted streams? I am working on using the REST interface to patch conference members in and out, so it would help to know how to do this. I don't mind debugging the code and changing it if necessary (and posting it to GitHub on my own account and using the standard licence).

I think you should be able to just create a channel with RawUdp transport (<transport xmlns="urn:xmpp:jingle:transports:raw-udp:1">) and it will not use DTLS. I'm not aware of anyone using these mixed together with channels with ICE, and it would be interesting to hear the results.
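As a rough illustration (not tested against the bridge; the field names follow the JSON examples that appear later in this thread, and the conference and endpoint ids are placeholders), such a channel request over the REST interface might look like:

```python
import json

# Sketch of a COLIBRI REST payload asking for a raw-udp channel instead of
# ICE/DTLS. The ids below are placeholders, not real conference state.
def make_raw_udp_channel_patch(conference_id, endpoint_id):
    return {
        "id": conference_id,
        "contents": [
            {
                "name": "audio",
                "channels": [
                    {
                        "expire": 60,
                        "initiator": True,
                        "endpoint": endpoint_id,
                        "direction": "sendrecv",
                        "rtp-level-relay-type": "mixer",
                        # The transport namespace selects raw UDP, so no DTLS
                        "transport": {
                            "xmlns": "urn:xmpp:jingle:transports:raw-udp:1"
                        },
                    }
                ],
            }
        ],
    }

print(json.dumps(make_raw_udp_channel_patch("conf-id", "endpoint-id"), indent=2))
```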

Question 2.

Is there somewhere, other than the source code, a specification of how the videobridge responds to various things? Alternatively, are there particular classes I should look at which have the details in them?

I don't understand the question. What kinds of "various things"? There is some documentation in the "doc/" directory.

Boris

···

On 13/01/2017 04:14, John Hemming wrote:


#3

I don't understand the question. What kinds of "various things"? There is some documentation in the "doc/" directory.

Things like what codecs Jitsi can handle, and what the various transports are and their XMPP names.

An example of the complete series of data transmitted for a successful link between the videobridge and Chrome.

What should go in the various fields in the JSON for the REST interface - some things are obvious others are not (although I now have the answer on channel bundle id).

I would be pleased to put this together for the REST interface when I have got it to work, as by then I will have had to find quite a bit of this.

What I would probably do for this is to create a transaction that gives this sort of information by querying the code, so it would not end up out of sync with the videobridge (probably as a web page from Jetty rather than JSON). If you would like me to do this I would be pleased to (once I can get it to work).

···

On 14/01/2017 04:01, Boris Grozev wrote:


_______________________________________________
dev mailing list
dev@jitsi.org
Unsubscribe instructions and other list options:
http://lists.jitsi.org/mailman/listinfo/dev


#4

Hi,

I don't understand the question. What kinds of "various things"? There is some documentation in the "doc/" directory.

Things like what codecs Jitsi can handle, and what the various transports are and their XMPP names.

For audio this depends on the rtp-level-relay-type. For "translator", anything should work. For "mixer", since it requires re-encoding, the only supported codecs are the ones that libjitsi supports. These include opus, G722, G711 and others (see MediaUtils.java for a complete list).

For video, without simulcast, any codec should probably work. For simulcast we are currently limited to VP8, because we need to understand some very small parts of the stream (e.g. is a packet the beginning of a keyframe?). This should be easily extendable to other codecs.

An example of the complete series of data transmitted for a successful link between the videobridge and Chrome.

In light of the question below, I assume this refers to the media path. I would refer to any document which describes this for WebRTC. Short version: first ICE, then DTLS, then media. Keep the ICE consent fresh.

What should go in the various fields in the JSON for the REST interface
- some things are obvious others are not (although I now have the answer
on channel bundle id).

The JSON format used for REST is really just a translation of the XMPP-based COLIBRI protocol. So, for documentation of the content and its semantics, see here (I think it is slightly outdated now, but nothing major is missing):
http://xmpp.org/extensions/xep-0340.html

The translation to JSON is documented here:
https://github.com/jitsi/jitsi-videobridge/blob/master/doc/rest-videobridge.md
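As a rough illustration of that translation (the authoritative mapping is in rest-videobridge.md, which this sketch has not been checked against), the general pattern is that COLIBRI XML attributes become JSON fields and a child element's namespace becomes an "xmlns" field:

```python
import json
import xml.etree.ElementTree as ET

# A toy converter for one <channel> element, illustrating the XML-to-JSON
# pattern only; it is not the bridge's actual translation code.
colibri_xml = """
<channel endpoint="89b40ae" initiator="true" expire="60">
  <transport xmlns="urn:xmpp:jingle:transports:raw-udp:1"/>
</channel>
"""

RAW_UDP_NS = "urn:xmpp:jingle:transports:raw-udp:1"

def channel_xml_to_json(xml_text):
    el = ET.fromstring(xml_text)
    chan = dict(el.attrib)              # attributes become JSON fields
    chan["expire"] = int(chan["expire"])
    chan["initiator"] = chan["initiator"] == "true"
    if el.find("{%s}transport" % RAW_UDP_NS) is not None:
        chan["transport"] = {"xmlns": RAW_UDP_NS}  # namespace becomes "xmlns"
    return chan

print(json.dumps(channel_xml_to_json(colibri_xml), indent=2))
```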

I would be pleased to put this together for the REST interface when I have got it to work, as by then I will have had to find quite a bit of this.

What I would probably do for this is to create a transaction that gives this sort of information by querying the code, so it would not end up out of sync with the videobridge (probably as a web page from Jetty rather than JSON). If you would like me to do this I would be pleased to (once I can get it to work).

I think documentation would be a welcome contribution, but I cannot guarantee that it will be included.

Regards,
Boris

···

On 14/01/2017 02:29, John Hemming wrote:


#5

Thank you for this. It helps quite a bit, but it is obvious I am going to get much more into the code than I am at the moment.

When I said:

>An example of the complete series of data transmitted for a successful
>link between the videobridge and chrome.

I only really meant the JSON transmitted and the SDP sent to Chrome (attached as the remote description); the rest is not open to me making a mistake with it.

Additionally, I have now got into the Chrome WebRTC internals and am using Sawbuck to look at the real-time Chrome logs. Those tell me more (including that a stream is set up for audio and video, but no traffic is on it), and that ICE pings don't seem to be getting through. I might set the bridge up on a device separate from the client so I can watch it with a packet sniffer.

However, I am going to have to get the videobridge to tell me more about what it is doing. I will also write some routines to dump the objects created from the JSON to the log so I can be certain what the video bridge thinks is going on.

Does anyone know of anyone who is using the REST interface to operate the video bridge that I could talk to?

···

On 14/01/2017 16:34, Boris Grozev wrote:



#6

Further on this: I have been looking at the issue of a conference with participants on mixed transports, using the REST interface. I have submitted a patch request to the bridge with a transport of "urn:xmpp:jingle:transports:raw-udp:1". It responds with ICE.

Hence I have started looking at the code. As far as I can see, it would not cope with anything other than the ICE transport as far as reading the JSON goes. Obviously that is different in terms of the conference processing, but it looks like the JSON interface is (currently) incapable of handling alternative transports.

Question.

1. Has anyone successfully tested the JSON interface with any transport other than ICE?

I am entirely happy if the answer is no. It would just be good to know whether I am right in thinking it wouldn't work.

Now that I understand the system at a lower level, I would think it would cope with a mixture of transports, but only if it can be told that there is a mixture of transports.

2. If I am going to put in raw-udp would there be anything else that people would wish coded at the same time?


#7

A key question on this, however, is whether raw-udp ever works with channel bundles or whether it only works if there is a single channel allocated to an interface.

I assume it will do rtcp-mux.

···

On 01/02/2017 14:53, John Hemming wrote:



#8

It will not do rtcp-mux or bundle.

Boris

···

On 01/02/2017 10:00, John Hemming wrote:



#9

There are other unencrypted transports. I note that none of them are coded into the JSON interface whereas raw-udp is.

If I were to wish to use, say, rtcp-mux, would that be a bad idea, a good idea, or an idea of no real consequence?

In the meantime, at least that explains why, when I tried to patch a raw-udp channel bundle in, it came back with ICE.

···

On 01/02/2017 16:31, Boris Grozev wrote:



#10

That seems to work in terms of the patching if nothing else.

{
   "id": "46d97eff2d911e16",
   "contents": [
     {
       "name": "audio",
       "channels": [
         {
           "expire": 60,
           "initiator": true,
           "endpoint": "89b40ae",
           "direction": "sendrecv",
           "rtp-level-relay-type": "mixer",
           "transport": {
             "xmlns": "urn:xmpp:jingle:transports:raw-udp:1"
           }
         }
       ]
     }
   ]
}
======================= then the response from the raw rtp patch ====================
{
   "contents": [
     {
       "channels": [
         {
           "endpoint": "89b40ae",
           "sources": [
             636162532
           ],
           "rtp-level-relay-type": "mixer",
           "expire": 60,
           "initiator": true,
           "id": "e6900aa4f01d93fb",
           "transport": {
             "candidates": [
               {
                 "generation": 0,
                 "component": 1,
                 "port": 10001,
                 "ip": "192.168.2.220",
                 "id": "1",
                 "type": "host"
               },
               {
                 "generation": 0,
                 "component": 2,
                 "port": 10002,
                 "ip": "192.168.2.220",
                 "id": "2",
                 "type": "host"
               }
             ],
             "xmlns": "urn:xmpp:jingle:transports:raw-udp:1"
           },
           "direction": "recvonly"
         }
       ],
       "name": "audio"
     }
   ],
   "id": "46d97eff2d911e16"
}

···

On 01/02/2017 16:31, Boris Grozev wrote:



#11

I don't see any important differences for RAW-UDP.

Boris

···

On 01/02/2017 10:48, John Hemming wrote:



#12

Fair point. If everything that can do rtcp-mux can also do it without the mux, then everything can interoperate. I will stick with raw-udp.

I think I now understand what happens with the ICE links into the bridge, in that the video for the last-n participants is forwarded to all participants and the audio is mixed.

Question 1.

Is there a single mix of audio or does it mix everything but the person who it is being sent to?

Question 2.

Can audio also operate through translation (as an alternative to mixing)?

Question 3.

Does the direction have any effect?

The "direction" that is sent to and from the bridge does not superficially appear to have any effect, as the video is always sent to the bridge on one SSRC and comes back on separate SSRCs that are not known at the time the participant is patched into the bridge.

Similarly the situation with audio is unclear. If a mix is being sent back regardless then I cannot see how it has any effect.

I am going to experiment with audio for a while because the bandwidth is lower and I think one of my problems is linked to bandwidth - although it shouldn't be.

···

On 01/02/2017 17:40, Boris Grozev wrote:



#13

Fair point. If everything that can do rtcp-mux can also do it without the mux, then everything can interoperate. I will stick with raw-udp.

I think I now understand what happens with the ICE links into the bridge, in that the video for the last-n participants is forwarded to all participants and the audio is mixed.

Question 1.

Is there a single mix of audio or does it mix everything but the person
who it is being sent to?

The latter (separate mix for everyone).
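The mixing just described, a separate everyone-but-me mix per recipient, can be sketched as follows (toy integer samples; real mixing works on decoded audio with clipping and resampling, which this ignores):

```python
# N-1 mixing: each participant receives the sum of all the other
# participants' samples, never their own.
def n_minus_one_mixes(streams):
    """streams maps participant id -> list of samples (equal lengths)."""
    length = len(next(iter(streams.values())))
    return {
        p: [sum(streams[q][i] for q in streams if q != p)
            for i in range(length)]
        for p in streams
    }

mixes = n_minus_one_mixes({"a": [1, 1], "b": [2, 2], "c": [3, 3]})
# "a" hears b+c, "b" hears a+c, "c" hears a+b
```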

Question 2.

Can Audio also operate through translation? (as an alternative to mixing)

Yes, and this is indeed the mode we mostly use; I thought it was the default. Pass in rtp-level-relay-type=translator to use it.

Question 3.

Does the direction have any effect?

This might be outdated, but the original idea was to enable receive-only endpoints (i.e. drop any media, if they decide to send it). I don't think we've ever actually used it though, so it might have been broken at some point.

Boris

···

On 01/02/2017 13:14, John Hemming wrote:


#14

As far as linking to the bridge using raw-udp goes, what is necessary for the second patch?

The first patch gets from the bridge the transport and two ports, for RTP and RTCP. The question then is how a link is established.

Question 1.

Would any connection to those two ports then establish a link, or is there a need to patch to the bridge the IP address and port of the two ports on the other device that are going to connect to the bridge?

Question 2.

Secondly, if someone connects to the bridge via those two ports, are they then "used", or are they like listening ports that could kick off another participant? (One would assume the former.)

···

On 01/02/2017 19:49, Boris Grozev wrote:



#15

I have been experimenting with patching and seeing what is created. For ICE it is only on the second patch that the multiplexing sockets are created.

It is not surprising that it does not appear that any datagram sockets are created for RTP, as I am not doing anything sensible with the second patch.

I have to assume, therefore, that a second patch is required and there is some data that needs to be in it. I would assume that payload types are needed, as packets are otherwise filtered out in the sockets, although that observation related to the multiplexing sockets.

I am slightly concerned about simply giving it "candidates" where I got candidates, as obviously there are the local binding addresses and the remote binding addresses. It is, of course, possible that they are called the same thing.

Any hints?

···

On 02/02/2017 07:07, John Hemming wrote:



#16

I think I have worked this out now from this:

     /**
      * Whether this {@link RtpChannel} should latch on to the remote address of
      * the first received data packet (and control packet) and only receive
      * subsequent packets from this remote address.
      * We want to enforce this if RAW-UDP is used. When ICE is used, ice4j does
      * the filtering for us.
      */
     private boolean verifyRemoteAddress = true;

However, it would be helpful to know what I need to put in on the second patch to make it link up. I am going to work on the payload numbers. I think those are relatively obvious, as without them packets would be ignored just after hitting the bridge.
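For what it's worth, the latching described in that comment amounts to something like the following (a toy sketch, not the bridge's actual code; with ICE, ice4j does the equivalent filtering):

```python
# Latch on to the source address of the first packet received on a socket
# and drop packets arriving from any other address afterwards.
class LatchingReceiver:
    def __init__(self):
        self.remote = None  # (ip, port) of the first sender, once seen

    def accept(self, addr):
        if self.remote is None:
            self.remote = addr  # first packet wins; latch to its source
            return True
        return addr == self.remote

rx = LatchingReceiver()
assert rx.accept(("192.168.2.220", 10001))      # first sender is latched
assert not rx.accept(("192.168.2.99", 10001))   # anyone else is dropped
```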

···

On 02/02/2017 09:51, John Hemming wrote:



#17

The following status report is a bit garbled, but it does demonstrate that the stats on a RawUdpTransportManager are responding to an external RTP stream. I have tested this sufficiently to know that it is definitely responding to a stream from VLC. I am in fact streaming "Take the A Train" using the Opus codec.

What it is doing with it and where it then goes is a completely different issue. It is entirely possible that it is handling it correctly and broadcasting it in some form to the other participants. I would have no reason to believe otherwise.

org.jitsi.videobridge.AudioChannel Endpoint: 1e740825 Id:7abbe21b64fc015b cb-id:null sources localinitial=4091019625 remote=1601365528 and =40910196251601365528CN:User@JAMH-I7Send Stream Count 0 RawUdpTransportManager Bytes rec 719792 Bytes sent 0 Pkts recd 2168 Pkts sent 0 13:48:10 13:48:10
org.jitsi.videobridge.AudioChannel Endpoint: 364915aa Id:356824b26844fe9e cb-id:364915 sources localinitial=1899317779 remote=-1 and =CN:User@JAMH-I7Send Stream Count 0 IceUdpTransportManager Bytes rec 0 Bytes sent 0 Pkts recd 0 Pkts sent 0 01:00:00 13:48:09
org.jitsi.videobridge.AudioChannel Endpoint: 364915aa Id:c823335d6b025b47 cb-id:364915 sources localinitial=3130904285 remote=2224762053 and =2224762053CN:User@JAMH-I7Send Stream Count 1CN:User@JAMH-I7ssrc: 3130904285 IceUdpTransportManager Bytes rec 1883145 Bytes sent 428462 Pkts recd 18857 Pkts sent 18620 13:48:10 13:48:10
videochannels:2

···

On 02/02/2017 09:51, John Hemming wrote:
