[jitsi-dev] Performance bottleneck in OutputDataStreamImpl


#1

When I run larger conferences, approaching 30+ participants, I run into
packet loss problems. There are two threads per conference (per modality,
if I read the code correctly) running
org.jitsi.impl.neomedia.rtp.translator.OutputDataStreamImpl. At least one
of them becomes 100% occupied with forwarding packets through the
conference. SendThreads have been activated to do the actual packet
transmission for the outgoing RTP connectors, but that does not help.
Total CPU consumption is not the problem.

To me it seems those two threads are mostly doing buffer shuffling.
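
For what it's worth, the saturated thread is easy to spot with a per-thread
CPU view plus a thread dump, along these lines (the PID is a placeholder):

  top -H -p <jvb-pid>                                     # per-thread CPU usage of the JVB process
  jstack <jvb-pid> | grep -B2 -A10 OutputDataStreamImpl   # stack frames of the forwarding thread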

So my question is: is there a configuration option somewhere to change the
setup of these threads, or have you looked into this before?

Sincerely
Olof Kallander
Symphony

Disclaimer

The information contained in this communication from the sender is confidential. It is intended solely for use by the recipient and others authorized to receive it. If you are not the recipient, you are hereby notified that any disclosure, copying, distribution or taking action in relation of the contents of this information is strictly prohibited and may be unlawful.


#2

Olof,

What is the CPU consumption of the JVB itself in this case? The thread that
becomes 100% occupied is for RTP forwarding; the other one is for RTCP
forwarding. Does increasing
RTPConnectorOutputStream.PACKET_QUEUE_CAPACITY solve the packet loss
problem?
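
I have not verified the exact property name, but if it follows the usual
libjitsi convention (fully-qualified class name plus field name), the line
in sip-communicator.properties would look something like this, with the
value picked only as an example:

  org.jitsi.impl.neomedia.RTPConnectorOutputStream.PACKET_QUEUE_CAPACITY=1024

Please double-check it against the RTPConnectorOutputStream source.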

Best regards,

/Kaiduan


--
Founder of Goodstartsoft
https://www.goodstartsoft.com


#3

Hi Olof,

rtp.translator.OutputDataStreamImpl has a queue to which all packets
received from endpoints are added. Its thread reads from that queue and
writes to a set of per-endpoint streams. If this thread cannot process
packets quickly enough, the threads adding packets to the queue will drop
them, and rtp.translator.OutputDataStreamImpl will log messages like
"Dropped xxx packets hashCode=yyy". Do you see any of these in your logs?

Regards,
Boris



#4

Hi Olof,

What's your UDP buffer size/backlog configuration? If you haven't tweaked
that, you can try increasing it a bit and see if that helps.

This goes in the sip-communicator.properties file:
org.ice4j.ice.harvest.AbstractUdpListener.SO_RCVBUF=104857600

These are sysctl values:
net.core.rmem_max=104857600
net.core.netdev_max_backlog=100000
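
You can apply those immediately with sysctl -w and persist them in
/etc/sysctl.conf (or a file under /etc/sysctl.d/), so they survive a
reboot:

  sysctl -w net.core.rmem_max=104857600
  sysctl -w net.core.netdev_max_backlog=100000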

If that doesn't help, then could you please share a bit more about your
measurement methodology and the numbers that you're obtaining? Also please
include your sip-communicator.properties file and JVB logs.

Cheers,
George



#5

I'm sorry for top-posting over Kaiduan; he brings up a good point, so you
should try his suggestion as well.
