[sip-comm-dev] Re: svn commit: r7065 - trunk/src/net/java/sip/communicator/impl/neomedia/portaudio/streams


#1

Hi Werner,

While we've started implementing this Buffer data reuse in our
DataSource and Codec implementations, I'm afraid doing it for
MasterPortAudioStream and InputPortAudioStream is risky because it
effectively shares one and the same byte[] with all slave
InputPortAudioStreams of a given MasterPortAudioStream. In other
words, if there are multiple slaves, reading from one of them will steal
the audio data from the other slaves by making the
MasterPortAudioStream overwrite it with newly read data. Have you been
able to test your modifications with multiple InputPortAudioStreams on
the same MasterPortAudioStream (e.g. simultaneous calls, none of which
is on hold)?
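
To make the concern concrete, here is a tiny hypothetical sketch
(javax.media.Buffer is the real JMF class, but the scenario and names
are made up):

    import javax.media.Buffer;

    public class SharedArrayHazard {
        public static void main(String[] args) {
            // One capture array owned by a hypothetical master stream.
            byte[] shared = new byte[320];

            Buffer slaveA = new Buffer();
            slaveA.setData(shared);   // read #1: master fills `shared` for A

            Buffer slaveB = new Buffer();
            slaveB.setData(shared);   // read #2: master refills `shared` for B

            // Both Buffers reference the same array, so the audio slave A
            // was handed has silently been overwritten by slave B's read.
            System.out.println(slaveA.getData() == slaveB.getData()); // true
        }
    }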

Best regards,
Lubomir

···

On Sun, May 2, 2010 at 3:26 PM, <wernerd@dev.java.net> wrote:

Author: wernerd
Date: 2010-05-02 12:26:42+0000
New Revision: 7065

Modified:
trunk/src/net/java/sip/communicator/impl/neomedia/portaudio/streams/InputPortAudioStream.java
trunk/src/net/java/sip/communicator/impl/neomedia/portaudio/streams/MasterPortAudioStream.java

Log:
Implement Buffer data re-use for portaudio capture device.

If the Buffer has a pre-allocated data area, check whether portaudio can re-use it. Only if this
is not possible, allocate a buffer and set it in the Buffer. This reduces the use of dynamic
memory allocation and thus garbage collection.



#2

Lubo,

this is exactly the reason why I use System.arraycopy to
really copy the data instead of just setting the byte[] object.

In the existing (prior to my mods) implementation the
slave stream set the same byte[] object on all buffers. Each
read on a slave has its own independent Buffer structure and thus
its own data pointer, length, etc.

As said, my modifications do not share the same byte[] all over,
as was the case in the old implementation.

According to the JMF documentation, the stream should take care
of this issue.
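
In code, the copy-based re-use looks roughly like this (a sketch of what
the commit log describes; the class and method names here are
illustrative, not the actual r7065 source):

    import javax.media.Buffer;

    final class SlaveReadSketch {
        // Copy the master's freshly captured bytes into the caller's
        // Buffer, re-using its pre-allocated byte[] when large enough.
        static void read(Buffer buffer, byte[] masterData, int length) {
            Object data = buffer.getData();
            byte[] dest;
            if (data instanceof byte[] && ((byte[]) data).length >= length) {
                dest = (byte[]) data;        // re-use the existing array
            } else {
                dest = new byte[length];     // allocate only when needed
                buffer.setData(dest);
            }
            System.arraycopy(masterData, 0, dest, 0, length);
            buffer.setOffset(0);
            buffer.setLength(length);
        }
    }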

Regards,
Werner



#3

Just an addition: how would you implement this in the DataSource-related
classes? These classes do not deal with the Buffer structure at all;
only the BufferStream-related classes do.

Regards,
Werner



#4

Yet another addition :-) :

IMHO the re-use of buffers should be handled at the portaudio stream level,
because it is the lowest level and an implementation here then works for all
upper layers.

Also, I'm thinking of a concept to speed things up a little bit:

Currently InputPortAudioStream.read synchronizes the whole block. If another
InputPortAudioStream.read holds the mutex (in case of several calls) then only
_one_ of the waiting InputPortAudioStream.read calls can acquire the mutex and
process the Buffer, then the next, etc. This could lead to a situation where
not all InputPortAudioStream.read calls get data before the first one performs
a new read (maybe a rare case - but it may happen :-) ), which then overwrites
the data.
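
In a simplified, hypothetical form (an assumed shape, not the actual
InputPortAudioStream source), the current pattern is:

    final class CoarseLockSketch {
        private final Object master = new Object();

        void read(byte[] dest) {
            // Every slave serializes on the one master monitor for the
            // whole capture-and-copy, so a fast caller can re-enter and
            // trigger a new capture before a slow caller has copied out.
            synchronized (master) {
                captureFromPortAudio(dest);
            }
        }

        private void captureFromPortAudio(byte[] dest) {
            // placeholder for the native PortAudio read
        }
    }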

Using another way to synchronize the calls, I think we may overcome this.
I'm just drawing some call flows / synchronization flows to check my ideas.

Regards,
Werner



#5

Hi Werner,

I meant the PushBufferStream or PullBufferStream implementations of
our DataSource implementations. I just wanted to say it in a couple
of words.

Best regards,
Lubo

···

On Sun, May 2, 2010 at 4:37 PM, Werner Dittmann <Werner.Dittmann@t-online.de> wrote:

Just an addition: how would you implement this in the DataSource-related
classes? These classes do not deal with the Buffer structure at all;
only the BufferStream-related classes do.



#6

Lubo,

attached is a tar file that contains two modified portaudio streams (Input
and Master) that use a new synchronization concept.

The InputStream class uses its own synch object to synchronize its
buffers and bufferData. If no data is available, InputStream calls
the parent's (MasterStream's) read. This modified MasterStream read returns
false in case another read for this master is already active; InputStream
then waits for data on its own synch object.

If no read is active, MasterStream starts a portaudio read, fills the buffer,
and returns true. InputStream just returns the buffer in this case.

In case another read was active, the MasterStream read calls setBuffer once it
has got data from portaudio: it calls InputStream's setBuffer to feed the
data back to the waiting InputStream. setBuffer uses the same synch
object and notifies that data is ready. The InputStream read can now start
immediately to get the data and return it to its caller.

This synch concept is more fine-grained: the InputStream can start
as soon as data is available and doesn't need to wait/compete for the more
global MasterStream synch as before. Some small enhancements to the
MasterStream close method take care of an active read and close only if no
read is pending.
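
For reference, a compact sketch of the concept described above (the names
and structure are assumptions based on this mail, not the code in the
attached tar):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    final class MasterStreamSketch {
        final List<SlaveStreamSketch> slaves = new CopyOnWriteArrayList<>();
        private boolean readActive;

        // Returns false immediately if another read is already active;
        // otherwise captures once and feeds every registered slave.
        boolean read() {
            synchronized (this) {
                if (readActive)
                    return false;
                readActive = true;
            }
            byte[] data = captureFromPortAudio();
            for (SlaveStreamSketch s : slaves)
                s.setBuffer(data.clone());   // each slave gets its own copy
            synchronized (this) {
                readActive = false;
            }
            return true;
        }

        private byte[] captureFromPortAudio() {
            return new byte[320];            // placeholder for the native read
        }
    }

    final class SlaveStreamSketch {
        private final Object sync = new Object();   // per-slave monitor
        private byte[] pending;                     // data fed via setBuffer()
        private final MasterStreamSketch master;

        SlaveStreamSketch(MasterStreamSketch master) {
            this.master = master;
            master.slaves.add(this);
        }

        byte[] read() throws InterruptedException {
            synchronized (sync) {
                if (!master.read()) {        // another read is active:
                    while (pending == null)  // wait on our own monitor until
                        sync.wait();         // that read feeds us via setBuffer
                }
                byte[] data = pending;
                pending = null;
                return data;
            }
        }

        // Called by the master to hand data to this (possibly waiting) slave.
        void setBuffer(byte[] data) {
            synchronized (sync) {
                pending = data;
                sync.notify();
            }
        }
    }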

This works in my sandbox without problems. What do you think? Can you
check whether it works in your environment as well, and whether this is
an ok basis to implement the Buffer re-use?

Regards,
Werner

pstreams.tar (20 KB)



#7

Hi Werner,

Thank you very much for the explanation!

Best regards,
Lubo



#8

Hi Werner,

It seems to work as expected for me on Mac OS X and Linux!

Best regards,
Lubo

···

On Sun, May 2, 2010 at 9:20 PM, Werner Dittmann <Werner.Dittmann@t-online.de> wrote:

What do you think? Can you
check if it works in your environment as well and if this is ok to
implement the Buffer re-use?



#9

Hello,

We have tried the recently released nightly build, and it seems to work
with our Symbian OS VoIP client. We can hear voice on both ends. Speex
narrowband is used.

Nevertheless, there is a marked difference in quality. The stream from
SIP-Communicator sounds really good in our VoIP software: no lags, no
obviously missing frames. Even though the bitrate coming from
SIP-Communicator is only 8 kbps (Speex narrowband actually supports
higher bitrates, like 11 kbps, 15.2 kbps, 18.4 kbps and even 24 kbps -
all of them would be better; I do not know why SIP-Communicator selects
the 8 kbps bitrate as the default), the sound is good. No discomfort for
the listener.

On the other hand, the sound from our VoIP software is not really good
in the SIP-Communicator output. There are frequent omissions, as if a
frame was missing from the jitter buffer, and although the communication
is intelligible, it is not a pleasure to listen to it.

This is interesting, because when communicating from our VoIP client to
our VoIP client, we get better sound quality even over worse connections
(like 3G GPRS to 3G GPRS, which is vastly inferior in reliability,
bandwidth and ping to the WLAN-to-LAN connection we've been trying with
SIP-Communicator).

I wonder whether the answer may lie in the jitter buffer. Our VoIP client
does not use the jitter buffer bundled with the Speex library. Instead, we
had to implement our own jitter buffer, which is designed to cope with
irregularities in typical mobile packet service traffic, even at the
cost of introducing a 150-200 ms delay. (It is still much more pleasant
to have a slight delay in speech than to have frequent omissions - and
in typical GSM/UMTS networks, you can't get rid of both of these
nuisances at the same time.)

Are you using the jitter buffer provided by Jean-Marc Valin in the Speex
library, or are you using your own jitter buffer?

Best regards

Marian Kechlibar



#10

Hi,

we currently don't use any external jitter buffer except the
mechanisms in JMF. Until now we haven't experienced any problems with
that. Can you test with other codecs? Is the problem the same with
them?

Regards
damencho



#11

Hi Marian,

Damian and I talked offline and we thought that you may want to
experiment with different settings for the JMF jitter buffer. There's
currently the property
net.java.sip.communicator.impl.neomedia.RECEIVE_BUFFER_LENGTH, which is
read from our configuration file
~/.sip-communicator/sip-communicator.properties (the path given here
is Linux-specific). Its current default value is 100, which means 100
milliseconds. Since JMF assumes an RTP packet has a duration of 30
milliseconds, you may want to experiment with values for the property
in question which are divisible by 60. For example, add/edit the
following line in the file
~/.sip-communicator/sip-communicator.properties:
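
    net.java.sip.communicator.impl.neomedia.RECEIVE_BUFFER_LENGTH=120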

Regards,
Lubomir



#12

Hi Lubomir,

the hardcoded assumption of JMF that an RTP packet carries a 30 ms frame
does not really correspond to the real parameters of voice codecs.

First, codecs vary quite a bit in their frame sizes:
- AMR uses 20 ms frames (AMR is internally supported by Nokia
smartphones, so it is one of our choices),
- Speex uses 20 ms frames,
- iLBC can use either 20 ms or 30 ms frames,
- G.729 uses 10 ms frames.

Second, a single RTP packet may carry more than one frame of the
appropriate codec. This is actually a technique that we use in our VoIP
client, as it seems that sending 25 or 16 bigger RTP packets per second
is "easier" on the GSM/UMTS packet network than 50 shorter RTP packets.
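
A back-of-the-envelope sketch of the saving (assuming 40 bytes of
IPv4 + UDP + RTP header overhead per packet and 20 ms Speex frames; the
numbers, not the code, are the point):

    public final class PacketOverheadSketch {
        public static void main(String[] args) {
            int headerBytes = 40;   // IPv4 (20) + UDP (8) + RTP (12)
            int frameMs = 20;       // Speex narrowband frame duration
            for (int framesPerPacket : new int[] { 1, 2, 3 }) {
                int packetsPerSecond = 1000 / (frameMs * framesPerPacket);
                System.out.printf(
                        "%d frame(s)/packet -> %2d packets/s, %4d bytes/s header overhead%n",
                        framesPerPacket, packetsPerSecond,
                        packetsPerSecond * headerBytes);
            }
        }
    }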

This is a generally accepted technique, and the RFCs which describe the
embedding of codec frames into RTP packets usually discuss how to pack
multiple frames into a single RTP packet. The transmitting side does not
even have to keep the number of codec frames per RTP packet constant
during the call.

So, jitter buffers should, at least theoretically, be a little adaptive
and concentrate their functionality around frames, not RTP packets,
because frames are the atomic elements of the incoming sound.

This is probably a non-issue in the world of PC-based VoIP, where the
standard seems to be one frame per RTP packet.

But in our case, it seems clearly more effective to pack more frames
into a single RTP packet, and maybe even to change that amount
adaptively, based on the observed connection quality.

Marian



#13

Hey Marian,

On 11.05.10 11:50, Marian Kechlibar wrote:

Hi Lubomir,

the hardcoded assumption of JMF that an RTP packet carries a 30 ms frame
does not really correspond to the real parameters of voice codecs.

No, indeed it doesn't. The point is that this is how JMF converts the
configuration value into the number of packets that it actually buffers.
In the case of SC, for example, packets rarely carry exactly 30 ms.
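
A tiny sketch of that conversion and its consequence (the division by
30 is an assumption based on the figure above, not verified against the
JMF source):

    public final class JitterBufferDepthSketch {
        public static void main(String[] args) {
            int receiveBufferLengthMs = 100; // current SC default
            int assumedPacketMs = 30;        // JMF's hardcoded per-packet figure
            int bufferedPackets = receiveBufferLengthMs / assumedPacketMs; // = 3

            int actualPacketMs = 20;         // e.g. one Speex frame per packet
            System.out.println("JMF buffers " + bufferedPackets + " packets = "
                    + bufferedPackets * actualPacketMs + " ms of real audio, "
                    + "not " + receiveBufferLengthMs + " ms");
        }
    }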

So, jitter buffers should, at least theoretically, be a little adaptive
and concentrate their functionality around frames, not RTP packets,
because frames are the atomic elements of the incoming sound.

Would be nice indeed. Care to log an issue? I don't know if anyone would
be able to work on this any time soon but it's worth keeping in mind.

Emil


--
Emil Ivov, Ph.D.               67000 Strasbourg,
Project Lead                   France
SIP Communicator
emcho@sip-communicator.org     PHONE: +33.1.77.62.43.30
http://sip-communicator.org    FAX: +33.1.77.62.47.31
