RTP stats explanation


Hi all,
This question is not really related to Asterisk, but to VoIP quality
in general. I thought there are a lot of experienced people out here
who could help me with this, and our telephony platform is also
Asterisk :). Maybe I can extract a little bias from that 🙂

We are getting very poor voice quality while testing a new filtering
application of ours.

The application receives packets from the kernel using the
netfilter_queue library, inserts them into a new user-managed queue,
and performs some transformations on them, such as concatenating UDP
payloads.

The network is healthy; it is inside our lab and does not drop
packets or anything.

In our app we do not forward packets immediately. Once enough packets
have been received to increase the RTP packetization time (ptime), we
forward the merged message over a raw socket, setting the DSCP to 10
so that this time the packets can escape the iptables rules.
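For what it's worth, the DSCP part of this can be sketched as follows (my own illustration, not the poster's code; `dscp_to_tos` and `set_dscp` are hypothetical names, and an ordinary UDP socket is used here since raw sockets require root). DSCP occupies the upper six bits of the IPv4 TOS byte, so DSCP 10 corresponds to a TOS value of 40 (0x28):

```c
#include <netinet/in.h>
#include <netinet/ip.h>
#include <sys/socket.h>

/* DSCP occupies the top 6 bits of the TOS/traffic-class byte,
 * so the byte to pass to IP_TOS is simply dscp << 2. */
int dscp_to_tos(int dscp)
{
    return dscp << 2;
}

/* Set the DSCP on an IPv4 socket; returns 0 on success, -1 on error.
 * Works for UDP sockets without privileges; the poster's raw socket
 * would instead fill the TOS byte in the hand-built IP header. */
int set_dscp(int fd, int dscp)
{
    int tos = dscp_to_tos(dscp);
    return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof tos);
}
```

Note that iptables can match this with `-m dscp --dscp 10`, which is presumably how the repacketized packets are excluded from the queueing rule.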

On the client side, RTP stream analysis shows nearly every stream as
problematic. Summaries for some streams are given below:

Stream 1:

Max delta = 1758.72 ms at packet no. 40506
Max jitter = 231.07 ms. Mean jitter = 9.27 ms.
Max skew = -2066.18 ms.
Total RTP packets = 468   (expected 468)   Lost RTP packets = 0
(0.00%)   Sequence errors = 0
Duration 23.45 s (-22628 ms clock drift, corresponding to 281 Hz (-96.49%))

Stream 2:

Max delta = 1750.96 ms at packet no. 45453
Max jitter = 230.90 ms. Mean jitter = 7.50 ms.
Max skew = -2076.96 ms.
Total RTP packets = 468   (expected 468)   Lost RTP packets = 0
(0.00%)   Sequence errors = 0
Duration 23.46 s (-22715 ms clock drift, corresponding to 253 Hz (-96.84%))

Stream 3:

Max delta = 71.47 ms at packet no. 25009
Max jitter = 6.05 ms. Mean jitter = 2.33 ms.
Max skew = -29.09 ms.
Total RTP packets = 258   (expected 258)   Lost RTP packets = 0
(0.00%)   Sequence errors = 0
Duration 10.28 s (-10181 ms clock drift, corresponding to 76 Hz (-99.05%))

Any idea where should we look for the problem?
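For readers puzzling over these numbers: the "jitter" figures in tools like Wireshark's RTP stream analysis are normally the RFC 3550 interarrival jitter, a running estimate smoothed by a factor of 1/16. A minimal sketch (the function name is mine, not from any tool):

```c
#include <stdint.h>

/* RFC 3550 interarrival jitter, kept in RTP timestamp units.
 * For packets i-1 and i with arrival times R (converted to timestamp
 * units) and RTP timestamps S:
 *     D(i-1,i) = (R_i - R_{i-1}) - (S_i - S_{i-1})
 *     J(i)     = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16
 * A real implementation keeps J as a scaled integer; a double is
 * used here for clarity. */
double rtp_jitter_update(double jitter, int64_t transit_delta)
{
    double d = (double)(transit_delta < 0 ? -transit_delta : transit_delta);
    return jitter + (d - jitter) / 16.0;
}
```

For codecs on an 8000 Hz clock (G.711, G.729), dividing the result by 8 gives milliseconds, which is what figures like "Mean jitter = 9.27 ms" above report.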

4 thoughts on - RTP stats explanation

  • A maximum jitter of 230 milliseconds looks pretty horrendous to me.
    This is going to cause really serious audio stuttering on the
    receiving side, and/or will force the use of such a long “jitter
    buffer” by the receiver that the audio will suffer from an
    infuriating amount of delay. Even a local call would sound as if
    it’s coming from overseas via a satellite-radio link.

    I suspect it’s likely due to a combination of two things:

    (1) The fact that you are deliberately delaying the forwarding
    of the packets. This adds latency, and if you’re forwarding
    packets in batches it will also add jitter.

    (2) Scheduling delays. If your forwarding app fails to run its
    code on a very regular schedule – if, for example, it’s delayed
    or preempted by a higher-priority task, or if some of its code
    is paged/swapped out due to memory pressure and has to be paged
    back in – this will also add latency and jitter.

    Pushing real-time IP traffic up through the application layer like
    this is going to be tricky. You may be able to deal with issue (2)
    by locking your app into memory with mlock() and setting it to run
    at a “real-time” scheduling priority.

    Issue (1) – well, I really think you need to avoid doing this.
    Push the packets down into the kernel for retransmission as quickly
    as you can. If you need to rate-limit or rate-pace their sending,
    use something like the Linux kernel’s traffic-shaping features.

    Is there other network traffic flowing to/from this particular
    machine? It’s possible that other outbound traffic is saturating
    network-transmit buffers somewhere – either in the kernel, or in
    an “upstream” communication node such as a router or DSL modem.
    If this happens, there’s no guarantee that “high priority” or
    “expedited delivery” packets would be given priority over
    (e.g.) FTP uploads… many routers/switches/modems don’t pay
    attention to the class-of-service on IP packets.

    To prevent this, you’d need to use traffic shaping features on
    your system, to “pace” the transmission of *all* packets so that
    the total transmission rate is slightly below the lowest-bandwidth
    segment of your uplink. You’d also want to use multiple queues
    to give expedited-delivery packets priority over bulk-data packets.
    The “Ultimate Linux traffic-shaper” page would show how to
    accomplish this on a Linux system; the same principles with
    different details would apply on other operating systems.
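The mlock()/real-time-priority suggestion above might look like the sketch below (names are mine; both calls normally require root or the CAP_IPC_LOCK and CAP_SYS_NICE capabilities, so expect EPERM when run as an ordinary user):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <string.h>
#include <sys/mman.h>

/* Pin all current and future pages in RAM (no paging/swapping) and
 * switch the calling process to the SCHED_FIFO real-time scheduler.
 * Returns 0 only if both calls succeeded; -1 (with errno set) if
 * either failed, e.g. EPERM without the required privileges. */
int make_realtime(int rt_priority)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return -1;

    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = rt_priority;   /* 1..99 for SCHED_FIFO */
    return sched_setscheduler(0, SCHED_FIFO, &sp);
}
```

A SCHED_FIFO process that spins will starve the rest of the system, so the forwarding loop must always block waiting for the next packet.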

  • Won’t a cell-to-cell call experience delays in the 300ms range?

    Many moons ago I remember listening on one cell phone while tapping on
    the table next to another, and being stunned both by the magnitude of
    the delay and by the fact that most people manage to carry on
    conversations without noticing it.

  • Hi Dave,

    There is no other way than doing this, because we need enough
    packets queued up before we can do the repacketization.

    Asterisk can also do this via “allow=g729:120” in sip.conf, but we
    have seen Asterisk fail to do it under various circumstances.
    Because of this we are trying to do it before the traffic reaches
    Asterisk. We could also learn Asterisk development and then modify
    Asterisk to meet our needs, but writing a separate application seems
    more logical to me, because otherwise we would be bound to one
    platform. It is better if this “repacketization” is telephony-platform
    agnostic.

    We also have some FreeSWITCH boxes, and a proprietary platform for
    which we do not have the source code. FreeSWITCH has a severe
    limitation regarding this feature because of its dependence on the
    L16 format when communicating with the transcoder card; because of
    this it only supports up to 50 ms of ptime for G.729. The other
    platform’s vendor does not intend to support it either. But this
    feature is very critical to our
    If we were to move this application into kernel space by writing a
    kernel module, would that help? What constraints do we need to be
    aware of if we start writing a kernel module to provide this
    functionality?

    We will test it and post further results.

    Thank you for the advice.
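For readers following along, the repacketization being discussed can be sketched naively as below (my own illustration, not the poster's code: it keeps the first packet's 12-byte RTP header and appends the payloads of the following packets, which is only reasonable for fixed-frame codecs such as G.729 and ignores CSRC lists, header extensions and padding):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RTP_HDR_LEN 12   /* fixed header, assuming CC == 0, no extension */

/* Merge `n` consecutive RTP packets of one stream into `out`:
 * the header of the first packet, then all payloads back to back.
 * Returns the merged length, or 0 on error (short packet, small buffer).
 * Naive: assumes equal SSRC and payload type, consecutive sequence
 * numbers, and no padding; the merged packet keeps the first header's
 * timestamp and sequence number, so the receiver sees a larger ptime. */
size_t rtp_merge(const uint8_t *pkts[], const size_t lens[],
                 size_t n, uint8_t *out, size_t out_cap)
{
    if (n == 0 || lens[0] < RTP_HDR_LEN || out_cap < lens[0])
        return 0;

    memcpy(out, pkts[0], lens[0]);            /* header + first payload */
    size_t off = lens[0];

    for (size_t i = 1; i < n; i++) {
        if (lens[i] < RTP_HDR_LEN)
            return 0;
        size_t plen = lens[i] - RTP_HDR_LEN;  /* skip the duplicate header */
        if (off + plen > out_cap)
            return 0;
        memcpy(out + off, pkts[i] + RTP_HDR_LEN, plen);
        off += plen;
    }
    return off;
}
```

With G.729 (a 10-byte frame per 10 ms), merging twelve 20 ms packets this way would yield the 120 ms ptime mentioned above, at the cost of at least 120 ms of added buffering delay, which is consistent with the delta and jitter spikes in the captures.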