
[lwip-users] large latency on last TCP segment


From: David Belohrad
Subject: [lwip-users] large latency on last TCP segment
Date: Mon, 18 Mar 2013 22:41:05 +0100
User-agent: Notmuch/0.15.1+15~gd037040 (http://notmuchmail.org) Emacs/23.4.1 (x86_64-pc-linux-gnu)

Dear All,

I have an application which needs to continually send large chunks of
data, typically 40 KiB in one go, with a period of roughly 700 ms. I'm
using a BeagleBone with StarterWare, which itself deploys lwIP 1.4.

I do the standard magic:

1) wait until tcp_sndbuf() reports more than 40 KiB free
2) send the chunk using tcp_write() (with TCP_WRITE_FLAG_MORE set)
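For reference, a minimal sketch of those two steps with the lwIP raw API (tcp_sndbuf(), tcp_write(), tcp_output() are real lwIP calls; the send_chunk() wrapper and its surrounding main-loop assumptions are hypothetical, and in a real NO_SYS=1 build the busy-wait must let the stack's timers and RX path keep running):

```c
#include "lwip/tcp.h"

/* Hypothetical wrapper around steps 1) and 2) above.
 * pcb, data and len are assumed to come from the caller. */
static err_t send_chunk(struct tcp_pcb *pcb, const void *data, u16_t len)
{
    err_t err;

    /* 1) wait until the send buffer can take the whole chunk.
     * In a mainloop-style (NO_SYS=1) setup, the stack must still be
     * serviced here (timer checks, driver RX poll) or sndbuf will
     * never be freed. */
    while (tcp_sndbuf(pcb) < len) {
        /* sys_check_timeouts(); driver poll; ... */
    }

    /* 2) enqueue the chunk; TCP_WRITE_FLAG_MORE suppresses PSH so
     * further data can follow in the same stream. */
    err = tcp_write(pcb, data, len, TCP_WRITE_FLAG_MORE);
    if (err == ERR_OK) {
        err = tcp_output(pcb);  /* ask the stack to send queued segments now */
    }
    return err;
}
```
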

I have set up TCP_SND_BUF and all the other relevant options to
accommodate 65535 bytes of send buffer, and when tcp_sndbuf() is called
with an empty transmission buffer it reports slightly below 64 KiB,
probably due to some space needed for internal structures.

The trouble is that even the speediest configuration (i.e. with my
700 ms delay set to zero, sending fake data to see how much throughput
I get) experiences very weird latencies: the last segment of the large
40 KiB packet is something like 600 ms to 1.2 s late with respect to all
previous packets. Looking at it in Wireshark, I see something like this:

http://www.cern.ch/belohrad/notreceivedpackets4.png

(sorry, I hesitated to post the image directly).

This picture shows the last segment of the communication (#79), followed
by its ACK (#80), but then nothing happens for quite a long time
(#80->#81).

I have identified this 'nothing' as my polling of tcp_sndbuf() until it
reports buffer space available. It typically takes 3-5 polls of
tcp_sndbuf() to get enough buffer space.

The trouble with all this is that, as my data are >40 KiB, I cannot
shift in another tcp_write() because my data no longer fit into the
transmit buffer.

This quite drastically limits the rate at which I can shift in the
data. One option would probably be to fragment the data I want to send
into e.g. 8 KiB blocks and queue them. But that does not really solve
the situation...
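If anyone wants to see what I mean by fragmenting, a self-contained sketch of the block-splitting part (enqueue_blocks() and emit_fn are hypothetical names; in the real application emit() would wrap tcp_write()):

```c
#include <stddef.h>

/* Callback invoked once per block; in the real code this would call
 * tcp_write() on an (offset, len) slice of the payload. */
typedef void (*emit_fn)(size_t offset, size_t len);

/* Split a payload of total_len bytes into blocks of at most block_len
 * bytes, emitting each (offset, len) pair in order.  Returns the number
 * of blocks emitted. */
static size_t enqueue_blocks(size_t total_len, size_t block_len, emit_fn emit)
{
    size_t off, n = 0;
    for (off = 0; off < total_len; off += block_len) {
        size_t len = total_len - off;
        if (len > block_len)
            len = block_len;
        emit(off, len);
        n++;
    }
    return n;
}
```

A 40 KiB payload split into 8 KiB blocks yields five calls to emit(), each of which would have to wait for tcp_sndbuf() space individually, which is why this only spreads the latency out rather than removing it.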

So my question is: why does tcp_sndbuf() report available buffer space
with such latency? Is there some sort of timeout mechanism to liberate
those buffers, so that they do not get freed immediately after the ACK?
Or is it the freeing mechanism itself which takes so long? I'm not using
pool buffers, I'm using the heap.


thanks

david


