[lwip-users] Driver Tx queue filling up


From: Jeff Barber
Subject: [lwip-users] Driver Tx queue filling up
Date: Thu, 22 Oct 2009 08:19:32 -0400

I have an lwIP-based FTP server built on a nearly stock lwIP 1.3.1.
When I do a GET on a large file (resulting in a high-speed
unidirectional transfer), I see roughly every 256th TCP packet being
dropped (often the first drop doesn't happen until about 512 packets
into the transfer, but after that it's very regular).  TCP eventually
recovers from each drop, but recovery takes about a second, and then
another drop happens 256 packets later.

Now 256 happens to match the size of the Tx and Rx ring buffers in my
driver, so that was the obvious place to look.  I notice that my
driver's linkoutput function is sometimes being called while its Tx
queue is full, and I suspect that's the proximate cause of the
problem.  However, shouldn't the TCP_SND_QUEUELEN (32) and TCP_SND_BUF
(8 * MSS) values limit the maximum number of "outstanding" pbufs?  I
turned on TCP_QLEN_DEBUG, and according to that output the queue
length never exceeds 8.  So why do I end up with so many packets
"in flight"?

And if I'm misunderstanding, what is the intended feedback mechanism
from the driver?  tcp_output always seems to ignore the return value
of ip_output.  If I understand correctly, that means an attempt to
send while the Tx queue is full is treated exactly the same as a
packet dropped on the wire: the stack simply relies on the
retransmission process to recover.
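
For context, my linkoutput is shaped roughly like the sketch below
(simplified, with made-up ring-buffer names rather than my actual
driver code); the question is what the stack is supposed to do when
it returns an error because the ring is full:

#include "lwip/err.h"
#include "lwip/netif.h"
#include "lwip/pbuf.h"

/* Hypothetical Tx ring state -- names are illustrative only. */
#define TX_RING_SIZE 256
static volatile unsigned tx_head, tx_tail;

static int tx_ring_full(void)
{
  return ((tx_head + 1) % TX_RING_SIZE) == tx_tail;
}

/* Installed as netif->linkoutput (lwIP 1.3.x signature). */
static err_t my_linkoutput(struct netif *netif, struct pbuf *p)
{
  struct pbuf *q;

  if (tx_ring_full()) {
    /* No free Tx descriptor: report the condition rather than
       silently dropping -- but does anything act on this return? */
    return ERR_MEM;
  }

  for (q = p; q != NULL; q = q->next) {
    /* copy q->len bytes from q->payload into the Tx descriptor here */
  }
  /* advance tx_head, kick the DMA engine, etc. */
  tx_head = (tx_head + 1) % TX_RING_SIZE;

  return ERR_OK;
}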

I'm running with these values in my opt.h:
#define TCP_MSS                     1460
#define TCP_SND_BUF                 (8 * TCP_MSS)
#define TCP_SND_QUEUELEN            (4 * (TCP_SND_BUF/TCP_MSS))

Any ideas on debugging or understanding this would be appreciated.
A packet capture demonstrating the problem is attached (see packet
#399).

Thanks,
Jeff

Attachment: ftp_snip.cap
Description: Binary data

