
Re: [lwip-users] lwip does not ack retransmissions


From: Jonathan Larmour
Subject: Re: [lwip-users] lwip does not ack retransmissions
Date: Wed, 28 Jan 2009 00:45:43 +0000
User-agent: Thunderbird 1.5.0.12 (X11/20070530)

john bougs wrote:
> 
> 1) Yes, I would guess that my application is occasionally going off and
> erasing a flash sector (not sure how long, but < 400 ms for the whole
> device). So that causes some queueing.

Is it possible that your driver holds on to the packets, and then presents
them to lwIP in reverse order? I'm just wondering about the effect of
TCP_QUEUE_OOSEQ.

> 2) I checked and added a bunch of code to monitor my releasing of pbufs,
> and I am doing that correctly.  I am not holding any pbufs, but
> something else grabs them all when everything goes haywire.

Good.

> 3) Yes I added the LWIP_STATS_DISPLAY and all the pbufs are being used,
> so this look like it is the cause of the problem.
> 
> LINK xmit: 130 rexmit: 0 recv: 125 fw: 0 drop: 9
> MEM PBUF_POOL avail: 8 used: 8 max: 8 err: 9

I'm sure it is then.

> 4) I disabled TCP_QUEUE_OOSEQ and that seems to resolve the problem. (or
> does it just hide it?)

Yes, it's sort of hiding it, hence my wondering about how queued packets
are given to lwIP. But at the same time, this isn't necessarily such a bad
form of hiding, given the alternative!

>  Shouldn't the TCP code know that it's out of
> pbufs and free some of them?  Or is that something that was left out to
> keep the product lightweight?

I think you're probably right that it should, but right now it doesn't. It
only frees such packets in the TCP slow timer function, which is (supposed
to be) called every 500 ms. The out-of-sequence packets are only freed after
six times the retransmission timeout (RTO). The RTO is a bit of a funny
figure, as it depends on the round-trip time and is calculated in a complex
way, including a dependency on past retransmissions. But to give you an
idea of the order of magnitude, the initial RTO would correspond to an
18-second delay before out-of-sequence segments are freed. That's a long
time to be out of pbuf space, so I think this is worth fixing up.

There isn't a clean way to make lwIP free OOSEQ packets if we run out of
pbufs, but that's ok - one of lwIP's design principles is not to let
layering get too much in the way of efficient operation :-).

So perhaps you could submit a task at
http://savannah.nongnu.org/projects/lwip to sort this out so it doesn't get
forgotten. But thinking about it, it shouldn't be that difficult really...
in fact I speculate it would just involve adding something like the
following in pbuf.c:

#if TCP_QUEUE_OOSEQ
#include "lwip/tcp.h"

/* Forward declaration so the macro can be used before the definition below. */
static struct pbuf *alloc_pbuf_pool(void);

#define ALLOC_PBUF(p) do { (p) = alloc_pbuf_pool(); } while (0)
#else
#define ALLOC_PBUF(p) do { (p) = memp_malloc(MEMP_PBUF_POOL); } while (0)
#endif

#if TCP_QUEUE_OOSEQ
/* Attempt to reclaim some memory from queued out-of-sequence packets */
/* It's better to give priority to new packets if we're running out. */
static struct pbuf *
alloc_pbuf_pool(void)
{
  struct tcp_pcb *pcb;
  struct pbuf *p;

retry:
  p = memp_malloc(MEMP_PBUF_POOL);
  if (NULL == p) {
    for (pcb=tcp_active_pcbs; NULL != pcb; pcb = pcb->next) {
      if (NULL != pcb->ooseq) {
        tcp_segs_free(pcb->ooseq);
        pcb->ooseq = NULL;
        goto retry;
      }
    }
  }
  return p;
}
#endif /* TCP_QUEUE_OOSEQ */

Then change the two calls to memp_malloc(MEMP_PBUF_POOL) in pbuf_alloc() to
ALLOC_PBUF(p).

If you don't mind giving this a spin, that would be great. I'm not in a
position to test it (I've just written it off the top of my head).

Jifl
-- 
eCosCentric Limited      http://www.eCosCentric.com/     The eCos experts
Barnwell House, Barnwell Drive, Cambridge, UK.       Tel: +44 1223 245571
Registered in England and Wales: Reg No 4422071.
------["Si fractum non sit, noli id reficere"]------       Opinions==mine



