lwip-devel

Re: [lwip-devel] How many MEMP_NUM_TCP_SEG needed?


From: narke
Subject: Re: [lwip-devel] How many MEMP_NUM_TCP_SEG needed?
Date: Wed, 28 Mar 2012 14:59:25 +0800

On 28 March 2012 14:54, Simon Goldschmidt <address@hidden> wrote:
> narke <address@hidden> wrote:
>> I set MEMP_NUM_TCP_SEG to the same value as TCP_SND_QUEUELEN.  I thought
>> this should be okay because I always disable the Nagle algorithm and
>> always call tcp_output() after calling tcp_write().  So, in that case,
>> each send should result in one TCP segment, and MEMP_NUM_TCP_SEG should
>> match the current send queue length.  However, in some of my tests I
>> observed that my pcb's snd_queuelen did not go beyond TCP_SND_QUEUELEN,
>> yet the TCP_SEG memory pool got used up -- allocation failed and the err
>> counter of lwip_stats.memp[x] increased.
>>
>> How should I understand this, and how do I choose the MEMP_NUM_TCP_SEG constant?
>
> - Are you using more than one pcb? MEMP_NUM_TCP_SEG is a global value, while 
> TCP_SND_QUEUELEN is per pcb.

Yes, there is only one active pcb in my application.  There is also a
listen pcb, but after the connection was accepted, the listen pcb was
closed.

> - When TCP_QUEUE_OOSEQ is enabled, each incoming out-of-sequence segment
> requires one MEMP_TCP_SEG.

I have TCP_QUEUE_OOSEQ disabled.

> - Finally, segments including flags only (SYN/FIN) need one MEMP_TCP_SEG, too.
>

But the MEMP_TCP_SEG entries occupied by SYN/FIN segments should have been
released by the time the connection reaches the ESTABLISHED state, right?


> Simon
>
> _______________________________________________
> lwip-devel mailing list
> address@hidden
> https://lists.nongnu.org/mailman/listinfo/lwip-devel



-- 
Life is the only flaw in an otherwise perfect nonexistence
    -- Schopenhauer

narke
public key at http://subkeys.pgp.net:11371 (address@hidden)


