Maybe this is a dumb question, but I've been wondering this for a while. Why does PBUF_POOL_SIZE have to be so large? It's been recommended to set it at 16 or more. Say I have MSS set to the Ethernet maximum of 1460, TCP_WND set to 2*MSS, and PBUF_POOL_BUFSIZE at about 1520. In that case, less than 3KB of data can be in flight at a time, and I have no more than one connection at a time, so why would I need any more than about 3 or 4 pbufs if they are for Rx only?
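For reference, here's a sketch of the lwipopts.h settings I'm describing; the numbers are the ones from my setup above, and the PBUF_POOL_SIZE value is the one I'm questioning, not a recommendation:

```c
/* lwipopts.h fragment -- the configuration described above, not advice. */
#define TCP_MSS            1460              /* maximum segment size on Ethernet */
#define TCP_WND            (2 * TCP_MSS)     /* 2920 bytes can be in flight */
#define PBUF_POOL_BUFSIZE  1520              /* one full Ethernet frame per pbuf */

/* Back-of-the-envelope: TCP_WND / PBUF_POOL_BUFSIZE is just under 2, so
 * roughly two full-size Rx segments fit in the window at once; a couple of
 * spares for ARP/ICMP and in-flight ISR allocations gives the 3-4 figure. */
#define PBUF_POOL_SIZE     4                 /* vs. the usual recommendation of 16+ */
```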
I know the easy thing to do is enable LWIP_STATS and see what the maximum usage is in my application, but I still have some other issues to work out first. It's still dropping packets. I reported earlier that this was caused by removing a global interrupt disable/enable from sys_arch, but it seems that didn't really fix it. And to make matters worse, enabling enough debugging to see what is occurring mostly fixes the problem! Incidentally, defining LWIP_PLATFORM_DIAG(x) as a short delay also fixes the problem. So I'm just trying to get a handle on some of the basics first. Right now I'm concentrating on the Stellaris Ethernet driver as the possible culprit.
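In case it helps anyone reproduce this, the "diag as a short delay" workaround I mean looks roughly like the following (in my cc.h); the loop count is arbitrary, just long enough to mask the problem:

```c
/* cc.h fragment -- sketch of the workaround mentioned above: replace the
 * diagnostic output with a short busy-wait.  The count is a guess; anything
 * that inserts a comparable delay seems to hide the packet drops. */
#define LWIP_PLATFORM_DIAG(x)  do {                        \
    volatile unsigned long i;                              \
    for (i = 0; i < 1000; i++) { /* busy-wait */ }         \
  } while (0)
```

The fact that a plain delay masks the drops just as well as actual printing is part of why I suspect a timing race in the driver rather than pool exhaustion.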