
Re: [lwip-users] p->payload == iphdr failed...


From: Ed Sutter
Subject: Re: [lwip-users] p->payload == iphdr failed...
Date: Sat, 08 Nov 2008 15:57:19 -0500
User-agent: Thunderbird 2.0.0.17 (Windows/20080914)



address@hidden wrote:
> Ed Sutter wrote:
>> Ok, I see what you're talking about in pbuf_header(). I don't know the history of the code well enough to comment on whether the change belongs in the master code base, but if the current behavior forces a driver to use a memcpy loop to build the pbuf chain, removing that limitation seems like a good improvement.
> The reason pbuf_header can't expand to the front is that it doesn't know where the memory area that a PBUF_REF points to starts. Shrinking works as long as p->len > 0, but expanding to the front doesn't work without another pointer in the pbuf struct that records the original ->payload pointer.

Why is it a problem to just add another member to the structure that keeps
track of the start of the buffer?  I'll try this out if that's the only issue
(I'm guessing there must be more to it).

> For this reason, it is not a good idea to give this 'feature' to everyone, as it could very easily lead to memory corruption.

> The way to prevent the memcpy is to allocate a PBUF_POOL pbuf (of the maximum frame size) and pass its ->payload pointer to the MAC, which can then store the received data directly at that location. This method has some disadvantages, though: a) with most MACs, you cannot use chained pbufs for receiving, and b) you cannot use this method with MACs that have internal receive buffers.

The strategy I use in my code is independent of the driver.
This is the lwIP-based server stuff I'm using as a demo application that
runs on uMon's packet API.  It seems to me that it's a reasonable thing to
be able to do.

> Just so you know: changing this _is_ on the to-do list somewhere, but I don't even know whether there is a feature request for it anywhere...

I'll look into this further.
Thanks
Ed



