
Re: [lwip-users] Automatic Rx DMA ring replenish

From: web
Subject: Re: [lwip-users] Automatic Rx DMA ring replenish
Date: Thu, 03 Nov 2011 19:19:49 +0100

Finding a more elegant solution to the deadlock issue in Ethernet drivers was my original reason for trying this out. But the (as yet unverified) performance increase is a bonus.
I mentioned both advantages in the submitted patch.

I have not yet done any performance measurements. This particular optimization is difficult to measure, since it cannot be observed as a decreased latency between two events.
Instead, the number of CPU cycles spent per second for a certain traffic load must be measured. A much more difficult measurement...
I would need to find a way to produce a perfectly controlled traffic load, and then measure the exact CPU load.

The patch decreases the amount of code executed in the processing of a pbuf, so there should be a performance increase. But it might not be a big difference.

Timmy Brolin

On 3 nov 2011 17:15 "Bill Auerbach" <address@hidden> wrote:



I commented based on the “This should improve performance” statement in Timmy’s message. The intent was not to invalidate the change or deem it unnecessary, only to not have performance be a reason one heads off in the direction of adding this feature to their port. I could well be wrong about performance too – this is merely an opinion based on code review.




From: address@hidden [mailto:address@hidden On Behalf Of Simon Goldschmidt
Sent: Thursday, November 03, 2011 9:47 AM
To: Mailing list for lwIP users
Subject: Re: [lwip-users] Automatic Rx DMA ring replenish


"Bill Auerbach" <address@hidden>:

I haven’t benchmarked to be able to provide factual data, but I’ve done a lot of optimization and tweaking of lwIP to improve bandwidth and my study of pbufs and memory pools did not show the need for improvement considering all of the other things required to handle a TCP connection.


I saw the patch not as a speed optimization but as a way to simplify the netif driver: to provide a robust implementation, a netif driver has to ensure it retries if pbuf_alloc fails because the pool is empty (otherwise the Rx ring can starve and deadlock). This patch prevents a netif driver from having to implement some sort of retry timer, because the driver simply gets notified when pbufs are available again.


