Re: [lwip-users] Ethernet - all packets over 32 dropped


From: bobbyb
Subject: Re: [lwip-users] Ethernet - all packets over 32 dropped
Date: Mon, 19 Oct 2009 13:15:46 -0700 (PDT)

It sends 32 packets, each 1500 bytes total including all headers. Changing
it to PBUF_RAM had no effect :(



address@hidden wrote:
> 
> bobbyb wrote:
>> Iperf is a common application used for testing network bandwidth
>> (http://en.wikipedia.org/wiki/Iperf for more details). xapp1026 provides
>> a utxperf which allows you to set up an iperf server which basically
>> just spams packets as fast as it can to determine maximum bandwidth.
>> This application works fine for me.
>>
> I knew what iperf is, but I didn't know it was ported to lwIP (and
> especially the raw API).
>> I am using the raw API with my pbufs set up just as in the iperf example -
>> pbuf_alloc(PBUF_RAW, SEND_BUFSIZE, PBUF_POOL).
> Although that might not have anything to do with your problem, using
> PBUF_POOL for TX is not a good idea: at least with TCP, you risk
> deadlocks when running out of pbufs, as incoming ACK segments cannot be
> received to free allocated pbufs (if the pool is empty).
>> The pbuf_pool_size is set to 256 and the pbuf_pool_bufsize is set to
>> 1600. I am using lwip v1.3, which I believe is equivalent to lwip v3.0
>> with some Xilinx-specific changes. This is the exact setup I use to run
>> iperf too, which is why I'm very confused.
>>
> Try using pbuf_alloc(PBUF_RAW, SEND_BUFSIZE, PBUF_RAM) instead. Oh, and 
> you didn't say if it sends 32 bytes of data or 32 bytes total.
> 
> Simon
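
For anyone following along, here is a minimal sketch of the TX allocation
Simon suggests above, assuming a raw-API driver path like the one described
in this thread. The function name send_test_frame and the direct call
through netif->linkoutput are placeholder assumptions, not the actual
xapp1026 code; SEND-sized frames are passed in already built, headers
included, to match the 1500-bytes-total case above:

#include <string.h>

#include "lwip/pbuf.h"
#include "lwip/netif.h"

/* Sketch only: transmit one pre-built Ethernet frame (all headers already
   in place, since the PBUF_RAW layer reserves no header space). */
static err_t send_test_frame(struct netif *netif, const void *frame, u16_t len)
{
    struct pbuf *p;
    err_t err;

    /* PBUF_RAM: one contiguous buffer from the lwIP heap, so TX cannot
       drain the PBUF_POOL that the RX path relies on. */
    p = pbuf_alloc(PBUF_RAW, len, PBUF_RAM);
    if (p == NULL) {
        return ERR_MEM; /* heap exhausted; back off and retry later */
    }

    memcpy(p->payload, frame, len); /* single pbuf, payload is contiguous */

    /* Hand the frame to the driver's output function; a driver that queues
       the pbuf takes its own reference, so freeing ours here is correct. */
    err = netif->linkoutput(netif, p);
    pbuf_free(p);
    return err;
}

The point of PBUF_RAM here is that the TX payload comes from the lwIP heap
rather than the shared pool, so a burst of queued TX pbufs cannot starve the
RX path (and the incoming ACKs Simon mentions) of pool entries. The
pbuf_pool_size / pbuf_pool_bufsize values quoted above presumably correspond
to lwIP's PBUF_POOL_SIZE / PBUF_POOL_BUFSIZE options, which only size that
RX-oriented pool.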





