
Re: [Discuss-gnuradio] USRP2 eth_buffer

From: Juha Vierinen
Subject: Re: [Discuss-gnuradio] USRP2 eth_buffer
Date: Wed, 22 Apr 2009 23:06:19 +0000

> Try setting your application to run using real-time scheduling
> priority.  This is done in C++ via a call to:
> gr_enable_realtime_scheduling()

I am using this.

> We use the Linux kernel packet ring method of receiving packets from
> sockets.  This is a speed optimized method that maps memory in such a
> way that the kernel sees it as kernel memory and the user process sees
> it at its own memory, so there is no copying from kernel to user
> space.  It also lets us receive multiple packets with one system call.
>  (At full rate, we process about 50 packets per system call.)
> The kernel maintains a ring of pointers to pending packets, and these
> ring descriptors must be stored in one kernel memory region.  These
> memory regions are of MAX_SLAB_SIZE, and each descriptor is
> sizeof(void*).  So the tp_block_nr variable calculates the number of
> possible packets by dividing the buffer length by the block size, and
> if that is more than can be stored in MAX_SLAB_SIZE, it reduces it to
> the limit that imposes.

Doesn't this apply only to pre-2.6.5 and pre-2.4.26 kernels? At least
that is what Documentation/networking/packet_mmap.txt says.

BTW, shouldn't MAX_SLAB_SIZE be 131072 (2^17) instead of 131702? The
latter isn't a power of two and looks like a digit transposition.

> So you probably aren't using all 500 MB of that memory.  You can
> uncomment the debug printf in that part of code to see the number of
> blocks actually allocated.

I think I'm using it all. I removed the MAX_SLAB_SIZE constraint and
it still works in mmapped mode. The setsockopt still succeeds and the
data looks OK.
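For reference, the setsockopt in question is the PACKET_RX_RING option on an AF_PACKET socket. A minimal sketch of the setup, with illustrative sizes (this is not the libusrp2 code itself, and it needs CAP_NET_RAW, so it fails with EPERM for an unprivileged user):

```cpp
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>   // htons
#include <unistd.h>
#include <cerrno>
#include <cstring>
#include <cstdio>

// Open a raw packet socket and attach a PACKET_RX_RING to it.
// Returns the socket fd on success, -1 (with errno set) on failure.
int open_packet_ring(unsigned block_size, unsigned block_nr,
                     unsigned frame_size)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    struct tpacket_req req;
    std::memset(&req, 0, sizeof(req));
    req.tp_block_size = block_size;   // bytes per block (multiple of page size)
    req.tp_block_nr   = block_nr;     // number of blocks in the ring
    req.tp_frame_size = frame_size;   // bytes per frame slot
    req.tp_frame_nr   = (block_size / frame_size) * block_nr;

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
                   &req, sizeof(req)) != 0) {
        close(fd);
        return -1;
    }
    return fd;   // the caller would mmap() the ring next
}
```

The kernel validates the tpacket_req fields at setsockopt time, which is why the call succeeding after removing the MAX_SLAB_SIZE clamp is meaningful evidence that the larger ring was actually accepted.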

> What tends to happen if you aren't running your user process as RTPRIO
> is that the libusrp2 driver grabs the packets from the kernel okay,
> but your flowgraph doesn't read them from the driver fast enough, and
> you get backed up into an overflow.

This is exactly the problem. On average the disk bandwidth is more
than enough, but there are fairly large "hiccups" that cause the
buffer to overrun. I could write my own buffer, but that would add
one extra memory copy; I'd prefer a large kernel-space buffer to a
large user-space buffer.

