
Re: [Discuss-gnuradio] Debugging overruns

From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Debugging overruns
Date: Mon, 29 Jan 2007 11:05:38 -0800
User-agent: Mutt/1.5.9i

On Mon, Jan 29, 2007 at 10:23:15AM -0800, Dan Halperin wrote:
> An unanswered question from before:
> >> Also, another (incidental) question: I get really bad performance when
> >> the fusb_options that are set when realtime is true are used.

> What are the fusb_options all about, and how can I get intuition on the
> right settings for them?

They set the amount of buffering being done in the Fast USB interface.
Under Linux it goes like this (NetBSD is similar):

  block_size is the size of the transfer made to/from the kernel.  It
  must be a multiple of 512 bytes.  Bigger block_sizes give lower
  overhead (fewer kernel calls to move a given number of bytes);
  however, if you're trying to reduce worst-case latency (particularly
  important in transceiver apps doing carrier sense), smaller values
  are better.

  nblocks is the maximum number of blocks that are scheduled for I/O
  at any time.  Under Linux we use a usbfs ioctl to asynchronously
  submit multiple requests.  If you don't care about latency, the
  default value (not specifying the fusb_nblocks ctor arg) is fine.
  If you're trying to reduce your worst-case latency, smaller values
  of nblocks are better.  The minimum I've ever seen work is 4.
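To get a feel for the trade-off, the worst case buffered latency is roughly block_size * nblocks divided by the USB byte rate. A small sketch (my assumptions, not from the thread: 32 MB/s as the USRP's full-rate USB 2.0 payload rate, i.e. 8 Msps of 16-bit complex samples; the helper name and the example settings are illustrative):

```python
def fusb_worst_case_latency(block_size, nblocks, bytes_per_sec=32e6):
    """Seconds of data that can be queued in the kernel at once.

    block_size must be a multiple of 512, per the usbfs constraint.
    """
    if block_size % 512 != 0:
        raise ValueError("block_size must be a multiple of 512")
    return block_size * nblocks / bytes_per_sec

# Generous buffering: ~2 ms of queued data at full rate.
print(fusb_worst_case_latency(4096, 16))
# Low-latency settings of the kind tunnel.py uses: ~0.25 ms.
print(fusb_worst_case_latency(2048, 4))
```

The second configuration buffers an eighth as much data, which is why carrier-sense applications prefer it despite the extra kernel-call overhead.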

When running as realtime, it's possible to run with less buffering
since the USRP library code doesn't get preempted by the X-server, etc.

The values specified in tunnel.py were found by experimentation with my
X30 and X61 laptops (1.4 GHz Pentium M and 1.8 GHz Core Duo, respectively).

> > I think you're burning up all the cycles constructing the string
> > of random bytes.  Building the string byte by byte is very expensive --
> > basically O(N^2).
> >
> > Try this:
> >
> >   from random import randint
> >
> >   def random_bytes(number):
> >     return ''.join([chr(randint(0, 255)) for x in range(number)])
> >
> >
> > Also, are you sure you're not holding onto references to old payloads
> > somewhere?  If you are, no amount of garbage collection or reference
> > counting will save you ;)

> I implemented the above change and a few other optimizations (googling
> for python optimization is an effective tactic), but the problem
> persists. It does seem to be tied to Python's randomization choking
> after randint was called ~512k times; in particular, 512k = 524288,
> and I was running into massive overruns after generating:
> ~445 1200-byte packets (1200*445 = 534000)
> ~524 1024-byte packets (1024*524 = 536576)
> ~700 768-byte packets (768*700 = 537600)
> Old payloads are definitely not being kept around; they are processed
> and the only output is the number of bit errors in each packet.


> I'll figure out another way around the randomness problem, maybe using
> an LRS or something.

If you don't need high-quality randomness, a linear congruential
pseudo-random generator will do the trick in a few operations.  See Knuth.
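A minimal sketch of such a generator. The recurrence x_{n+1} = (a*x_n + c) mod m is the standard LCG; the particular constants below are the widely used Numerical Recipes choices, not something from this thread, and taking the high-order byte is my workaround for the weak low bits of a power-of-two-modulus LCG:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def lcg_bytes(seed, n):
    """n pseudo-random bytes, using the high-order byte of each state."""
    g = lcg(seed)
    return bytes((next(g) >> 24) & 0xFF for _ in range(n))
```

One multiply, one add, and one mask per state update, and the stream is fully reproducible from the seed, which keeps bit-error tests repeatable.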

You might try reading /dev/urandom; however, your tests won't be reproducible.

