

From: Dave Gotwisner
Subject: Re: [Discuss-gnuradio] usrp_basic_rx::stop appears to take a long time, and reading after stop always returns > 0 bytes
Date: Tue, 29 May 2007 19:04:26 -0700
User-agent: Mozilla Thunderbird 1.0.7 (Windows/20050923)

Eric Blossom wrote:

On Tue, May 29, 2007 at 05:06:37PM -0700, Dave Gotwisner wrote:
Eric Blossom wrote:

On Fri, May 25, 2007 at 04:23:29PM -0700, Dave Gotwisner wrote:

I am working on an application that will tune to multiple frequencies,
and capture a small number of samples at each frequency for further

The program loop, is essentially, a series of configuration commands
(such as set_rx_frequency, etc.), followed by a start() command. I
then do read() until I get the requested number of samples. I then do
a stop(), and for the hell of it, loop on a read (until there is no

For the purpose of the test, I am using the default buffer sizes for
the ::make call (I also tried fusbBlockSize == 1024 and fusbNblocks =
8K). The decim rate used for the usrp_standard_rx constructor is 8. I
am trying to capture 100,000 samples at each frequency.
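[Editorial note: the data rates implied by these settings can be sketched as follows. This assumes the USRP1's standard 64 MS/s ADC clock and 4-byte complex samples (16-bit I plus 16-bit Q) over USB; the decimation rate of 8 and the 100,000-sample capture size are from the message above.]

```python
# Back-of-the-envelope numbers for this capture setup.
# Assumptions: USRP1 64 MS/s ADC clock, 4 bytes per complex sample
# (16-bit I + 16-bit Q) on the USB side.
adc_rate = 64e6            # samples/s into the digital downconverter
decim = 8                  # decim rate passed to the usrp_standard_rx constructor
bytes_per_sample = 4       # interleaved 16-bit I/Q over USB

sample_rate = adc_rate / decim              # 8 MS/s delivered to the host
usb_rate = sample_rate * bytes_per_sample   # 32 MB/s over USB 2.0

n_samples = 100_000
capture_ms = 1e3 * n_samples / sample_rate  # 12.5 ms of data per capture

print(sample_rate, usb_rate, capture_ms)
```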
There's absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16

I tried it with 4K/16. I am now running at fusb_block_size = 16K and fusb_nblocks = 512. I also tried with 16K/16. Results for both are similar.

When you say the results are similar, do you mean that you are still
seeing the overruns?

I haven't seen any comments about the suggestions I made regarding the
file systems issues with ext3 vs ext2 and/or lame laptop disk performance.

Care to comment on those?  I've been assuming that you're running
under GNU/Linux.  If not, then all the fusb_* stuff may be a nop.

By similar, I meant that the behavior appears to be independent of buffer size (16K/512 and 16K/16). Yes, about the overruns. More info on those: I have modified my program to continually perform "configure; start; read; stop" for a fixed sample count. I have eliminated frequency variability from the issue, as I now tune to the same frequency every time. If I capture 100,000 samples, every 8th read group overruns. If I go to 200,000 samples, it increases to every 4th. If I go to 50,000 samples, it drops to every 16th.
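[Editorial note: one way to read those overrun numbers is that the product (samples per capture) × (captures between overruns) is the same in all three cases, which hints at a stall of fixed period on the host rather than a buffer-size problem. The 8 MS/s sample rate below is an assumption derived from the 64 MS/s USRP1 clock and decim 8.]

```python
# Reported pattern: every Nth capture overruns, for a given capture length.
sample_rate = 8e6  # assumed: 64 MS/s ADC clock / decim 8

observations = [(100_000, 8), (200_000, 4), (50_000, 16)]  # (samples, period)
for n_samples, period in observations:
    samples_between_overruns = n_samples * period
    seconds = samples_between_overruns / sample_rate
    print(n_samples, period, seconds)  # 0.1 s between overruns in every case
```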

The elapsed time (100,000 samples) from after the stop to before the next start is 12 milliseconds. If you include the start/stop calls themselves, it rises to 90 milliseconds.

The software is running Ubuntu Linux with the hard drive being an NFS mount. I am not writing any of the data to disk, so the disk I/O and network I/O should essentially be limited to output across telnet back to my host (another Linux box running VNC), plus any demand paging the program is doing. Running or not running oprofile makes no difference; the load average hovers between 0.00 and 0.10. My program consumes at most 20% of the CPU.

The ext2/3 stuff was with respect to someone else's query, not mine. I spent today trying to get to the bottom of start/stop timings and only spent about an hour on the overruns. If you think putting the code on an ext2 fs vs a network fs will make a difference, I will do so, but I doubt it, since I am not writing to disk.

In the GNU Radio code, which you don't appear to be using, we have gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.

I looked at the example, and if my understanding of the code is right, you never stop getting data from the USRP (or shut it off).

That's correct.

You change the frequency and suck samples for a fixed period of time, throwing them out (basically, for the amount of time it would take to flush the old data through the USB buffering system), before capturing again and using them. Does my understanding of usrp_spectrum_sense.py match reality? I am not really a Python person. It seems to me that an efficient start/stop implementation would be more effective than having to read data that you never need.
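[Editorial note: the "keep streaming, discard the transient" approach described above can be sketched as below. `radio`, `set_freq`, and `read` belong to a hypothetical wrapper object invented for illustration; this is not the real usrp_standard_rx or usrp_spectrum_sense.py API.]

```python
# Sketch of tuning without stop/start: the stream keeps running, and the
# samples buffered before/during the retune are read and discarded.
def capture_at(radio, freq_hz, n_keep, n_flush):
    """Tune, discard n_flush stale samples, then keep n_keep samples.

    radio: hypothetical object with set_freq(freq_hz) and read(n) methods.
    n_flush: samples spanning the tune transient plus USB buffering.
    """
    radio.set_freq(freq_hz)
    radio.read(n_flush)        # flush data captured before/during the tune
    return radio.read(n_keep)  # samples taken after the frontend settled
```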

start and stop are actually quite heavy-weight.  They aren't really
designed to do what you're trying to do, but were added just to solve
the problem of there potentially being quite a bit of time between
when the constructor was called and when you really wanted the data to
start streaming.

There are no plans to change this behavior.  If you'd like to, and
are willing to generate patches and assign copyright for the changes
to the Free Software Foundation, I would consider them.  Assuming they
don't break anything else.

The work currently going on with "inband signaling" should moot most
of these concerns, since we'll be able to accurately track when a
frequency change took place with regard to the data stream.

In our case, we want to walk a large frequency range, capturing data for approximately 100-200 milliseconds per frequency, and would prefer to have less than 50 milliseconds of overhead between captures.

That's exactly why we are NOT calling stop/start, but are rather
skipping the samples in the zone where the tuning and buffering matter.

We also need to do this on a potentially loaded CPU, so we need
large enough buffering to reduce the likelihood of us overrunning
(assuming other tasks, such as games or other CPU hogs want much of
the available CPU resources).

That's what real time scheduling is for.  Increasing the total buffersize
increases the worst case latency that you have to account for if you
leave everything running.  Hence our choice of smaller values.

    # Attempt to enable realtime scheduling
    r = gr.enable_realtime_scheduling()
    if r == gr.RT_OK:
        realtime = True
    else:
        realtime = False
        print "Note: failed to enable realtime scheduling"

In C++ it's called gr_enable_realtime_scheduling().
See gr_realtime.h
I'll pursue this more tomorrow.

The amount of CPU resource we need should come out of what is available after other things run, rather than running as the highest-priority task. From calculations based upon your proposed buffering, I get (4096*16) / 32 MB/s = ~2 milliseconds of buffering; we feel we need a minimum of about 50 milliseconds of buffering, hence the large numbers for fusb_block_size.
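[Editorial note: the buffering calculation above generalizes as below. The 32 MB/s USB rate is taken from the message (it follows from decim 8 on a 64 MS/s USRP1 with 4-byte complex samples); the function name is invented for illustration.]

```python
# Host-side buffering, in milliseconds, provided by a given fast-USB
# configuration at the 32 MB/s rate implied by decim 8.
usb_rate = 32e6  # bytes/s over USB

def buffering_ms(fusb_block_size, fusb_nblocks):
    """Worst-case latency absorbed by the fusb ring of buffers."""
    return 1e3 * fusb_block_size * fusb_nblocks / usb_rate

print(buffering_ms(4096, 16))    # Eric's suggested 4K/16: ~2 ms
print(buffering_ms(16384, 512))  # 16K/512: ~262 ms, well past the 50 ms target
```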

FYI, I tried building the trunk code on my ubuntu box, and when I did the "./configure" command,

did you do a ./bootstrap first?

Yes. I did everything as me, not as root, though, if that makes a difference.

it reported problems finding guile. If I look at the packages on my machine, synaptic reports that guile 1.6.7-2 is installed, which should match the requirements in the README file.



