On Tue, May 29, 2007 at 05:06:37PM -0700, Dave Gotwisner wrote:
Eric Blossom wrote:
On Fri, May 25, 2007 at 04:23:29PM -0700, Dave Gotwisner wrote:
I am working on an application that will tune to multiple frequencies,
and capture a small number of samples at each frequency for further
processing.
The program loop is, essentially, a series of configuration commands
(such as set_rx_frequency, etc.), followed by a start() command. I
then call read() until I get the requested number of samples. I then
call stop() and, for the hell of it, loop on read() until there is no
data.
For the purpose of the test, I am using the default buffer sizes for
the ::make call (I also tried fusbBlockSize = 1024 and fusbNblocks =
8K). The decim rate used for the usrp_standard_rx constructor is 8. I
am trying to capture 100,000 samples at each frequency.
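For concreteness, here is a minimal C++ sketch of that loop, assuming
the usrp_standard_rx interface of that era. The make() argument order
and the set_rx_freq() call are from memory and vary between GNU Radio
releases, so treat them as illustrative; the 400-500 MHz scan range is
made up.

#include <usrp_standard.h>
#include <cstdio>

int main()
{
    // decim 8; fusb_block_size / fusb_nblocks as described above
    usrp_standard_rx *urx =
        usrp_standard_rx::make(0 /* which board */, 8 /* decim */,
                               1, -1, 0,
                               1024 /* fusb_block_size */,
                               8192 /* fusb_nblocks */);
    if (!urx)
        return 1;

    const int N = 100000;              // samples per frequency
    static short buf[2 * N];           // interleaved 16-bit I/Q
    bool overrun;

    for (double freq = 400e6; freq <= 500e6; freq += 1e6) {
        urx->set_rx_freq(0, freq);     // configure before streaming
        urx->start();

        int got = 0;
        while (got < N) {
            // read() takes and returns byte counts; each complex
            // sample is 4 bytes (16-bit I + 16-bit Q)
            int n = urx->read(buf + 2 * got, (N - got) * 4, &overrun);
            if (n <= 0)
                break;
            got += n / 4;
            if (overrun)
                fputs("uO", stderr);   // overrun marker
        }

        urx->stop();
        // ... process buf[0 .. 2*got) here ...
    }
    delete urx;
    return 0;
}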
There's absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16
I tried it with 4K/16. I am now running at fusb_block_size = 16K and
fusb_nblocks = 512. I also tried with 16K/16. Results for both are similar.
When you say the results are similar, do you mean that you are still
seeing the overruns?
I haven't seen any comments about the suggestions I made regarding the
file system issues with ext3 vs ext2 and/or lame laptop disk performance.
Care to comment on those? I've been assuming that you're running
under GNU/Linux. If not, then all the fusb_* stuff may be a nop.
In the GNU Radio code, which you don't appear to be using, we have
gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.
I looked at the example, and if my understanding of the code is right,
you never stop getting data from the USRP (or shut it off).
That's correct.
You change the frequency and suck samples for a fixed period of time,
throwing them out (basically, for the amount of time it would take to
flush the old data through the USB buffering system), before capturing
again (and using them). Does my understanding of usrp_spectrum_sense.py
match reality? I am not really a Python person. It seems to me that an
efficient start/stop implementation would be more effective than having
to read data that you never need.
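A rough C++ rendering of that keep-streaming approach, for comparison.
Here nskip, the number of stale samples to discard after a retune, is
a guess you would have to calibrate; it is not a value taken from
usrp_spectrum_sense.py.

#include <usrp_standard.h>
#include <algorithm>

// Retune while streaming, throw away nskip stale samples, then keep
// nsamples. No start()/stop() per hop.
void hop_and_capture(usrp_standard_rx *urx, double freq,
                     short *buf, int nsamples, int nskip)
{
    bool overrun;
    urx->set_rx_freq(0, freq);

    // Drain samples that were captured before the tune took effect.
    short scratch[8192];                      // 4096 complex samples
    int skipped = 0;
    while (skipped < nskip) {
        int want = std::min(4096, nskip - skipped);
        int n = urx->read(scratch, want * 4, &overrun);
        if (n <= 0)
            break;
        skipped += n / 4;
    }

    // Capture the samples we actually keep.
    int got = 0;
    while (got < nsamples) {
        int n = urx->read(buf + 2 * got, (nsamples - got) * 4, &overrun);
        if (n <= 0)
            break;
        got += n / 4;
    }
}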
start and stop are actually quite heavy-weight. They aren't really
designed to do what you're trying to do, but were added just to solve
the problem of there potentially being quite a bit of time between
when the constructor was called and when you really wanted the data to
start streaming.
There are no plans to change this behavior. If you'd like to, and
are willing to generate patches and assign copyright for the changes
to the Free Software Foundation, I would consider them. Assuming they
don't break anything else.
The work currently going on with "inband signaling" should moot most
of these concerns, since we'll be able to accurately track when a
frequency change took place with regard to the data stream.
In our case, we want to walk a large frequency range, capturing data for
approximately 100-200 milliseconds per frequency, and would prefer to
have less than 50 milliseconds of overhead between captures.
That's exactly why we are NOT calling stop/start, but are rather
skipping the samples in the zone where the tuning and buffering matter.
We also need to do this on a potentially loaded CPU, so we need
large enough buffering to reduce the likelihood of overruns
(assuming other tasks, such as games or other CPU hogs, want much of
the available CPU resources).
That's what real-time scheduling is for. Increasing the total buffer size
increases the worst-case latency that you have to account for if you
leave everything running. Hence our choice of smaller values.
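To put numbers on that (my arithmetic, assuming the USRP1's 64 MS/s
ADC, so decim 8 gives 8M complex samples/s at 4 bytes each, i.e.
32 MB/s of USB traffic):

#include <cstdio>

int main()
{
    const double bytes_per_sec = 64e6 / 8 * 4;   // 32 MB/s at decim 8

    const int cfgs[][2] = {
        { 4096,   16 },    // suggested: ~64 KB buffered
        { 16384, 512 },    // currently used: ~8 MB buffered
    };

    for (int i = 0; i < 2; i++) {
        double bytes = (double)cfgs[i][0] * cfgs[i][1];
        printf("%6d x %4d -> %8.0f KB buffered -> %6.1f ms latency\n",
               cfgs[i][0], cfgs[i][1], bytes / 1024.0,
               bytes / bytes_per_sec * 1e3);
    }
    return 0;
}

So 4K x 16 corresponds to about 2 ms of in-flight data, while 16K x 512
corresponds to roughly 260 ms, which is longer than the 100-200 ms
capture window itself.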
from gnuradio import gr

# Attempt to enable realtime scheduling
r = gr.enable_realtime_scheduling()
if r == gr.RT_OK:
    realtime = True
else:
    realtime = False
    print "Note: failed to enable realtime scheduling"
In C++ it's called gr_enable_realtime_scheduling().
See gr_realtime.h