Re: [Discuss-gnuradio] usrp_siggen.py underruns

From: Eric Blossom
Subject: Re: [Discuss-gnuradio] usrp_siggen.py underruns
Date: Thu, 12 Feb 2009 08:32:46 -0800
User-agent: Mutt/1.5.18 (2008-05-17)

On Thu, Feb 12, 2009 at 11:07:43AM +0100, Dominik Auras wrote:
> Hi!
>> Yes, there are lots of ways to do this.  In this particular case,
>> you're going to want to keep track of the worst case and average run
>> times. 
> Hm, run times may not be the appropriate performance measure in my case.
> The transmitter is of course designed to run continuously (until I
> interrupt it). What about interarrival times? I once had the idea to
> record every buffer update with a timestamp, the difference in the
> number of samples, and the current processor the task is running on. Do
> you think that these samples may help to reveal the reason for the
> underruns in my transmitter code?

> Will a bigger buffer in the USRP1 change the behavior? Am I right
> that setting fusb_nblocks etc. changes the buffer size?

You can try that, though on Linux the defaults are pretty big already.

> I have just confirmed that the Gaussian PRNG can't send at a bandwidth
> of nearly 8 MHz with the USRP2. That was definitely a bad example.
> I will try to perform some measurements in the next week. Are there any  
> gnuradio blocks, gnuradio utils available to find the average and worst  
> cases? Oprofile will sample the whole application, not only the link  
> between my last block and the USRP1 sink.
> For your interest, I was measuring the throughput with a modified
> gr.throttle block. Instead of delaying the stream, I compute the
> instantaneous rate/throughput and average it with a simple IIR filter
> (the rate estimate).

I wouldn't try to use gr.throttle for this.  I suggest running your
flow graph on a known amount of input, throwing the output into a
null sink, and then measuring the wall-clock and CPU times:

  $ time <my_application>

or you could insert a gr.head(...) immediately before the null sink,
which will stop the graph after it has copied N samples into the null
sink.  In either case, you've got a graph that will process a known
amount of input and then exit.

You can get time measurements that avoid most of the setup overhead by
timing just the fg.run() call.  Check the Python docs for functions
that measure wall-clock and CPU time.
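As a concrete sketch of that measurement, here is a small Python harness using the modern names time.perf_counter() (wall clock) and time.process_time() (CPU time); in 2009-era Python 2 the rough equivalents were time.time() and time.clock().  The lambda workload is only a stand-in; in practice you would pass your flow graph's run method, e.g. time_run(fg.run):

```python
import time

def time_run(run, label="fg.run()"):
    # Measure wall-clock and CPU time around a callable such as fg.run.
    wall0 = time.perf_counter()   # wall-clock time
    cpu0 = time.process_time()    # CPU time used by this process
    run()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    print("%s: wall %.3f s, cpu %.3f s" % (label, wall, cpu))
    return wall, cpu

# Stand-in workload; with GNU Radio you would call time_run(fg.run).
wall, cpu = time_run(lambda: sum(i * i for i in range(10 ** 6)), "demo")
```

If the CPU time is consistently close to the wall time, the graph is compute-bound rather than waiting on I/O.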

If your code can't, on average, generate the required amount of
output in the required time, then you've got some work to do.  If you
think you've got a case where the average and worst cases vary
widely, I suspect that the easiest way to go after it is to think
about it!  You wrote the code, right?  You know the expected and worst
case complexity for each block, right?  If not, spend some time with
Knuth, then think about it some more...

