Re: [Discuss-gnuradio] Debugging overruns
From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Debugging overruns
Date: Sat, 27 Jan 2007 17:58:20 -0800
User-agent: Mutt/1.5.9i
On Sat, Jan 27, 2007 at 04:25:19PM -0800, Dan Halperin wrote:
> Eric Blossom wrote:
> > overruns or underruns?
> >
>
> uO means a USRP overrun, right?
Yes.
> > Underruns are to be expected with tunnel.py, assuming that you're not
> > feeding it data constantly.
> >
> > (When the in-band signaling stuff is complete, we'll have a more
> > sensible interpretation for the underrun case. It'll only
> > report a problem if it occurs within a packet, not between packets.)
> >
> > Are you running with real-time scheduling enabled? If you run
> > tunnel.py as root (or with CAP_SYS_NICE) it'll be enabled (currently
> > only implemented on systems that implement sched_setscheduler).
> >
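As background (my own illustration, not from the thread): on platforms that implement sched_setscheduler, a sufficiently privileged process can request a real-time policy roughly as below. The helper name is hypothetical; the call only succeeds as root or with CAP_SYS_NICE, and the sketch falls back gracefully otherwise.

```python
import os

def try_enable_realtime(priority=1):
    """Hypothetical helper: request SCHED_FIFO for the current process."""
    if not hasattr(os, "sched_setscheduler"):
        return False  # platform doesn't expose sched_setscheduler
    try:
        # pid 0 means "the calling process"
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except OSError:
        return False  # typically EPERM: not root and no CAP_SYS_NICE
```

Without privilege this simply returns False, which mirrors why tunnel.py only gets real-time scheduling when run as root.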
>
> Real-time scheduling is enabled; the process gets priority -50. Also,
> an incidental question: I get really bad performance when the
> fusb_options that are set when realtime is true are used....
>
> > Have you enabled logging? Turn it off.
> >
>
> Logging is off.
>
> > Linux or some other OS?
> >
>
> Ubuntu 6.10 (but installed before all the recent libtool fun).
>
> > Does the unmodified tunnel.py exhibit the same behavior?
> >
> > Does benchmark_tx.py / benchmark_rx.py work without over/underruns?
> >
>
> No, yes.
>
> I suspect the problem has something to do with randomization; I'm trying
> to write a more comprehensive benchmark where I send random payloads.
> Using 1024-byte packets (i.e. 1024 random bytes generated per packet), I
> can send 524 +/- 1 packets. With 768-byte packets, I can send 700 +/- 1.
> With 1200-byte packets, I can send 445 packets, more or less. The
> products of these numbers are all close:
>
> >>> 1200*445
> 534000
> >>> 1024*524
> 536576
> >>> 768*700
> 537600
>
> Perhaps python does something funky after a certain number of bytes? I'm
> using these functions:
>
> from random import seed,randint
>
> def rand_init(s=0):
>     seed(s)
>
> def random_bytes(number):
>     ret = ""
>     for i in range(number):
>         ret += chr(randint(0, 255))
>     return ret
>
> Or could it be some garbage collection kicking in? I know that this
> function is extraordinarily wasteful of memory... except then it doesn't
> make sense why that product would be constant.
I think you're burning up all the cycles constructing the string
of random bytes. Building the string byte by byte is very expensive.
Basically O(N^2).
Try this:
def random_bytes(number):
    return ''.join([chr(randint(0, 255)) for x in range(number)])
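For reference, a minimal sketch (mine, not part of the original exchange; the `_slow`/`_fast` names are my own) contrasting the two implementations. Byte-by-byte `+=` on an immutable string can copy the whole buffer on each append, giving O(N^2) total work, while `join` builds the result in one pass.

```python
from random import seed, randint

def random_bytes_slow(number):
    # Byte-by-byte concatenation: each += may copy the whole string,
    # so the total work can grow as O(N^2).
    ret = ""
    for _ in range(number):
        ret += chr(randint(0, 255))
    return ret

def random_bytes_fast(number):
    # Build all the characters first, then join once: O(N).
    return ''.join([chr(randint(0, 255)) for _ in range(number)])

# Both draw the same sequence from the PRNG, so with the same seed
# they produce identical payloads.
seed(0)
a = random_bytes_slow(1024)
seed(0)
b = random_bytes_fast(1024)
```

Note that recent CPython versions sometimes optimize in-place string `+=`, so a micro-benchmark may understate the difference; the join form is the safe, idiomatic choice either way.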
Also, are you sure you're not holding onto references to old payloads
somewhere? If you are, no amount of garbage collection or reference
counting will save you ;)
Eric