Re: [Discuss-gnuradio] Run graph/ scheduler overhead


From: Dennis Glatting
Subject: Re: [Discuss-gnuradio] Run graph/ scheduler overhead
Date: Mon, 13 Jul 2015 19:38:32 -0700

On Mon, 2015-07-13 at 00:30 -0400, West, Nathan wrote:
> This is a lot of information, and I'm just going to pick out one
> statement to comment on.
> 
> On Sun, Jul 12, 2015 at 6:13 PM, Dennis Glatting <address@hidden>
> wrote:
>         
>         If I remove most of the blocks from my graph with the result:
>         
>           source --> dc block --> Preamble --> null
>         
>         with the statement:
>         
>               return noutput_items;
>         
>         at the beginning of general_work() in Preamble, I have
>         overflows and
>         gr-perf-monitorx shows a thick red line from:
>         
>          optimize_c0 -> hack_rf_source_c0 -> dc_blocker_cc0 -->
>         Preamble
>         
>         with dc_blocker_cc0 depicted as a large blue square.
>         
>         
> 
> 
> Hi Dennis,
> 
> 
> The size (area) of the blue boxes is proportional to the amount of CPU
> a block is using. The "darkness" and thickness of the lines show how
> full the buffers are. That indicates the DC blocker is using a lot of
> CPU, and the buffers in front of it are full because the upstream
> blocks have done all of their work and filled their buffers before the
> dc blocker can work on them.
> 
> 
> 1.6ms is a long time to be working on samples when your incoming rate
> is 10Msps.
> 
> 
> There are a number of ways to proceed. You can use offset tuning to
> remove the DC spike (I can't remember hackrf's input bandwidth, so this
> may or may not be realistic), use some other method for DC removal, or
> try to optimize whatever might be taking a while in the dc_blocker. (I
> suggest a dynamic analysis tool like kcachegrind, AMD CodeAnalyst, or
> Intel VTune.)
> 
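
(As an aside on the "some other method for DC removal" option: it can be
as cheap as a single-pole high-pass run per sample inside work(). A rough,
untested sketch follows; this is not GNU Radio's dc_blocker_cc, and the
class name and the 0.999 pole are placeholders only.)

    #include <complex>
    #include <cstddef>

    // Single-pole DC removal: y[n] = x[n] - x[n-1] + p * y[n-1].
    // The closer the pole p is to 1.0, the narrower the notch at DC.
    class simple_dc_removal {
    public:
        explicit simple_dc_removal(float pole = 0.999f)
            : d_pole(pole), d_prev_in(0.0f, 0.0f), d_prev_out(0.0f, 0.0f) {}

        std::complex<float> filter(std::complex<float> in)
        {
            std::complex<float> out = in - d_prev_in + d_pole * d_prev_out;
            d_prev_in  = in;
            d_prev_out = out;
            return out;
        }

        // Convenience loop for use inside a block's work() function.
        void filter_n(const std::complex<float> *in,
                      std::complex<float> *out, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = filter(in[i]);
        }

    private:
        float d_pole;
        std::complex<float> d_prev_in, d_prev_out;
    };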

It's on my list to try offset tuning. My primary SDR is a bladeRF, but it
DOES NOT report overruns; the HackRF does. That detail dogged me for over
a month. (I say "on my list" because I believe there is some other, deeper
problem, and I am going to try to understand that first.)
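
For reference, a rough picture of what the offset-tuning experiment could
look like as a bare C++ flowgraph: tune the hardware LO a couple of MHz
away from the signal so the DC spike lands out of band, then translate the
signal back to 0 Hz digitally and low-pass it. This is an untested sketch;
the 915 MHz and 2 MHz numbers are placeholders, the device string assumes
gr-osmosdr driving a HackRF, and the header paths follow the GNU Radio 3.7
layout (they moved in later releases).

    #include <gnuradio/top_block.h>
    #include <gnuradio/gr_complex.h>
    #include <gnuradio/blocks/null_sink.h>
    #include <gnuradio/filter/firdes.h>
    #include <gnuradio/filter/freq_xlating_fir_filter_ccf.h>
    #include <osmosdr/source.h>
    #include <vector>

    int main()
    {
        const double samp_rate = 10e6;   // incoming rate
        const double rf_freq   = 915e6;  // placeholder signal frequency
        const double offset    = 2e6;    // placeholder LO offset; must
                                         // exceed the filter cutoff

        gr::top_block_sptr tb = gr::make_top_block("offset_tune");

        // Tune the hardware LO away from the signal so the DC spike
        // falls outside the band of interest.
        osmosdr::source::sptr src = osmosdr::source::make("hackrf=0");
        src->set_sample_rate(samp_rate);
        src->set_center_freq(rf_freq + offset);

        // The signal now sits at -offset in baseband; translate it back
        // to 0 Hz. After translation the hardware DC spike sits at
        // +offset, in the stopband of this ~1 MHz low-pass.
        std::vector<float> taps = gr::filter::firdes::low_pass(
            1.0, samp_rate, 1e6, 500e3);
        gr::filter::freq_xlating_fir_filter_ccf::sptr xlate =
            gr::filter::freq_xlating_fir_filter_ccf::make(
                1, taps, -offset, samp_rate);

        gr::blocks::null_sink::sptr sink =
            gr::blocks::null_sink::make(sizeof(gr_complex));

        tb->connect(src, 0, xlate, 0);
        tb->connect(xlate, 0, sink, 0);
        tb->run();
        return 0;
    }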

A few weeks ago I ran kcachegrind and it reported minimal problems. Most
of the problems were tied to std::cout, which I assumed were the usual
reclamation issues. (Maybe not?) IIRC, I lost about 2k of memory.

I disabled my three threads with no positive result. One thread builds an
EC table and then exits (after about 60-75 seconds), whereas the other two
run once a second to maintain data structures. I thought it /remotely/
possible these threads could be a problem because they call sleep (below),
but they are detached threads.

    std::this_thread::sleep_for( std::chrono::seconds( 1 ));
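
(For reference, the pattern is the usual detach-and-sleep loop, roughly as
sketched below; the running flag and refresh_tables() are hypothetical
stand-ins, not the actual code from the block.)

    #include <atomic>
    #include <chrono>
    #include <thread>

    static std::atomic<bool> g_running(true);   // hypothetical shutdown flag

    static void refresh_tables()
    {
        // hypothetical: maintain the shared data structures
    }

    static void start_maintenance_thread()
    {
        std::thread([]() {
            while (g_running.load()) {
                refresh_tables();
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
        }).detach();    // detached, as described above
    }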

My impression is that there is some low-level library thrashing going on
(disparate heaps, mutex locking, etc.). I am currently building
GNURadio/HEAD under FreeBSD, where I can compile the world with debug. If
the code runs without trouble under FreeBSD, then that's another hmm...
The real question is my level of porting patience.

Under FreeBSD I have reported one problem (libusb), and I'm now chasing an
iconv link problem. The iconv issue comes down to where iconv is specified
on the link line, the fact that /usr/local/lib IS NOT searched, and where
the fault actually lies.

Thanks.


> 
> A quick glance at the code makes me suspicious of the deque that is
> used in a for loop in work. Time for my wild speculation: it's
> possible there is some dynamic allocation/deallocation gone wild with
> the way this deque is implemented combined with this usage. It seems
> like a fixed-length buffer (or a deque/vector with overwriting/manual
> pointer management) would be sufficient as long as you're willing to
> do the pointer math. It's worth looking at how other blocks might be
> keeping internal vectors of samples, and possibly doing the dynamic
> code analysis to confirm it is the deque.
> 
> 
> -Nathan
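
(On the deque speculation above: a fixed-length circular buffer keeps the
block's sample history with no allocation in the hot path, just index
arithmetic. A rough sketch follows; the class name is illustrative and
this is not the actual dc_blocker implementation.)

    #include <complex>
    #include <cstddef>
    #include <vector>

    // Fixed-length history for a length-D delay / moving-average loop.
    // All storage is allocated once up front; push() is pure index math.
    class sample_history {
    public:
        explicit sample_history(size_t length)
            : d_buf(length, std::complex<float>(0.0f, 0.0f)), d_head(0) {}

        // Store the newest sample and return the one that falls out of
        // the window (handy for keeping a running sum).
        std::complex<float> push(std::complex<float> in)
        {
            std::complex<float> oldest = d_buf[d_head];
            d_buf[d_head] = in;
            d_head = (d_head + 1) % d_buf.size();
            return oldest;
        }

    private:
        std::vector<std::complex<float> > d_buf;
        size_t d_head;
    };

A running DC estimate then just adds each new sample to an accumulator and
subtracts whatever push() returns, so the per-item cost stays constant
regardless of the window length.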