
Re: [Discuss-gnuradio] Memory allocation woes


From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Memory allocation woes
Date: Thu, 1 Jul 2010 13:45:47 -0700
User-agent: Mutt/1.5.20 (2009-08-17)

On Thu, Jul 01, 2010 at 03:21:44PM -0400, Marcus D. Leech wrote:
> On 06/29/2010 06:56 PM, Eric Blossom wrote:
> > If you pick a size with more factors of 2 and fewer factors of 5, life
> > will get better :-)
> >
> > I have on occasion thought that it would be a good idea to switch to
> > an alternate circular buffer strategy when the size blows up too much
> > because of the alignment requirement.  The alternate would probably
> > use memcpy to duplicate the appropriate portion of the buffer on
> > return from general_work instead of the MMU trick.  I probably won't
> > get to it this lifetime, but if you're interested, let me know and
> > I'll give you my ideas on how to go about it.
> >
> > Eric
> >
> OK, so I"m now seriously considering shifting the spectral processing
> portion of my app out of
>   Gnu Radio because of the memsplosion issues, so pointers to where to
> start looking into this
>   would be of some significant use!

OK, OK, OK.  Some things to think about:

First, memory is very cheap.  E.g., 4GB DDR3-1600 is $130.
ECC is a bit more, but not crazy expensive.

The effort to "fix" this is on the order of a few days.  YMMV.

Using gr_vmcircbuf_mmap_shm_open removes the 32-bit buffer size limitation.
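
For background, the "MMU trick" that gr_vmcircbuf_mmap_shm_open
implements is the classic double mapping: the same shared-memory pages
are mapped twice, back to back, so a read or write that runs off the
end of the buffer lands at its beginning for free.  A condensed sketch
of the technique, with all error handling omitted:

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  // size must be a multiple of the page size.
  void *make_circbuf(size_t size)
  {
      int fd = shm_open("/circbuf-demo", O_RDWR | O_CREAT | O_EXCL, 0600);
      shm_unlink("/circbuf-demo");   // the name isn't needed once open
      ftruncate(fd, size);

      // Reserve 2*size bytes of contiguous address space ...
      char *base = (char *)mmap(0, 2 * size, PROT_NONE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      // ... and map the same pages into both halves.
      mmap(base,        size, PROT_READ | PROT_WRITE,
           MAP_SHARED | MAP_FIXED, fd, 0);
      mmap(base + size, size, PROT_READ | PROT_WRITE,
           MAP_SHARED | MAP_FIXED, fd, 0);

      close(fd);
      return base;   // base[i] and base[i + size] alias the same byte
  }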

The resident set size will not increase even though the amount of VM
required explodes by a factor of 16.  Another reason not to worry.
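
To put numbers on the factors-of-2-vs-5 remark quoted above: since the
double-mapped buffer must span a whole number of pages, the smallest
legal buffer is, to first order, lcm(item_size, page_size) bytes.  An
item size that's a pure power of two divides the 4 KiB page evenly; a
factor of 5^5 does not:

  #include <cstdio>
  #include <numeric>    // std::lcm (C++17), just for illustration

  int main()
  {
      const long page = 4096;                // 2^12
      const long items[] = { 4L * 131072,    // 131072-pt float vector: 2^19 bytes
                             4L * 100000 };  // 100000-pt float vector: 2^7 * 5^5 bytes
      for (long item : items)
          printf("item %7ld B -> min buffer %9ld B\n",
                 item, std::lcm(item, page));
      // item  524288 B -> min buffer    524288 B
      // item  400000 B -> min buffer  12800000 B
      return 0;
  }

In the second case every allocation is forced up to a multiple of
12.8 MB, which is where the memsplosion comes from.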


Here's where to start your study:

  All the action is in gnuradio-core/src/lib/runtime.

  See in particular:  
      gr_buffer.{h,cc}          // single writer, multiple reader FIFO (sketched below)
      gr_block_executor.{h,cc}  // code that calls forecast & general_work
      gr_flat_flowgraph.{h,cc}  // allocate_buffer
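
To orient you before you read gr_buffer: the bookkeeping boils down to
a single write index plus one read index per reader, all modulo the
buffer size, and the writer may only advance up to the slowest reader.
A caricature, with illustrative names and no locking:

  #include <algorithm>
  #include <vector>

  struct fifo {
      size_t bufsize;                 // in items
      size_t write_idx = 0;
      std::vector<size_t> read_idx;   // one per reader

      // Items the writer may produce without clobbering unread data.
      // One slot is kept empty to distinguish "full" from "empty".
      size_t space_available() const {
          size_t slowest = *std::min_element(read_idx.begin(),
                                             read_idx.end());
          return (slowest + bufsize - write_idx - 1) % bufsize;
      }

      // Items reader k may consume.
      size_t items_available(size_t k) const {
          return (write_idx + bufsize - read_idx[k]) % bufsize;
      }
  };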

It'll need good QA code, of course.

Eric


