From: Anderson, Douglas J.
Subject: Re: [Discuss-gnuradio] gr::buffer::allocate_buffer: warning
Date: Tue, 21 Apr 2015 18:50:55 +0000
Awesome! I'm working my way through the scheduler slides linked earlier today, and I got to where it describes buffers on Linux like "attach shmid1 to first half of shmid2 with shmat, attach shmid1 to second half of shmid2 with shmat, memory in both halves of shmid2 is mapped to the same virtual space..." and I was just really lost. I appreciate the diagram; now it makes sense that the circular buffer is implemented with 2 copies of "shmid1" back to back. I see now that the page size is just a requirement of the tools used to create this clever buffer:
shmget() returns the identifier of the shared memory segment associated with the value of the argument key. A new shared memory segment, with size equal to the value of size rounded up to a multiple of PAGE_SIZE
Very cool stuff. Thanks!
From: discuss-gnuradio-bounces+address@hidden [discuss-gnuradio-bounces+address@hidden] on behalf of Marcus Müller [address@hidden]
Sent: Tuesday, April 21, 2015 12:33 PM
Subject: Re: [Discuss-gnuradio] gr::buffer::allocate_buffer: warning
ok, you asked for this :D
So, GNU Radio's buffers look a lot like real circular buffers to the blocks using them:
For example, assume your buffer between source block A and sink block B is large enough to store exactly 10000 of your items:
Now, A has produced 9000 items, of which B has already consumed 8000. Your buffer thus looks like this:
- - - - - - - - R W -
With R being the current position of the read pointer, i.e. the address of the first input item of the next B::work() call,
and W being the write pointer, i.e. the address of the first output_item on the next A::work() call.
Each "- " (or "R " or "W ") is 1000 items worth of storage.
Now, we can agree that in this buffer there are 1000 items that must not be overwritten by the next A::work call, and 9000 items of space for new items.
What GNU Radio does is employ memory mapping magic to allow A to simply write the next 9000 items contiguously; to the process, the memory looks like this:
- - - - - - - - R W -|- - - - - - - - R W -
Notice that the second half is really just a transparent image of the first, inserted there by the memory management unit of your CPU.
Now, that is awesome on so many levels:
* it allows developers to always write their applications as if the in- and output were just linear in memory, no matter where in the buffer they are
* it allows full usage of the buffer, without having extra space allocated
Of course, this has an architectural downside:
On any fully-fledged general purpose CPU platform I know, you can only do this with pages; a page is simply the smallest amount of memory you can tell your memory management unit to map somewhere.
On Linux, these pages are generally 4KB. That's usually rather handy, because it's a power of two bytes, which means the typical item sizes (1B for a byte/char, 2B for a short, 4B for a float or int, and 8B for a std::complex<float> == gr_complex) fit neatly, but not so cool if your item's size is not a divisor of 2**12. In that case, the scheduler has no choice but to round the buffer up to the least common multiple of 4096 and your item size.
On 04/21/2015 08:04 PM, Anderson, Douglas J. wrote: