
Re: [Discuss-gnuradio] [GSoC] Co-Processors Update #9

From: Tom Tsou
Subject: Re: [Discuss-gnuradio] [GSoC] Co-Processors Update #9
Date: Mon, 11 Aug 2014 10:42:32 -0700

On Mon, Aug 11, 2014 at 10:06 AM, Philip Balister <address@hidden> wrote:
> On 08/08/2014 11:54 PM, Alfredo Muniz wrote:
>> Plan for GNU Radio:
>> - From my talks with Pendlum, I think this approach will work for both Zynq
>> and Keystone and any device that has shared memory with the coprocessors.
> I doubt depending on contiguous memory will ever work for GNU Radio.
> I've heard a lot of talk about changing the guts of GNU Radio, but no
> real action. Especially given GNU Radio's dependence on double-mapped
> buffering to handle wrap around. For things with hard IP blocks like
> Keystone, this may be a difficult problem. Unless the IP blocks can be
> configured to operate on non-contiguous blocks. FPGA code should be
> written to avoid dependencies on specific buffer layouts. (Yes, I know I
> have made this mistake, but I have seen the error of my ways.)
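The double-mapped buffering Philip mentions can be illustrated with a minimal Python sketch. This is only a page-aliasing demo under simplifying assumptions, not GNU Radio's actual circular-buffer implementation (which additionally places the two mappings back-to-back in virtual memory, e.g. via MAP_FIXED, so reads and writes run past the "end" and wrap transparently):

```python
import mmap
import os
import tempfile

# One page of backing store for the demo.
size = mmap.PAGESIZE

fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)

# Map the same backing pages twice: both views alias one physical buffer,
# so a write through one mapping is visible through the other.
view_a = mmap.mmap(fd, size)
view_b = mmap.mmap(fd, size)

view_a[size - 1] = 0x42
aliased = view_b[size - 1]
assert aliased == 0x42  # same physical pages behind both views

view_a.close()
view_b.close()
os.close(fd)
os.unlink(path)
```

The real trick is that when the second mapping sits immediately after the first, a block can index past the nominal buffer end without any wrap-around branch in the inner loop.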

The typical use case for the TCP is variable length packets up to a
fixed maximum (6144 bits for LTE). Message passing is inherently a
better fit and the double-mapped buffer probably shouldn't apply. Each
block of (soft) bits going in/out of the TCP would be contiguous, but
subsequent chunks of memory carrying different block segments need not be.
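That message-passing model can be sketched as PDU-style (metadata, payload) pairs. This is a hypothetical illustration, not the Keystone driver interface: each code block is one contiguous payload, but successive messages need not come from adjacent memory.

```python
MAX_BITS = 6144  # LTE maximum turbo code block size

def make_pdu(llrs):
    """Package one code block as a (metadata, payload) message.

    Assumes one 8-bit soft value (LLR) per bit; the payload itself is
    contiguous, but the queue just carries references to independent blocks.
    """
    assert len(llrs) <= MAX_BITS, "exceeds maximum code block size"
    meta = {"length": len(llrs)}
    return (meta, bytes(llrs))

# Two blocks of different lengths travel as independent messages.
queue = [make_pdu([0x7F] * 40), make_pdu([0x81] * MAX_BITS)]
meta, payload = queue[0]
assert meta["length"] == 40 and len(payload) == 40
```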

> I understand why TI drives you towards the CMEM driver, but that is a
> lousy long term plan. They are just reusing code from prior generations
> of drivers. And I do want to see something work so we can evaluate the
> hard IP based GNU Radio block. My concern with your wording is that
> people might think depending on contiguous memory buffers is a good idea.

At least from a high level, a message queue with a rotating set of
buffer pointers seems OK to me. Though, not being familiar with the
current Keystone transport options, what are the other preferred approaches?
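A rotating set of buffer pointers might look like the following sketch. Names and sizes here are illustrative assumptions, not a real Keystone or GNU Radio API: a fixed pool of pre-allocated contiguous buffers is recycled, and the message queue only hands around references.

```python
from collections import deque

class BufferPool:
    """Fixed set of pre-allocated contiguous buffers recycled in rotation."""

    def __init__(self, count=4, size=768):  # 768 bytes == 6144 bits
        self.free = deque(bytearray(size) for _ in range(count))

    def acquire(self):
        # In a real transport the producer would block or backpressure
        # upstream when the pool is empty.
        return self.free.popleft()

    def release(self, buf):
        self.free.append(buf)

pool = BufferPool()
msg_queue = deque()

# Producer: fill a contiguous buffer, enqueue a reference plus valid length.
buf = pool.acquire()
buf[:4] = b"\x01\x02\x03\x04"
msg_queue.append((buf, 4))

# Consumer: process the block, then rotate the buffer back into the pool.
out, n = msg_queue.popleft()
assert bytes(out[:n]) == b"\x01\x02\x03\x04"
pool.release(out)
```

Because buffers are never freed or remapped, each one can stay physically contiguous for the hard IP block while the software side only ever exchanges pointers.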

