Re: [Discuss-gnuradio] Packet Radio

From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Packet Radio
Date: Fri, 31 Mar 2006 23:42:06 -0800
User-agent: Mutt/1.5.9i

On Fri, Mar 31, 2006 at 11:42:37PM -0500, dlapsley wrote:
> The document is available at
>     http://acert.ir.bbn.com/downloads/adroit/gr-arch-changes-1.pdf
> We would appreciate feedback, sent to gnuradio-discuss, or feel free
> to email us privately if there's some reason gnuradio-discuss isn't
> appropriate.

I think the basic m-block idea looks reasonable, and achieves the goal
of extending GNU Radio without disturbing the existing framework.

In section 4.5, "Two stage, quasi-real time, hybrid scheduler": 

FYI, a given flow graph currently may be evaluated with more than one
thread if it can be partitioned into disjoint subgraphs.  I don't
think that fundamentally changes anything with regard to embedding a
flow graph in an m-block.

Section 4.5.4, second bullet: "profile header portion".  Limiting the
kind and profile-length fields to 8 bits each seems like asking for
trouble.  For example, when combining many m-blocks from many
different sub-projects, the universe of kinds could easily exceed 256.

Are you assuming that this gets passed across the air, or just within
a given node?  If within a node, for the kind I'd suggest something
reminiscent of interned symbols.  16 bits would probably be big
enough if each block mapped its arbitrary kind name (a string) to an
interned 16-bit value at block init time.
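To make the interning idea concrete, here's a sketch.  The `KindTable`
name and API are invented for illustration; nothing like this exists in
GNU Radio today:

```python
class KindTable:
    """Interns arbitrary kind-name strings into compact 16-bit ids.

    Hypothetical sketch; a real implementation would live in the
    m-block runtime and be shared process-wide.
    """

    def __init__(self):
        self._by_name = {}   # kind name -> 16-bit id
        self._by_id = []     # 16-bit id -> kind name

    def intern(self, name):
        """Return the 16-bit id for name, assigning one on first use."""
        if name in self._by_name:
            return self._by_name[name]
        kind_id = len(self._by_id)
        if kind_id > 0xFFFF:
            raise OverflowError("16-bit kind space exhausted")
        self._by_name[name] = kind_id
        self._by_id.append(name)
        return kind_id

    def name_of(self, kind_id):
        return self._by_id[kind_id]


# Each block interns its kind names once at init time, then uses the
# compact id in message headers:
table = KindTable()
k1 = table.intern("project-a/gmsk-frame")
k2 = table.intern("project-b/control-msg")
assert k1 != k2
assert table.intern("project-a/gmsk-frame") == k1  # stable mapping
```

The point being that blocks from independent sub-projects can use
arbitrarily descriptive string names with no central registry, while
the on-the-wire (well, in-the-node) header stays small.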

I'd also make sure you've got some way to ensure that the data portion
is aligned on the most restrictive architectural boundary (16 bytes on
x86 / x86-64).
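For instance, if the data portion immediately follows a variable-length
header, the required pad is a one-liner (illustrative sketch only):

```python
ALIGN = 16  # most restrictive boundary on x86 / x86-64 (SSE loads)

def padding_for(header_len, align=ALIGN):
    """Bytes of padding so the data portion starts on an `align` boundary."""
    return (-header_len) % align

# Header lengths that already land on a boundary need no pad;
# everything else gets rounded up to the next boundary.
assert padding_for(0) == 0
assert padding_for(16) == 0
assert padding_for(1) == 15
assert padding_for(21) == 11
```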

Section 4.5.5 Standardized Time:

In reading what's there, I don't see how you're going to solve the
problems that I think we've got.  Perhaps an end-to-end example would
help illustrate your proposal?

For example, Table 4.2 says that "Timestamp" carries the value of the
transmit-side "sampling-clock" at the time this message was
transmitted.  If I'm a "source m-block" generating, say, a test
pattern, what do I put in the Timestamp field?  Where do I get the
value?  Consider the case where the "real" sampling-clock is across
USB or Ethernet.

If I want to tell the ultimate downstream end of the pipeline not to
transmit the first sample of the modulated packet until time t, how do
I do that?  That's essential for any kind of TDMA mechanism.
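For concreteness, here's the sort of thing I have in mind.  The
`tx_time` field and `make_timed_packet` helper are purely hypothetical
names, not anything in the proposal:

```python
def make_timed_packet(samples, tx_time):
    """Bundle modulated samples with the earliest time they may hit the air.

    tx_time is in units of the transmit-side sampling clock.  The final
    downstream block (e.g. whatever talks to the USRP) would hold the
    samples until the hardware clock reaches tx_time.  Hypothetical
    sketch only.
    """
    return {"tx_time": tx_time, "samples": samples}


# A TDMA MAC could then pin a burst to the start of its slot:
slot_start = 1_000_000  # sample-clock ticks; illustrative value
pkt = make_timed_packet([0.0] * 4, tx_time=slot_start)
assert pkt["tx_time"] == slot_start
```

Whether the hold-until-tx_time happens in software or is pushed down
into the hardware is exactly the kind of thing an end-to-end example
would pin down.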

In general, I'm not following this section.  I'm not sure whether
you're trying to figure out the real time required through each
m-block, the algorithmic delay through each block, the NET-to-NET
delay between multiple nodes, or some combination of the above.

Also, an example of how we'd map whatever you're thinking about on to
something that looked like a USRP or other h/w would be useful.

I guess I'm missing the overall statement of intention.  I.e., what do
the higher layers care about, and how does your proposal help them
realize their goals?

Metadata:

General questions about metadata: does an m-block just "copy through"
metadata that it doesn't understand?

Or in the general case, why not just make it *all* key/value pairs?
Why restrict yourself to a single distinguished "data portion"?
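In the all-key/value scheme, copy-through of unrecognized metadata
falls out naturally.  A sketch (the key names are invented):

```python
def process(msg, understood, transform):
    """Transform the entries a block understands; pass the rest through.

    msg is a dict of key/value pairs -- including the payload itself,
    which is just another key rather than a distinguished "data portion".
    """
    out = {}
    for key, value in msg.items():
        out[key] = transform(key, value) if key in understood else value
    return out


msg = {"payload": b"\x01\x02", "timestamp": 42, "vendor-x-foo": "opaque"}
out = process(msg, {"payload"}, lambda k, v: v * 2)
assert out["vendor-x-foo"] == "opaque"          # copied through unchanged
assert out["timestamp"] == 42                   # ditto
assert out["payload"] == b"\x01\x02\x01\x02"    # understood, transformed
```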

Section 4.5.8: Scheduler.

I'm not sure I follow Figure 4.8.  Perhaps once I understand the
timing stuff it'll make more sense.

Section 4.5.9: Memory Management

With regard to reference counting, we've had good luck with the
boost::shared_ptr stuff.  It's transparent, interacts well with
Python, and just works.

Section 4.5.10: Implementation Considerations

* Reentrancy:  I think we need to distinguish between multiple
instances of a block each running in a separate thread, versus a
single instance running in multiple threads.  I don't see an
overwhelming need for a given instance to be reentrant, with the
possible exception of communicating commands to it at runtime.  But in
that case, a thread-safe queue of commands might suffice.
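The command-queue idea is just the standard pattern.  A sketch using
Python's queue.Queue (the Block class and its method names are
invented for illustration):

```python
import queue
import threading


class Block:
    """A single block instance that receives runtime commands via a
    thread-safe queue, so its work method never needs to be reentrant:
    only the (single) worker thread ever touches block state."""

    def __init__(self):
        self._commands = queue.Queue()  # thread-safe FIFO
        self.gain = 1.0

    def post_command(self, name, value):
        """Safe to call from any thread."""
        self._commands.put((name, value))

    def work(self, samples):
        # Drain pending commands at a well-defined point in processing.
        while True:
            try:
                name, value = self._commands.get_nowait()
            except queue.Empty:
                break
            if name == "set_gain":
                self.gain = value
        return [s * self.gain for s in samples]


blk = Block()
t = threading.Thread(target=blk.post_command, args=("set_gain", 2.0))
t.start()
t.join()
assert blk.work([1.0, 3.0]) == [2.0, 6.0]
```

The state changes take effect at block-chosen boundaries, which is
usually what you want anyway for things like gain or frequency steps.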

That's it for now!
