
From: Marcus Müller
Subject: Re: [Discuss-gnuradio] trouble creating PMT uniform vectors in python that are the same type, but differ in value
Date: Mon, 21 Dec 2015 20:39:31 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

Hi Richard,

On 21.12.2015 20:03, Collins, Richard wrote:
Hello Marcus,

Wow, Thanks for such an in-depth reply!
Ha, I had fun writing it :)

I knew I kept a reference for numpy data types about a month ago for a reason, but forgot how/when/why GNU Radio uses them.  Now it's very clear how the whole conversion in python happens based on the data type.
Also, when you write a Python block's work or general_work function, I think the input_items and output_items structures are numpy arrays. Not 100% sure, so please check :)
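A minimal sketch, assuming the gr.sync_block Python API (the block name and the pass-through behaviour are purely illustrative), just to show that the work() buffers are numpy arrays:

import numpy
from gnuradio import gr

class passthrough(gr.sync_block):
    def __init__(self):
        gr.sync_block.__init__(self, name="passthrough",
                               in_sig=[numpy.complex64],
                               out_sig=[numpy.complex64])

    def work(self, input_items, output_items):
        # input_items[0] and output_items[0] arrive as numpy ndarrays of complex64
        out = output_items[0]
        out[:] = input_items[0][:len(out)]
        return len(out)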

I'll print this out and work on the custom block to Tagged Stream Align method. At the end of Friday, I ran into a limit regarding the max number of output samples and figured I'd have to divide my chunks up before sending them to the PDU to Tagged Stream block. But I had some concerns:
  • would they be generated/received in the correct order? (i.e. I'm still unsure about how GNU Radio manages threads, and have never coded anything that's multithreaded), and would I have to tag them with something to indicate their order?
Yes, they should be in-order.
  • would any samples be dropped? If so, would I have to try to figure out a way to throttle the message passing through the use of mutexes or something?
I don't think we're setting a maximum depth for the message handling queues, so, no, I don't expect things to get lost.
Well, that depends on the signal you want to transmit. Usually, you can't just take a "few" samples, make "100 times as many" of them, and actually win something by doing so -- it's still the same signal, just "extended" by 99 samples between each of the original samples.
This method that you describe (sending tagged normal output to the Tagged Stream Align block) does seem to bypass those concerns entirely, and definitely seems a more appropriate method.
Ah, I'd love to tell you that TSBs solve all problems of this, but no.
You seem to have sample chunks larger than what the PDU to Tagged Stream block is able to produce at once, and as mentioned, Tagged Stream Blocks rely on the length-tag-marked sample chunk being passed to the block in one piece. So what PDU to Tagged Stream can't do, Tagged Stream Align can't do either.
My suspicion is that GNU Radio, which doesn't know how big your sample packets are going to be when it instantiates the flow graph, reserves a sample buffer that's simply too small to fit a single one of your packets.
The solution is to go into the "Advanced" tab in GRC and set the minimum output buffer size to something >= 2*max packet size (not too sure about the factor of 2 in there, but usually GNU Radio doesn't ask blocks to produce more than half their output buffer; that might be different for TSBs).
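For reference, a hedged sketch of doing the same thing in Python rather than GRC; the constructor arguments are the GNU Radio 3.7-era ones, and the tag name and packet size are placeholders:

from gnuradio import blocks

max_packet_len = 8192  # assumed worst-case chunk size, purely illustrative

# "PDU to Tagged Stream" emitting complex samples, length tag "packet_len"
pdu_to_ts = blocks.pdu_to_tagged_stream(blocks.complex_t, "packet_len")

# Equivalent of the "Min Output Buffer" field in GRC's Advanced tab:
pdu_to_ts.set_min_output_buffer(2 * max_packet_len)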

I will check out the Tagged Stream Blocks page in the manual, as I see it mentions PDU length tags, and will try to look into the code to see how the Tagged Stream Align block expects its input. (So far, I've been able to create and send small chunks of samples and control signals to the USRP sink block, but have not yet created something that sends regular streaming samples with the Tagged Stream architecture. This will be good practice!)
:)

I am wondering what will happen when I no longer have samples to send, but I suppose if that ends up being a problem, I could just keep sending a pilot tone, and configure the gain on the USRP to be set to 0. But I'm probably getting ahead of myself.
That's actually the reason the USRP sink has a tagged stream block mode (which is what it operates in when you set the "length tag" property):
In that mode, it instructs UHD (which in turn instructs the host-to-device streamer/DSP core to handle that) to expect exactly as many samples as promised by the length tag, and then to go back to "idling".
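A hedged sketch of setting that up in Python (3.7-era gr-uhd; the empty device args and the tag name "packet_len" are placeholders):

from gnuradio import uhd

usrp_sink = uhd.usrp_sink(
    ",".join(("", "")),  # device address string, as GRC generates it
    uhd.stream_args(cpu_format="fc32", channels=range(1)),
    "packet_len",  # length tag name; setting this switches the sink into tagged stream mode
)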

Best regards,
Marcus

Thanks again!

Richard

On Sat, Dec 19, 2015 at 2:57 PM, Marcus Müller <address@hidden> wrote:
Hi Richard,

we need more mails like yours! Sharing recipes and problems is very much appreciated :)

Let me comment on a few things; it's a while back that I worked with the PMT code, though.

Why does pmt.to_pmt(<python list of complex numbers>) return a vector that fails when tested with pmt.is_uniform_vector() or any pmt.is_XXXvector() (e.g. c32 for XXX)?
Because a PMT vector is pretty much like a python list: It can contain any combination of PMT types, for example:

v = pmt.to_pmt(["This is a string", 42, complex(0,-1)])

is perfectly valid, but can't be a uniform vector. Because the input type->output type mapping should be consistent, I consider converting a python list to a PMT vector the right approach.
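For illustration, the checks on that mixed vector come out like this:

import pmt

v = pmt.to_pmt(["This is a string", 42, complex(0, -1)])
print(pmt.is_vector(v))          # True: a generic (non-uniform) PMT vector
print(pmt.is_uniform_vector(v))  # False: the element types are mixed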

I personally find "someone decided it was the right thing to do" a bit of a weak argument, though ;). So, here, to at least illustrate how it's done:
pmt.to_pmt is actually an alias[0] for pmt_to_python.py:python_to_pmt(p) [1], which looks like this¹:

def python_to_pmt(p):
    for python_type, pmt_check, to_python, from_python in type_mappings:
        if python_type is None:
            if p == None: return from_python(p)
        elif isinstance(p, python_type): return from_python(p)
    raise ValueError("can't convert %s type to pmt (%s)"%(type(p),p))

The interesting part is "type_mappings", and that looks like this (just above the python_to_pmt function):
type_mappings = ( #python type, check pmt type, to python, from python
    (None, pmt.is_null, lambda x: None, lambda x: PMT_NIL),
...
    (complex, pmt.is_complex, pmt.to_complex, pmt.from_complex),
...
    (list, pmt.is_vector, pmt_to_vector, pmt_from_vector),
....
    (numpy.ndarray, pmt.is_uniform_vector, uvector_to_numpy, numpy_to_uvector),
)

So, a Python object of type "list" is always mapped to a PMT vector.
Also, you might guess what the trick to pmt.to_pmt'able uvectors is: create a numpy.ndarray and convert it using pmt.to_pmt. numpy has handy conversion functions, and it also lets you allocate ndarrays of a given type:

#let numpy guess dtype from contents:
arr = numpy.array([complex(-1,1), complex(1,-1)])
#or define a ndarray with given shape and type
arr = numpy.ndarray(100, dtype=numpy.complex64)

p = pmt.to_pmt(arr)

If your uvectors might be larger, I'd recommend pre-allocating the numpy.ndarray, i.e. the second approach.
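One detail to watch out for (this is my reading of the dtype mapping in pmt_to_python.py, so please double-check): a plain Python complex becomes a complex128 ndarray, which should map to a c64vector; if you specifically want a c32vector, ask numpy for complex64 explicitly:

import numpy
import pmt

arr32 = numpy.array([complex(1.0), complex(-1.0)] * 50, dtype=numpy.complex64)
p = pmt.to_pmt(arr32)
# If the complex64 -> c32vector mapping holds, this prints True:
print(pmt.is_c32vector(p))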

Anyway, I'll just use pmt.init_c32vector(). I'm trying to create a data payload for a PDU to send from my custom block to a PDU to Tagged Stream block such that large packets of samples (tens of thousands of generated samples) can be sent out with reliable timing. I first looked into controlling a USRP sink with asynchronous commands, but from what I read, that method has some variability on the order of microseconds (which is still awesome, but I think it might not work well enough). I'm probably going about this all wrong, but it's a learning process, so let me know if there's a glaringly obvious method that I'm overlooking. I first looked into eventstream, as described on the oshearesearch website, but I think that's a bit too far into the deep end for a beginner like myself so far.
I'd agree that Tim's gr-eventstream needs quite a bit of understanding on what's happening behind the scenes, but really, it might not be as bad as you feel right now.

Anyway, tagged stream blocks are probably the solution of choice here; however, I think the elegant approach would be to add "length tags" to your "normal" stream (i.e. add a tag to the first item of each "burst" of samples containing the number of samples to come in that burst), and connect your block to the "Tagged Stream Align" block, which makes sure its output is aligned so that it's Tagged Stream Block-compatible. I find that concept non-trivial to explain, but tagged stream blocks are really just "normal" blocks for which it's defined that a) they always consume the whole item "chunk" they get, b) the length of an item chunk is always given by the value of a tag on its first item, and c) there are no samples that don't belong to such a chunk. Maybe this figure explains it better:

[Figure: tagged_stream_align]
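And a hedged sketch of the tagging side, assuming the Python block API; the block name, the tag name "packet_len", and the dummy payload are placeholders:

import numpy
import pmt
from gnuradio import gr

class burst_source(gr.sync_block):
    # Emits bursts of samples, tagging the first item of each burst with the burst length.
    def __init__(self, burst_len=100, length_tag="packet_len"):
        gr.sync_block.__init__(self, name="burst_source",
                               in_sig=None,
                               out_sig=[numpy.complex64])
        self.burst_len = burst_len
        self.length_tag = pmt.intern(length_tag)

    def work(self, input_items, output_items):
        out = output_items[0]
        n = min(len(out), self.burst_len)
        out[:n] = numpy.ones(n, dtype=numpy.complex64)  # dummy payload
        # length tag on the first item of this burst: number of items in the burst
        self.add_item_tag(0, self.nitems_written(0),
                          self.length_tag, pmt.from_long(n))
        return n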

Best regards,
Marcus


¹ Honestly, I just realized that this is a relatively inefficient way of implementing it, but I definitely have code in there, and I surely had a reason to do it that way... hm.
[0] "from pmt_to_python import python_to_pmt as to_pmt" in https://github.com/gnuradio/gnuradio/blob/master/gnuradio-runtime/python/pmt/__init__.py#L59
[1] https://github.com/gnuradio/gnuradio/blob/master/gnuradio-runtime/python/pmt/pmt_to_python.py#L130


On 19.12.2015 04:34, Collins, Richard wrote:
Hello,

I just wanted to share some trouble I had trying to create a pmt uniform c32 vector in python, what I found as the fix, and hope to get some insight as to why things are this way.

Here's an entry from my notes:

THIS CREATES A VECTOR, BUT NOT A UNIFORM OR C32 VECTOR:
testv = pmt.to_pmt([complex(1.0), complex(-1.0)]*50)
pmt.is_vector(testv)             # True
pmt.is_uniform_vector(testv)     # False
pmt.is_c32vector(testv)          # False
THIS FAILS:
testv1 = pmt.make_c32vector(100, [complex(1.0), complex(-1.0)]*50)

THIS SUCCEEDS, but is a PITA:
testv1 = pmt.make_c32vector(100, complex(-1.0))
for i in range(pmt.length(testv1)):
    if i % 2 == 0:
        pmt.c32vector_set(testv1, i, complex(1.0))

THIS IS THE CORRECT WAY TO DO IT:
testv2 = pmt.init_c32vector(100, [complex(1.0), complex(-1.0)]*50)

So, it took me quite a while to figure this out. Why does pmt.to_pmt(<python list of complex numbers>) return a vector that fails when tested with pmt.is_uniform_vector() or any pmt.is_XXXvector() (e.g. c32 for XXX)?

Anyway, I'll just use pmt.init_c32vector(). I'm trying to create a data payload for a PDU to send from my custom block to a PDU to Tagged Stream block such that large packets of samples (tens of thousands of generated samples) can be sent out with reliable timing. I first looked into controlling a USRP sink with asynchronous commands, but from what I read, that method has some variability on the order of microseconds (which is still awesome, but I think it might not work well enough). I'm probably going about this all wrong, but it's a learning process, so let me know if there's a glaringly obvious method that I'm overlooking. I first looked into eventstream, as described on the oshearesearch website, but I think that's a bit too far into the deep end for a beginner like myself so far.

- Richard

