Re: [Discuss-gnuradio] set_relative_rate

From: Tom Rondeau
Subject: Re: [Discuss-gnuradio] set_relative_rate
Date: Fri, 7 Feb 2014 10:10:47 +0000

On Thu, Feb 6, 2014 at 9:14 PM, Miklos Maroti <address@hidden> wrote:
> Hi Tom,
> Thanks for the answer! I have considered both approaches already. What
> you are saying is that set_relative_rate cannot capture this scenario,
> so it is impossible to set different relative rates, right?

Right; relative_rate is defined as a single value for the entire
block. You can, however, still consume and produce at different rates
on each input/output stream inside a general block.
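To make the distinction concrete, here is a toy model (plain Python, outside GNU Radio, with made-up names) of what a general block's work function does when it consumes at a 1000:1 ratio across two inputs even though relative_rate is a single number:

```python
def general_work(in0, in1, n_out):
    """Toy model of a general block's work function: to produce n_out
    output items it consumes n_out items from input 0 and n_out // 1000
    items from input 1 (a 1000:1 ratio).  Each chunk of 1000 samples
    from input 0 is combined with one 'scale' item from input 1.
    Returns (output, consumed_from_in0, consumed_from_in1)."""
    # Limit n_out by what the inputs can actually supply,
    # then work in whole 1000-item chunks only.
    n_out = min(n_out, len(in0), 1000 * len(in1))
    n_out -= n_out % 1000
    out = []
    for chunk in range(n_out // 1000):
        scale = in1[chunk]
        out.extend(x + scale for x in in0[chunk * 1000:(chunk + 1) * 1000])
    return out, n_out, n_out // 1000

out, c0, c1 = general_work(list(range(2000)), [10, 20], 2000)
```

In a real general block the last line of bookkeeping would instead be calls to consume(0, n) and consume(1, n // 1000).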

> Where exactly are the relative rates used in gnuradio core? Only for
> the buffer size calculations or are they also used during runtime?

Yes, mostly the initial buffer size calculation. It's also used to
update a tag's item offset as it propagates through a rate-changing
block.
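Roughly, that offset bookkeeping looks like this (illustrative Python, not GNU Radio's actual code):

```python
def propagate_tag_offset(in_offset, relative_rate):
    """Map a tag's absolute item offset on the input stream to the
    corresponding offset on the output of a rate-changing block.
    relative_rate = output items / input items.
    (Illustrative only; the GNU Radio scheduler does this internally
    when it propagates tags through a block.)"""
    return int(round(in_offset * relative_rate))

# A tag at input item 500 of a 10:1 decimator (relative_rate = 0.1)
# lands at output item 50; through a 1:4 interpolator it moves the
# other way.
```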

> By the way, the vector approach does not scale ideally: if I increase
> the size of vectors (to 100000 samples) or use set_output_multiple
> with that large value then the performance of the block is degraded,
> and I do not really understand why. If the block does pure streaming
> (e.g. add) and does not require large quantities of data, then
> everything works fine. I do not want to use messages, because the data
> is processed (filtered, length changed, etc) along with other
> transformations. Anyhow, what I am getting at is that there is no
> good way of processing very large blocks of data.

Use gr-perf-monitorx (or, in GRC, look for the Performance Monitor) if
you have ControlPort enabled and built properly [1][2]. You'll likely
see the buffer in front of your block backing up while the output
buffer stays fairly empty: the scheduler has to accumulate a lot of
data before your block can run at all, so you end up starving the
follow-on blocks.

Another model is to handle the state internally: let data flow in from
each input stream and keep internal buffers. This might let you work
with the scheduler better.
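A sketch of that internal-buffering model (toy Python; class and method names are made up for illustration):

```python
class RatioBuffer:
    """Sketch of the 'keep internal buffers' model: accept data from
    both streams in whatever chunk sizes arrive, and emit a block of
    1000 outputs only once 1000 items from stream 0 and 1 item from
    stream 1 have accumulated.  In a real block this would live
    inside general_work, which could then consume whatever the
    scheduler offers on each call."""

    def __init__(self, ratio=1000):
        self.ratio = ratio
        self.buf0, self.buf1 = [], []

    def push(self, items0, items1):
        self.buf0.extend(items0)
        self.buf1.extend(items1)
        out = []
        # Emit as many complete ratio-sized blocks as we can.
        while len(self.buf0) >= self.ratio and self.buf1:
            scale = self.buf1.pop(0)
            block, self.buf0 = self.buf0[:self.ratio], self.buf0[self.ratio:]
            out.extend(x * scale for x in block)  # placeholder processing
        return out
```

The scheduler never sees the 1000:1 requirement; the block just takes whatever arrives and holds partial data until a full chunk is available.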

I'm interested to see whether you can find an approach that works well
for your problem. So far, what you're trying to do seems like a
somewhat non-standard use case for GNU Radio, but I can see more
people wanting to do this kind of processing in the future. It would
be good to know both the limits and why they exist.

[1] http://gnuradio.org/doc/doxygen/page_ctrlport.html
[2] http://gnuradio.org/redmine/projects/gnuradio/wiki/PerformanceCounters


> Miklos
> On Thu, Feb 6, 2014 at 11:15 AM, Tom Rondeau <address@hidden> wrote:
>> On Wed, Feb 5, 2014 at 7:02 PM, Miklos Maroti <address@hidden> wrote:
>>> Hi Guys,
>>> Is it possible to write a c++ block that takes 2 input streams,
>>> produces 1 output streams, but to generate 1000 outputs it needs 1000
>>> inputs of the first kind and 1 input of the second kind? How do I set
>>> the set_output_rate? Does it apply to both input streams? How can I
>>> ensure that the scheduler does not create too big buffer for the
>>> second type of input?
>>> Miklos
>> There are a couple of ways to do this. It might be easiest for you to
>> use vectors of samples on input port 0. The output could be another
>> vector or you could convert it to a stream again here. This is
>> assuming that you always want to process 1000 samples at a time for
>> every 1 sample on input port 1. You set your IO signature like:
>> gr::io_signature::make2(2, 2, 1000*sizeof(type0), 1*sizeof(type1))
>> The output signature is either 1000*sizeof(type0) and you can use a
>> gr::sync_block (because 1 output item is 1 input item) or your output
>> signature is 1*sizeof(type0) but you'll use a gr::sync_interpolator
>> because now you'll be producing 1000 items after taking in a stream of
>> 1 item. See vector_to_stream for a model of this second approach.
>> You might also want to consider the tagged stream interface instead
>> of an indicator on stream 1. You would then have one input stream but
>> look for the tag that tells you to process your 1000 output samples.
>> This would be a more general approach if you aren't always using 1000
>> items at a time.
>> Tom
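The vector framing in the first suggestion above can be sketched outside GNU Radio like this (pure Python; the real blocks are gr::blocks::stream_to_vector and gr::blocks::vector_to_stream):

```python
def stream_to_vectors(stream, vlen=1000):
    """Group a flat sample stream into vlen-sized vectors, in the
    spirit of GNU Radio's stream_to_vector.  Leftover samples (a
    partial vector) stay unconsumed, much as the scheduler would
    leave them in the input buffer."""
    n = len(stream) // vlen
    vectors = [stream[i * vlen:(i + 1) * vlen] for i in range(n)]
    return vectors, stream[n * vlen:]

def vectors_to_stream(vectors):
    """Flatten vectors back into a sample stream (vector_to_stream)."""
    return [x for v in vectors for x in v]

# With vlen=1000, the downstream block then sees exactly one item on
# port 0 per item on port 1, so a 1:1 sync_block (vector out) or a
# sync_interpolator (stream out) fits, as described above.
```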
