
Re: [Discuss-gnuradio] Changes to Audio Sink / Source ?

From: Michael Dickens
Subject: Re: [Discuss-gnuradio] Changes to Audio Sink / Source ?
Date: Thu, 1 Dec 2005 22:48:48 -0500

Eric - Good discussion (IMHO); I'm learning a lot about GR's audio sink/source model. I now have a nicely working audio-osx-sink, using the parameters discussed below plus another which I could probably be talked out of (i.e. "approximate number of samples to buffer"). It handles blocking and buffering quite nicely. Now on to the "source". - MLD

Going back a few messages, you wrote:
One problem with the existing strategy and this one, is that there is
no way to query the capabilities of the card, and then make a decision.

Another strategy may be to just supply the device name to the
constructor, and then provide methods to query the capabilities, and
then some more methods that will set them.
With ALSA and JACK the number of audio channels and their capabilities
is pretty much open ended.  The user provides the "device name" which
maps through a highly indirect path down to either virtual or physical
hardware, the capabilities of which may then be queried.
I think this is going to be a two-phase
solution.  The first phase would be to get OSX audio working with the

  * double sampling_rate (in an ideal world the type would be rational...)
  * const std::string device_name
  * bool do_block

...args to the constructor (possibly ignoring the don't block case),
then in phase II we rework the API for all the audio interfaces.  It's
easy enough to add the do_block arg to all of them now.  We can make
the non blocking case work a bit later.

Yes, I agree that the "sink" or "source" should be instantiated with an optional parameter to make blocking user selectable (default of TRUE to block). My bad on putting it in "work".

I also believe that the -actual- number of output channels desired by the program controlling the source should be passed when instantiating the sink; this could be done via an integer parameter or via a "device configuration" type, e.g. a typedef'd enum. The audio sink's instantiation would then translate this parameter into a device name, a number of channels, or whatever.

Whether or not this is done, the program controlling the source should check that the sink provides enough -actual- channels to do whatever it (the program) wants to do. For example, examples/python/audio/dial_tone.py assumes 2 channels, but does not check "sink.input_signature().max_streams()" to verify that 2 channels are actually supported.

Neither of these two parameters should change during program execution (should they? I can't think of a scenario), and they do provide the sink with more information on what's expected of it. Thus, instead of dynamically checking the number of inputs to "work", that number would be known a priori, which would make coding "work" easier.

Thus, the API for instantiating audio devices would be as you say:

  * double sampling_rate (in an ideal world the type would be rational...)
  * const std::string device_name
  * bool do_block
  * device_config_t device_config

Where "device_config_t" is a typedef'd enum including "mono", "stereo", "4.1 surround", "5.1 surround", etc. Or could this maybe be incorporated into the "device_name"? Hmmm... maybe that's an option to reduce the number of API arguments.

On Nov 30, 2005, at 11:00 PM, Eric Blossom wrote:
On Wed, Nov 30, 2005 at 09:56:26PM -0500, Michael Dickens wrote:
My second implementation bounds the list
(FIFO ring) to a default of 20 items,

Q: how big is an item?  A single floating point sample?

An item is one incoming "work" buffer, whatever its size. Upon further thought, this parameter has been changed to "approximate number of samples to buffer per channel", with a default of one second's worth of data before blocking or dropping. I made it a user option at instantiation, but I might take it away entirely because:

In "ok to block mode", you probably want to keep no more than 4
transfers in flight, where a transfer is something on the order of a
millisecond's worth of samples.

The amount of data to keep around seems to be (from watching the latency between GR and OSX's AudioUnit) 2-3% of the sample rate (a maximum of about 3 ms; I'm working on getting this down further). That's just for OSX; other OSes could be different.

Note that I cannot change the OSX audio model, nor GR's model, and hence I have to use an intermediate buffer to control data flow. I can control the latency only a little bit, but given how small it is I'm not particularly concerned for the types of data we're currently throwing around (voice, maybe IP, other real-time types).

I assume there's some way to block
and wait for outstanding transfers to complete.

Blocking to wait for transfers to happen would be done inside "work". And it's quite simple to implement in OSX.

Aren't there interrupts or timers available to go off ...

Nope.  It all depends on the data rate through the sources and sinks,
over which the sources and sinks have complete control.

Ah, this explains some things (to me, at least). Good to know such facts; that changes most of my earlier comments. Since there are no interrupts or timers, blocking is the mechanism for keeping the data rate at the correct (average) pace, and given GR's design, that blocking will be done at the sink. Now that I know, no issues from me on following this "protocol".
