
Re: [Discuss-gnuradio] Audio Questions


From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Audio Questions
Date: Sun, 27 Nov 2005 15:24:18 -0800
User-agent: Mutt/1.5.6i

On Sat, Nov 26, 2005 at 04:50:24PM -0500, Michael Dickens wrote:
> 
> Taking "sink" as an example, it looks like what I need to do is modify
> 
> 1) "audio_osx_sink::audio_osx_sink" to open and start the audio  
> channels, allocate any buffers, setup audio parameters
> 2) "audio_osx_sink::~audio_osx_sink" to close the channels, delete  
> buffers, etc...
> 3) "audio_osx_sink::work" to "play" the incoming data to the output  
> audio channels
> 
> Questions about "audio_osx_sink::work":
> int audio_osx_sink::work (int noutput_items,
>   gr_vector_const_void_star &input_items,
>   gr_vector_void_star &output_items)
> 
> a) It looks like the audio input (input_items) is in floats; is this  
> true?

True.  Audio output is also in floats, in the range [-1.0, +1.0].

> b) On CoreAudio, I can open channels to use floats as inputs, and  
> thus could use the audio input (input_items) directly for audio  
> playback.  I believe that CoreAudio doesn't copy the input buffer,  
> but rather uses it as is; this can be done either synchronously or  
> asynchronously.  But: Do I need to copy the input data to a temporary  
> buffer before playing, or will the input buffer (input_items) have a  
> long enough life to allow for its playing?

If CoreAudio doesn't copy the data, you will need to copy it yourself.
The contents of the buffers passed to ::work are undefined after
::work returns.
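
For example, a sink's ::work could copy the samples into a queue that
feeds CoreAudio's render callback and then return.  This is only a
minimal sketch; d_queue and its enqueue method are hypothetical names,
not anything that exists in audio_osx_sink today:

  int
  audio_osx_sink::work (int noutput_items,
                        gr_vector_const_void_star &input_items,
                        gr_vector_void_star &output_items)
  {
    // samples arrive as floats in [-1.0, +1.0] (one channel shown)
    const float *in = (const float *) input_items[0];

    // copy them out before returning -- the buffer is only valid for
    // the duration of this call.  d_queue is an assumed member (e.g.
    // a ring buffer) that the CoreAudio callback drains.
    d_queue->enqueue (in, noutput_items);

    return noutput_items;     // we consumed everything we were given
  }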

> Can any other subroutine use this buffer simultaneously, or do all
> subroutines need to make a copy of it before doing their processing?

Not sure I understand this question.  The buffers passed to work are valid
only for the duration of the work call.

> c) When does ::work get called?  Is it on a regular period, no matter  
> how much data has come in since the last time it was called?  Or when  
> enough data is available to fill a given buffer size?

When it gets called depends on a lot of things that are pretty hard to
predict.  It depends on all of the other signal processing taking
place, rates of production of upstream sources, downstream sinks, etc.  
It doesn't have a regular period, but for most typical usage it
will be called every few milliseconds.

A couple of rules of thumb (but see "Who controls pacing" below):

(1) An audio source should block until it can return some non-zero
amount of data from the underlying audio interface.  If it asks for
1024 samples and gets only 64, it should return after the 64 samples,
not reissue a blocking call for the remaining 1024-64 samples.
(There's a sketch of this after rule (2) below.)

(2) An audio sink should consume everything passed to it.  Use a ring
buffer or something similar to enqueue the data for the lower level
audio interface.
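
As a rough sketch of rule (1) (hypothetical names only; read_samples
stands in for whatever blocking read the underlying audio layer
provides):

  int
  audio_osx_source::work (int noutput_items,
                          gr_vector_const_void_star &input_items,
                          gr_vector_void_star &output_items)
  {
    float *out = (float *) output_items[0];

    // Block until at least one sample is available, then hand back
    // whatever we got; don't loop trying to fill the whole request.
    int n = read_samples (out, noutput_items);   // assumed helper
    if (n <= 0)
      return -1;          // error / end of stream

    return n;             // may well be less than noutput_items
  }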


Topic for further discussion:  Who controls pacing?

In many flow graphs there is more than 1 clock domain active.  A
simple example is the broadcast FM receiver.  In this case, there is
the master oscillator on the USRP that is controlling the rate that
samples are produced by the USRP.  At the same time, there is a
different oscillator on the audio card that is controlling the rate at
which the sound card is consuming audio samples.  These oscillators
are not synchronized, and commonly differ by something on the order of
1 part in 10,000 (at a 32 kS/s audio rate, that's roughly 3 samples
per second of drift).  The audio clock could be slow or fast relative
to the USRP clock.

Going the other direction (FM transmitter), we have a similar
condition:  the audio card produces samples at a rate determined by
one oscillator, while the USRP consumes samples at a rate determined
by a different oscillator.

In these two examples, the USRP clock should "control", and the data
fed to or consumed from the audio card should be dynamically adjusted
to account for the difference in clock rates.

In a different scenario, say the "dial tone" example, there is only a
single clock domain, the audio card, and in that case the audio clock
should control pacing.


These scenarios lead me to believe that audio sources and sinks
should take an optional parameter that indicates whether or not they
control the pacing.  In the cases where they don't control pacing,
audio sinks and sources should never block, and should "do the right
thing" (for some definition of "right thing") when asked to produce
samples when there are none, or to consume samples when there's no
room in the queue to the lower level audio driver.  I'm leaning
towards "dont_block" as the name of the attribute.

Comments?

Eric



