Re: [Discuss-gnuradio] A newbie question


From: Dave Emery
Subject: Re: [Discuss-gnuradio] A newbie question
Date: Mon, 8 Apr 2002 23:01:10 -0400
User-agent: Mutt/1.2.5i

On Mon, Apr 08, 2002 at 11:15:22AM -0700, Ettus, Matt wrote:
> > Given the dynamic range of actual signals that I have observed with a 
> > spectrum analyzer I don't understand how they can be coded 
> > with adequate
> > resolution using coders that are affordable to the average
> > hobbyist or
> 
> This is a problem, but not as much as it seems at first.  Since you can
> oversample by a very large amount, you can gain effective resolution after
> decimation.  As long as the strongest signals don't (or rarely) saturate the
> a-d, you can often recover very weak narrowband signals, even if their
> amplitude is less than 1 bit at the high sampling rate.

        This is not so much a direct result of oversampling and
decimation as of the fact that, in the presence of random (AWGN) or at
least uncorrelated noise, it is statistically quite possible to recover
narrowband or highly correlated (e.g. direct sequence spread spectrum)
signals whose amplitude is below the minimum sample step by averaging
multiple samples.  (Oversampling and decimation, done properly, are of
course one form of this kind of averaging.)  Without the random noise
component present, such averaging does not work at all, and signals
near or below the minimum step size are undetectable or badly
distorted.  This is well recognized in high quality audio digitizing,
where a small broadband AWGN dither signal is deliberately added to
the otherwise pure audio for exactly this reason: while the added
noise reduces the signal to noise ratio of the music, it allows weak
harmonics and other small signal components to be reproduced
reasonably accurately rather than being distorted or eliminated
altogether in the sampling process.
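
        A quick numerical sketch of that dither effect (the numpy code
and the specific numbers are illustrative assumptions of mine, not from
the original message): a sine tone whose peak is only 0.4 LSB vanishes
entirely in an undithered quantizer, but survives in the quantized
output once roughly half an LSB of Gaussian noise is added first.

    # illustrative sketch, not from the original post
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2 ** 18                  # number of samples
    k = 1000                     # tone chosen to land exactly on FFT bin k
    lsb = 1.0                    # quantizer step size
    tone = 0.4 * lsb * np.sin(2 * np.pi * k * np.arange(n) / n)  # peak < 1 LSB

    def quantize(x):
        # ideal mid-tread quantizer with step size lsb
        return np.round(x / lsb) * lsb

    bare = quantize(tone)                                      # rounds to all zeros
    dithered = quantize(tone + rng.normal(0.0, 0.5 * lsb, n))  # add ~0.5 LSB AWGN first

    def bin_power(x, b):
        return np.abs(np.fft.rfft(x))[b] ** 2

    print("tone bin, no dither:", bin_power(bare, k))      # exactly zero -- tone lost
    print("tone bin, dithered :", bin_power(dithered, k))  # far above the noise floor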

        Another way of thinking about this is that while the added
random noise may carry significantly more total power than the desired
weak signal, if it is white (flat with frequency) the noise power that
actually falls within the bandwidth of the narrowband signal may still
be significantly less than the signal energy present there, even for
very weak signals, provided the bandwidth is low enough.
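
        (A rough worked number, my own illustration rather than anything
from the original message: if the dither noise is spread evenly across a
500 kHz Nyquist bandwidth and the narrowband signal of interest occupies
only 50 Hz, then only 50/500000 = 1/10000 of the total noise power, i.e.
40 dB less, falls inside the signal's bandwidth after filtering.)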

        But in order to visualize this, one must grasp that filtering a
sampled signal down to a narrow bandwidth is isomorphic with averaging,
and that a decision process in the presence of random (uncorrelated)
noise is statistical, and thus has some very interesting properties.
Among them: even very small correlated signals can be detected in the
presence of much larger amounts of noise by averaging lots of samples,
because the noise tends to average out while the signal tends to
accumulate.
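
        A minimal sketch of that averaging gain (my own illustrative
numbers, not from the post): a repeated waveform nearly 30 dB below the
noise in any single frame dominates once ten thousand frames are
averaged, because the noise standard deviation shrinks with the square
root of the number of frames while the correlated signal does not.

    # illustrative sketch, not from the original post
    import numpy as np

    rng = np.random.default_rng(1)
    frame = 0.05 * np.sin(2 * np.pi * np.arange(256) / 256)   # weak repeated signal
    noisy = frame + rng.normal(0.0, 1.0, (10_000, 256))       # per-frame SNR ~ -29 dB
    avg = noisy.mean(axis=0)                                  # coherent average

    print("single-frame noise std :", noisy[0].std())         # ~1.0, signal invisible
    print("residual after average :", (avg - frame).std())    # ~0.01, well below the
                                                              # 0.05 signal amplitude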


> 
> High resolution sigma-delta audio ADCs and DACs use this principle.  They
> sample at 1 (sometimes 2 or 3) bits, but very fast.  It's a little more
> complicated than what we're talking about, but the same principle.
> 

        A major issue with RF sampling is intermodulation: mixing of
strong narrowband signals in the input passband, due to various
nonlinearities in the A/D circuitry, which produces spurious ghost
mixing products that land on top of a desired weak signal.  This can
obliterate the weak signal even when it would otherwise be quite
detectable by the averaging-in-the-presence-of-noise principle
described above.
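
        A toy illustration of how that happens (the cubic transfer curve
and the tone frequencies below are my own assumptions, standing in for
real A/D nonlinearity): two strong tones at f1 and f2 passed through a
mildly nonlinear curve produce a third order product at 2*f1 - f2 that
lands exactly on, and swamps, a much weaker desired signal there.

    # illustrative sketch, not from the original post
    import numpy as np

    n = 1 << 16
    t = np.arange(n)
    f1, f2 = 3000, 3400          # strong tones (expressed as FFT bin numbers)
    fweak = 2 * f1 - f2          # 2600: where the weak desired signal sits
    strong = 0.4 * np.sin(2 * np.pi * f1 * t / n) + 0.4 * np.sin(2 * np.pi * f2 * t / n)
    weak = 1e-4 * np.sin(2 * np.pi * fweak * t / n)

    def bin_mag(x, b):
        return np.abs(np.fft.rfft(x))[b] * 2 / n   # tone amplitude in bin b

    linear_out = strong + weak
    nonlin_out = linear_out + 0.02 * linear_out ** 3   # mild cubic nonlinearity

    print("weak signal, linear path :", bin_mag(linear_out, fweak))  # ~1e-4
    print("same bin, nonlinear path :", bin_mag(nonlin_out, fweak))  # ~1e-3, IM3 on top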

        And the real RF environment in most places contains multiple
strong narrowband signals, many of which appear and disappear randomly
as transmitters are keyed on and off.  So chasing the holy grail of a
DSP based software radio that digitizes whole chunks of spectrum
straight from the antenna, with all the signals in it both large and
tiny, is actually going to be limited more by intermodulation in the
A/D than by raw resolution, at least if one's goal is spurious free
performance comparable to traditional high dynamic range analog
receivers.
        
> > that would be practical in consumer electronics.  Are log-coders 
> > used - something similar to mu-law coders that are used for audio?
> > How wide are the coders?  How much do they cost?
> 
> I've never heard of log coders being used for anything other than low
> quality audio.

        The usual reason for log coding (or, more precisely, usually a
kind of floating point representation of the data rather than true
logarithmic measurement steps) has almost always been to reduce the
number of bits required to represent wide dynamic range signals while
still preserving as much fidelity as possible.  The emphasis is on
compressing the size of the resulting data (lower bit rate in
transmission, fewer bits to store) rather than on allowing the use of
lower resolution/accuracy A/D converters.  And many if not most
applications of such coding are implemented as compression in the
digital domain, after digitizing the signal with a linear A/D converter
that has enough bits of resolution to measure the smallest step in the
resulting "floating point" representation.

        In fact it is typically true that the costly and difficult thing
to do reliably over temperature, voltage, and process variations in an
A/D converter is to measure small deltas (small step sizes) repeatably
and linearly at high sample speeds, whereas outputting extra bits tends
to be cheap, at least until one gets up into flash A/D territory where
each output level has an individual comparator associated with it.

        In any case, if one thinks about digitizing a strong signal, a
weak signal, and some AWGN on an A/D converter with logarithmic steps,
one realizes that for most samples (as the strong signal swings above
and below zero) the effective step size is quite large, and only during
those brief intervals when the composite signal is close to zero is the
step size small.  Compare that to a traditional linear A/D with a
constant step size, and one quickly recognizes that there is a big
difference in effective average step size whenever a strong dominant
signal is present.  And it should be intuitively clear that the bigger
the step size, the bigger the AWGN has to be for the statistical
averaging to work; and the bigger the AWGN, the more noise power there
is in a given bandwidth, the higher the noise floor, and the stronger a
signal must be to be detectable.
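
        The same mu-law curve makes the step size point concrete (again
with my own illustrative numbers): with a strong sine swinging the
converter over most of its range, the average local step of an 8-bit
log-style quantizer is roughly two orders of magnitude coarser than its
finest step near zero, and a few times coarser than a plain linear
quantizer with the same number of codes.

    # illustrative sketch, not from the original post
    import numpy as np

    MU, CODES = 255.0, 256
    x = 0.8 * np.sin(2 * np.pi * np.arange(4096) / 4096)   # strong dominant signal

    # local step of a companded quantizer ~ (compressed-domain step) divided by
    # the slope of the mu-law curve at the instantaneous input level
    slope = (MU / np.log1p(MU)) / (1.0 + MU * np.abs(x))
    mulaw_step = (2.0 / CODES) / slope
    linear_step = 2.0 / CODES

    print("mean mu-law step / finest mu-law step:", mulaw_step.mean() / mulaw_step.min())
    # ~130x: most samples see a far coarser step than the finest one near zero
    print("mean mu-law step / linear step       :", mulaw_step.mean() / linear_step)
    # ~2.8x coarser on average than a uniform quantizer with the same code count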

        But enough hand waving and foot stomping.


> 
> Matt

-- 
        Dave Emery N1PRE,  address@hidden  DIE Consulting, Weston, Mass. 
PGP fingerprint = 2047/4D7B08D1 DE 6E E1 CC 1F 1D 96 E2  5D 27 BD B0 24 88 C3 18



