On Mon, Sep 1, 2008 at 4:26 PM, Richard Jaeger
<address@hidden> wrote:
I have been attempting to calibrate my USRP system. I am running four
channels and feeding the various channels to FFT sinks following
de-interleaving and channel filtering. I am using the Basic RX boards,
and the PGA in front of the ADC is set at 20 dB.

For large decimation, the sensitivity of the system seems to be much
larger than I expected, and it is a function of the decimation factor D.
Overall, I can't account for a gain of between 60 and 80 dB.

For decimations below 96, the gain is fairly constant, changing +/- a
couple of dB. However, for decimations above about 96, the voltage gain
in dB grows approximately linearly with D (for example, when I change D
from 100 to 160 the overall gain increases by about 5.5 dB; when I
change D from 160 to 222, the voltage gain increases by another 5.5 dB).
So the gain itself is growing exponentially with the decimation rate.
I'm not an expert by any means, but the CIC documentation tells me the
gain is:

g = (RM)^N

where R is the decimation rate, M is the differential delay, and N is
the number of stages.
Since M and N are fixed (M=1, N=4) and you're changing R from 160 to
roughly 220, the expected change in gain in dB is:

10 * [ log10(220^4) - log10(160^4) ] ≈ 5.5 dB
Do you agree?
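
For what it's worth, here is a quick Python sanity check of that number.
It is only a sketch of the calculation above: the M=1, N=4 values, the
decimation factors, and the 10*log10 convention are all taken from this
thread, not from the USRP/FPGA sources.

import math

def cic_gain_db(R, M=1, N=4):
    # Gain of an N-stage CIC decimator with differential delay M and
    # decimation rate R: g = (R*M)^N, expressed here as 10*log10(g).
    return 10 * math.log10((R * M) ** N)

# Change in gain when the decimation goes from 160 to about 220,
# matching the numbers worked out above.
print(cic_gain_db(220) - cic_gain_db(160))   # ~5.5 dB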