1) As a test, I am sending a GMSK signal (created by a signal generator, very low noise) at low symbol rates into the USRP and plotting the complex baseband that reaches the PC. One would expect to see a nice tight unit circle, and at low decimation rates (-d 16, etc.) this is indeed the case. However, when I increase the decimation rate, the unit circle grows "thicker", which seems to indicate amplitude distortion. At -d 256 the distortion is quite bad. Any idea what could be going on here? This shows up with either the TVRX daughtercard or the BasicRX. Something going on with the DDC in the FPGA? Filter ripple?
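One way to quantify the "thickness" numerically rather than eyeballing the plot: since a clean GMSK baseband should have (nearly) constant envelope, the relative spread of the sample magnitudes directly measures the amplitude distortion. The helper below is just an illustration I put together, not part of GNU Radio:

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Coefficient of variation of |z| over a buffer of complex baseband
// samples: ~0 for a tight unit circle, larger as the ring gets thicker.
double magnitude_spread(const std::vector<std::complex<float>>& samples) {
    double sum = 0.0, sum_sq = 0.0;
    for (const auto& z : samples) {
        double m = std::abs(z);
        sum += m;
        sum_sq += m * m;
    }
    double n = static_cast<double>(samples.size());
    double mean = sum / n;
    double var = sum_sq / n - mean * mean;
    return std::sqrt(std::max(var, 0.0)) / mean;
}
```

Logging this value at each decimation setting would show whether the spread really grows with -d, and by how much.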
2) I am considering making a modified version of gr_quadrature_demod that would better handle residual carrier frequency error. The output of gr_quadrature_demod_cf::work should average to zero over the long term if there is no residual carrier, but will be nonzero otherwise. So the basic idea is to average the "d_gain * gr_fast_atan2f(imag(product), real(product))" values and subtract the average out. Is this a reasonable approach? A lot of this depends on how many samples the work function receives to operate on. Who calls the work function of the various blocks in the flow graph, and what determines the size of noutput_items? Is there a way to set bounds on noutput_items? How do I ensure that I'm averaging over a sufficient window?
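For what it's worth, here is a rough standalone sketch of the averaging idea. The class name, the `alpha` parameter, and the single-pole IIR averager are my own illustrative choices, not GNU Radio API; the point is that a recursive average carries state across work() calls, so the effective averaging window is set by alpha rather than by however many samples the scheduler hands you in noutput_items:

```cpp
#include <vector>

// Tracks the long-term mean of the demodulated frequency samples
// (which equals the residual carrier offset) and subtracts it out.
class freq_offset_remover {
public:
    explicit freq_offset_remover(float alpha) : d_alpha(alpha), d_avg(0.0f) {}

    // Process one buffer of quadrature-demod output in place. State
    // persists between calls, so buffer size doesn't matter.
    void work(std::vector<float>& samples) {
        for (float& x : samples) {
            d_avg += d_alpha * (x - d_avg);  // single-pole IIR average
            x -= d_avg;                      // remove estimated offset
        }
    }

    float estimated_offset() const { return d_avg; }

private:
    float d_alpha;  // small alpha => long effective window (~1/alpha samples)
    float d_avg;    // running estimate of the residual carrier term
};
```

With this structure, the "sufficient window" question decouples from noutput_items entirely: alpha just has to be small enough that the tracker doesn't follow the modulation itself, only the DC term.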