
## Re: [Discuss-gnuradio] Delay locked loop for the two-clock problem

From: Fons Adriaensen
Subject: Re: [Discuss-gnuradio] Delay locked loop for the two-clock problem
Date: Thu, 27 Oct 2016 22:35:32 +0000
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Oct 26, 2016 at 11:54:17PM +0200, Marcus Müller wrote:

> > The actual frequency of the clock used to measure time doesn't
> > matter as long as it has reasonable short term stability (and both sides
> > use the same clock of course).
> Exactly; that's what I was worried about. I don't have any data on the
> frequency stability of PC clocks – but I'm 100% sure a USRP's oscillator
> should be better.

Only the short-term stability matters. Let W and R be the write and read
ends of the buffer, at which we obtain timestamps (tW_n, kW_n) and
(tR_n, kR_n), where the 't' are read from time_now(), and the 'k' are
cumulative counts of samples, bytes, or whatever items. Now if the control
loop is at the reading side (it doesn't have to be), then whenever a block
is read we obtain (tR, kR). What we want then is kW(tR), so we can compute
kW(tR) - kR, which is the 'logical' number of items buffered at time tR.
'Logical' meaning that instead of block writes and reads we assume imaginary
constant-rate writes and reads. Given two pairs (tW_n, kW_n) and
(tW_n+1, kW_n+1), all we need is linear interpolation to find kW(tR).
In practice, all 't' will be a small fraction of a second apart, so only
the short-term stability of time_now() matters. The only thing we need to
ensure is that the 't' are a sufficient number of clock ticks apart, so
that their difference isn't dominated by round-off error.
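A minimal sketch of the interpolation step described above (not actual code from any implementation; the names tW, kW, tR, kR follow the notation in the text):

```python
# Estimate the 'logical' buffer fill kW(tR) - kR by linearly
# interpolating between two (time, count) samples from the write side.

def logical_fill(write_samples, tR, kR):
    """write_samples: two (time, count) pairs (tW_n, kW_n), (tW_n1, kW_n1)
    bracketing or preceding tR; returns the logical fill kW(tR) - kR."""
    (tW_n, kW_n), (tW_n1, kW_n1) = write_samples
    # Assume imaginary constant-rate writing between the two timestamps.
    rate = (kW_n1 - kW_n) / (tW_n1 - tW_n)
    kW_at_tR = kW_n + rate * (tR - tW_n)
    return kW_at_tR - kR

# Hypothetical numbers: the writer logged (1.000 s, 48000 items) and
# (1.100 s, 52800 items), i.e. 48000 items/s; the reader samples at
# tR = 1.050 s having consumed kR = 49000 items.
fill = logical_fill([(1.000, 48000), (1.100, 52800)], 1.050, 49000)
# kW(1.050) = 48000 + 48000 * 0.050 = 50400, so fill = 1400 items
```

The write timestamps only need to be close enough together that the clock's short-term stability holds over the interpolation interval, exactly as argued above.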

Any jitter of the clock used by time_now() (and any round-off error) has
exactly the same effect as jitter of the actual write/read event times,
and is filtered by the DLL. And whatever remains is filtered again by
the main control loop (see example below).

> Hm, at 100MS/s, the integration periods to get stable rate estimates
> relative to CPU clock would probably get pretty long, sample-wise,
> wouldn't they?

It doesn't depend on the sample rate. What gets timestamped are the
block write and read operations on the buffer. It doesn't matter what
the block contains, 256 samples at 48 kHz or 25600 at 4.8 MHz. What
matters is the average block period, and how much variation it has.

> In other words, while we still need to aggregate samples
> to get a block of samples temporally long enough for the CPU time
> estimate to be stable, buffers are already flowing over.

You mean when the system starts processing? We don't wait, but just
start by assuming the actual rate is the nominal average one. After
the first iteration a one-time correction to the buffer state is
made so that it corresponds to the target value of kW - kR. After that
the control loop takes over.

> Also, I'm still
> confused: Let's say we have two rates that we need to match, $r_1$ and
> $r_2$, with $\frac{r_1}{r_2} - 1 = \epsilon_{1,2}$ for pretty small
> values of $\epsilon_{1,2}$, i.e. relatively well matched. If we now use
> a third rate, $r_3$ (namely, the clock resolution of the PC), whose
> $\epsilon_{1,3}, \epsilon_{2,3} \gg \epsilon_{1,2}$, how does that work
> out? I feel like that will add more jitter, mathematically?

The rates being 'well matched' is the normal situation. It doesn't
matter what the resampling ratio is. All the control loop does is
apply a small correction to the nominal ratio which is known a priori.

> >> I think it'll be a little unlikely to implement this as a block that you
> >> drop in somewhere in your flow graph.

Really there is no problem with that under the assumption stated
previously.

In RF engineering terms, the resampling does indeed add some phase
noise, but only within the loop bandwidth (0.1 Hz is a typical value).
It is really similar to a PLL: the phase noise of the LO within the
PLL bandwidth is added to the signal. If you have another PLL
downstream with a lower bandwidth, that one may well fail to lock.
But there is really no reason to do adaptive resampling in the
RF domain. Just before the audio sink is the right place.

> > In theory it would be possible. The requirement then (assuming RF in and
> > audio out) is that everything upstream is directly or indirectly triggered
> > by the RF clock, and everything downstream by the audio clock. Don't know
> > if that's possible in GR (still have to discover the internals).

> Not really, there's no direct triggering.

It doesn't have to be direct.
A module will execute (i.e. its work() is called) when it has sufficient
input and sufficient space in its output buffers. Whenever that happens,
it is triggered by some other module providing input or space for output.
So in the end everything is triggered by events produced by the HW, even
if it may take some time for these events to 'ripple through'.

> > The only assumption for this to work is that there is no 'choking point',
> > i.e. all modules are fast enough to keep up with the signal.
>
> But that assumption fails with GNU Radio in general! There's always
> faster and slower blocks.

You seem to misunderstand what I mean by 'no choking point'. It just
means that on average your CPUs can perform the work that is required.
If that is the case, then

1. the system will be idle part of the time, just waiting for more
input, and
2. at any point there will be a well defined and on average constant
and known data rate (at least for sampled signals).

> ... and we're back at the question of how much we can trust the CPU
> clock as a base for estimating latencies :)

All modern PCs have a clock that is guaranteed to be continuous and
monotonic, with sub-microsecond resolution. Of course this
is not the sort of clock you'd use to generate an RF signal (its phase
noise could well be horrible). But whatever jitter this clock has
is orders of magnitude less than the time jitter of the events it
is used to timestamp.
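On Linux this is typically CLOCK_MONOTONIC; in Python the same clock is exposed as `time.monotonic()`. A small illustration (assuming a CPython interpreter on a POSIX-like system):

```python
# The kind of clock meant here: continuous, monotonic, fine-grained.
# Its absolute rate may be slightly off from true seconds, but as
# argued above only its short-term stability matters.
import time

t0 = time.monotonic()
t1 = time.monotonic()
# Guaranteed never to go backwards, even across NTP adjustments.
assert t1 >= t0

info = time.get_clock_info("monotonic")
print(info.monotonic)    # True
print(info.resolution)   # typically well below a microsecond
```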

An example may make this a bit more clear. Assume we are receiving
an audio stream from the network and need to resample this to the
actual sample rate of our sound card. Let's say we have 200 packets
per second (of 5 ms each). For a cross-atlantic link typical jitter
on the arrival time of the packets will be some tens of milliseconds.
Every now and then a packet will arrive 300 ms late. That means we
need the average fill state of our buffer to be at least 300 ms if
we want to avoid interruptions in the signal. This sets the target
value for kW - kR (as above) and the buffer size (a bit more).

Now assume a packet does arrive 300 ms late. So the error seen by
the DLL is 300 ms. Now the value of w1 is 2 * pi * B * dt, with B
the bandwidth of the DLL and dt = 5 ms. Let's set B to 0.1 Hz,
then w1 ~= 1/300. So of the 300 ms error 1 ms remains in t0, which
is the value seen by the main control loop. If this has a similar
bandwidth, the effective error that remains is again divided by
300, so we have something close to 3 microseconds. This error is
passed on to the resampler which will try to correct it with a
time constant of around 10 / B, i.e. one second. So the relative
correction to the resampling ratio will be around 3 ppm.
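The arithmetic of that example can be checked directly (w1, B, dt and the 300 ms error are the quantities named above; the ~1/300 attenuation per loop stage follows from w1 = 2·pi·B·dt):

```python
# Check the numbers in the example: a packet arrives 300 ms late,
# DLL bandwidth B = 0.1 Hz, update period dt = 5 ms (one packet).
import math

B   = 0.1      # loop bandwidth, Hz
dt  = 0.005    # packet period, s
err = 0.300    # late-arrival error, s

w1 = 2 * math.pi * B * dt          # ~0.00314, i.e. roughly 1/300
residual_dll  = err * w1           # ~1 ms remains in t0 after the DLL
residual_loop = residual_dll * w1  # ~3 us after the main control loop

print(w1, residual_dll, residual_loop)
```

With a resampler time constant of about 10 / B = 10 s... actually around one second as stated, a residual of ~3 µs spread over that interval gives the quoted ~3 ppm correction to the resampling ratio.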

Ciao,

--
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)