
Re: [gpsd-dev] Errors, questions, and FAQ notes


From: John W. Nicholson
Subject: Re: [gpsd-dev] Errors, questions, and FAQ notes
Date: Sun, 31 Mar 2013 16:52:58 -0700 (PDT)

Well, I'm not a data expert either (nor an expert on GPSs, programming, or gpsd). But a short while ago I did ask one of the authors whether they had tested the GPSs in different orientations. I hadn't thought of all the things Ed and Greg raise here, and they clearly know far more about this than I do. So I'm wondering: should I buy four USB GPSs and a hub and give it a try? How would I merge the data with the current GPS tools and gpsd? Or does anyone think they can make it work?
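
(A rough sketch of how the merging might look: gpsd multiplexes all
attached receivers on one socket and tags each TPV report with its
source device, so a small client can collect one fix per device and
average them.  This uses gpsd's Python client module; it is untested,
and the field names should be checked against the gpsd version in
use.)

    # Sketch only: average simultaneous fixes from several receivers
    # attached to one gpsd instance, e.g. started as
    #   gpsd /dev/ttyUSB0 /dev/ttyUSB1 /dev/ttyUSB2 /dev/ttyUSB3
    # Each TPV (time-position-velocity) report names its source in the
    # "device" field; verify field names against your gpsd version.
    import gps

    session = gps.gps(host="localhost", port="2947")
    session.stream(gps.WATCH_ENABLE | gps.WATCH_NEWSTYLE)

    latest = {}  # device path -> (lat, lon): most recent fix per receiver

    while True:
        report = session.next()
        if report['class'] != 'TPV' or not hasattr(report, 'lat'):
            continue
        latest[getattr(report, 'device', 'unknown')] = (report.lat, report.lon)
        if len(latest) >= 4:  # one fix from each of the four receivers
            lats = [p[0] for p in latest.values()]
            lons = [p[1] for p in latest.values()]
            # Naive degree-space average; fine for antennas a few
            # metres apart, where degrees are locally linear.
            print("mean position: %.7f, %.7f"
                  % (sum(lats) / len(lats), sum(lons) / len(lons)))
            latest.clear()

(As far as I know gpsd itself reports each device separately and does
no cross-receiver averaging, so the merging has to happen client-side,
as above.)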

 
John W. Nicholson


From: Greg Troxel <address@hidden>
To: Ed W <address@hidden>
Cc: address@hidden; address@hidden; address@hidden; address@hidden; address@hidden
Sent: Sunday, March 31, 2013 8:08 AM
Subject: Re: [gpsd-dev] Errors, questions, and FAQ notes


Ed W <address@hidden> writes:

> http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6166295

> Hmm, after a quick scan of his results I'm unconvinced.  I think the
> author has a misunderstanding of Gaussian noise?

I skimmed this, and I didn't see them claim the errors are Gaussian,
just that the error sources are complicated, so (it's implied) one
might as well apply the central limit theorem.  The actual statement is
about equally likely positions within a circle, which doesn't make
sense to me.  The appeal to the CLT seems reasonable to me, except that
it glosses over that having or not having ephemeris is locally
non-Gaussian and that over short time scales errors are more
correlated.

> He notes that after 1 min all the receivers exhibit some kind of
> random walk around the known position, but the average position is
> closer with 2 GPS inputs than 1.  Then he compares over 4 mins and
> gets the same shape of results, but all shifted down in accuracy. It
> seems to me therefore that he is simply sampling the random walk more
> frequently by using X GPS units rather than a single GPS input, hence
> he converges on the centre faster (probably at a rate of
> 1/sqrt(samples))

Your logic is probably partly right, but an important issue is the
temporal correlation of errors.  It's well known that three important
error sources are clock offsets, orbital prediction errors, and the
ionospheric delay.  Those tend to be stable over very short timescales.
So averaging positions taken hourly for a month (24*30 = 720 samples)
should do much better than the same 720 samples taken at 10 Hz, i.e.
over 24*30/10 = 72 seconds.  I say you are probably partly right
because your proposed procedure will help with errors that are
uncorrelated across 0.1 s intervals.
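
A quick toy simulation (not from the paper) makes the point: model the
error as a first-order Gauss-Markov (AR(1)) process with a 30-minute
correlation time, a stand-in for slowly varying clock/orbit/ionosphere
errors, and compare the same 720 samples taken hourly vs. at 10 Hz.
All the numbers are invented for illustration.

    # Toy illustration: temporal correlation defeats naive 1/sqrt(N)
    # averaging.  AR(1) error model with made-up parameters.
    import numpy as np

    def ar1_error(n, dt, rng, tau=1800.0, sigma=3.0):
        """n samples of AR(1) noise, step dt seconds, correlation time tau."""
        phi = np.exp(-dt / tau)
        x = np.empty(n)
        x[0] = rng.normal(0, sigma)
        innov = rng.normal(0, sigma * np.sqrt(1 - phi**2), n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + innov[i]
        return x

    rng = np.random.default_rng(42)
    trials = 2000
    for label, dt in [("720 samples, hourly (a month)", 3600.0),
                      ("720 samples at 10 Hz (72 s)  ", 0.1)]:
        means = [ar1_error(720, dt, rng).mean() for _ in range(trials)]
        print("%s  std of mean: %.2f m" % (label, np.std(means)))

Hourly samples are nearly independent, so the standard deviation of
the mean should shrink by roughly 1/sqrt(720); the 10 Hz samples are
almost perfectly correlated over 72 seconds, so averaging them should
buy nearly nothing.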

> 10 Hz GPS units are available; I wouldn't be surprised if these
> exhibit a random walk over 60 seconds which is roughly like having
> 10x 1 Hz GPS units over the same period?

I would be surprised, but please do the experiment and publish :-)

One of the variables between receivers will be which satellites they
have ephemeris for.  With a single unit, that will be constant.

> I think a more promising area of research is to look at integrating
> the velocity vector from the GPS with the change in position.  It's
> not clear to me whether this is already used in a standard Kalman
> filter inside the GPS, hence whether this is already done.  The
> velocity vector seems to be often quite accurate (at non-stationary
> speeds); coupled with knowledge of the change-in-position vector (and
> better yet some vehicle inertial nav), you might be able to integrate
> the information more accurately.
>
> It's on my todo list to play with some of the Kalman filtering
> variants which incorporate significant lookback history

That sounds interesting.  Commodity navigation receivers are quite
opaque.
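
For concreteness, here is a generic textbook sketch of the
position+velocity fusion Ed describes: a 1-D constant-velocity Kalman
filter that fuses a position fix with a (typically more accurate)
Doppler-derived velocity each epoch.  The noise figures are invented,
and this is not a claim about what any particular receiver actually
does internally.

    import numpy as np

    dt = 1.0                            # epoch interval, s
    F = np.array([[1.0, dt],            # state transition: pos += vel*dt
                  [0.0, 1.0]])
    H = np.eye(2)                       # we measure both position and velocity
    Q = np.diag([0.01, 0.1])            # process noise (unmodelled accel)
    R = np.diag([9.0, 0.01])            # meas. noise: 3 m pos, 0.1 m/s vel

    x = np.zeros(2)                     # state: [position m, velocity m/s]
    P = np.diag([100.0, 10.0])          # initial uncertainty

    def step(x, P, z):
        """One predict/update cycle for measurement z = [pos, vel]."""
        x = F @ x                       # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R             # update
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = step(x, P, np.array([2.5, 1.0]))  # noisy pos, clean velocity
    print(x, np.sqrt(np.diag(P)))

Because R weights velocity far more heavily than position, the filter
effectively integrates the velocity and uses position fixes only to
rein in long-term drift, which is the kind of integration Ed describes.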

Specific comments on the paper (no, I wasn't a reviewer, but this is
what I would have said):

  Presumably a position in WGS84 (for the days of the experiment) was
  obtained from NGS, but this isn't stated.  I have no issue with the
  local coordinate system approach, but the reference to Great Circle
  should have been to geodesics on the ellipsoid.
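
  (For illustration of that distinction, the ellipsoid-vs-sphere
  difference at this scale, assuming pyproj is available; the
  coordinates are arbitrary example points, not from the paper:

      import math
      from pyproj import Geod

      lon1, lat1 = -71.0, 42.0
      lon2, lat2 = -70.9, 42.1

      # Geodesic on the WGS84 ellipsoid
      az_fwd, az_back, d_ellipsoid = Geod(ellps="WGS84").inv(lon1, lat1,
                                                             lon2, lat2)

      # Great circle on a sphere of mean Earth radius (haversine)
      R = 6371008.8
      p1, p2 = math.radians(lat1), math.radians(lat2)
      a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
           + math.cos(p1) * math.cos(p2)
           * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
      d_sphere = 2 * R * math.asin(math.sqrt(a))

      print("geodesic: %.2f m, great circle: %.2f m"
            % (d_ellipsoid, d_sphere))

  Over the ~14 km here the two should differ by a few metres, the same
  order as the errors under discussion, which is why the distinction
  matters.)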

  Were the receivers powered up at the beginning of the periods?  With
  almanac, presumably, but with ephemeris?  Or had they been on
  continuously?  For how long?

  Was SBAS (WAAS, presumably) used?  Acquisition time issues for that?

  How separated in time were the various measurements?

  There is no discussion of statistical significance of the results.
  Maybe I skimmed too fast, but I didn't see how many samples over how
  many days were used to produce the final plots.  The table seems to
  indicate three trials of 4 minutes, which seems far too few.

  I don't understand why all 4 weren't run simultaneously, with the
  analysis then done on 1, 1/2, 1/2/3, and 1/2/3/4, to avoid noise
  from receiver #1's results changing each time.

  I don't understand why the relative receiver distances weren't
  measured and adjusted for; it's not that hard and it's needlessly
  confounding.

  There is no discussion of the quantization noise of the receiver
  output format.  Given the plots, this is probably not that important,
  but with Garmin position format (e.g. on a GPS II+) the quantization
  noise is significant.
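
  (Back of the envelope: the quantization floor implied by an output
  format's resolution.  The 4-decimal-minute NMEA example below is an
  assumption about a typical sentence, not the paper's receivers:

      import math

      def nmea_quantization_m(decimals_of_minutes, lat_deg):
          """Position step in metres for lat/lon printed as ddmm.m..."""
          step_deg = (10.0 ** -decimals_of_minutes) / 60.0  # one LSB, degrees
          lat_step = step_deg * 111320.0                    # ~m per degree
          lon_step = lat_step * math.cos(math.radians(lat_deg))
          return lat_step, lon_step

      print(nmea_quantization_m(4, 42.0))  # ~0.19 m lat, ~0.14 m lon

  At 4 decimals of minutes the floor is ~0.2 m, negligible next to
  metre-level GPS error; a coarser format would not be.)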

  There is no link to the raw data, which might answer some of the
  above.

  There is no discussion of height.  I realize that orthometric heights
  are far far more difficult than horizontal, but analysis of
  ellipsoidal heights would have been interesting.

  It would be really interesting to compare 1 receiver for 20 minutes
  with the average of 4 for 5 minutes (repeated 100 times at different
  times of day to avoid the same orbital geometry).  That would let
  you compute a figure of merit for multiple receivers vs. extra time
  (adjacent extra time); a toy version is sketched below.
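
  (This comparison is easy to mock up with the same AR(1) toy model as
  above: give every receiver a shared, slowly varying error plus
  independent per-receiver noise.  All parameters are invented:

      import numpy as np

      rng = np.random.default_rng(1)

      def ar1(n, dt, tau, sigma):
          phi = np.exp(-dt / tau)
          out = np.empty(n)
          out[0] = rng.normal(0, sigma)
          for i in range(1, n):
              out[i] = (phi * out[i - 1]
                        + rng.normal(0, sigma * np.sqrt(1 - phi ** 2)))
          return out

      est_a, est_b = [], []
      for _ in range(500):
          shared = ar1(1200, 1.0, tau=1800.0, sigma=2.0)  # common-mode, 20 min
          # (a) one receiver averaged over 20 minutes
          est_a.append((shared + rng.normal(0, 1.0, 1200)).mean())
          # (b) four receivers averaged over the first 5 minutes
          est_b.append((shared[:300] + rng.normal(0, 1.0, (4, 300))).mean())
      print("std, 1 rx / 20 min: %.2f m" % np.std(est_a))
      print("std, 4 rx /  5 min: %.2f m" % np.std(est_b))

  The shared error averages down only with time, so the four receivers
  mainly beat down the independent part -- which is exactly the
  correlation question raised below.)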

  Given all the above, it seems like the right thing to do is to log
  data from 4 receivers for a week straight and then analyze.  Or at
  least 24 hours.  The point of the paper is really about the degree
  of correlation of errors: are they about the receiver, or about the
  incoming signals?  In other words, to what extent can one buy down
  observation time with more receivers?  Is this approach reliable?
  But that's addressed only obliquely.


Despite my criticisms, I think it's a contribution that the authors
did this and shared the results; they've shown that this technique
helps more than I expected.  (I expect averaging over time to help a
lot, much more so than multiple receivers at the same time.)

Greg


