Position determination

From OLPC
Revision as of 02:23, 17 April 2007 by MitchellNCharity (tweaks, typos)

A mesh of nearby XOs should be able to roughly calculate their relative positions in 2D and 3D.

Approaches:

  • optical tracking
  • sound-delay distance measurement
  • what else...

The approaches have very different strengths, so several seem likely to be implemented. If we architect it right, the estimates from multiple approaches might be combined to yield more accurate estimates.

Getting position from sound-delay distance measurements

One approach is to derive localization from pairwise distance measurements made using sound delay (trilateration). An XO initiates or observes an audible speaker event and announces it over wireless. Other XOs hear and recognize the event, and derive distance from the speed-of-sound delay.
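The delay-to-distance conversion itself is simple. A minimal sketch, assuming the emitting and listening XOs share a common clock (e.g. established via the wireless announcement); the function and constant names are illustrative, not an existing OLPC API:

```python
# Sketch: pairwise distance from sound delay. Clock skew between the
# two laptops adds directly to the distance error.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C; varies with temperature

def distance_from_delay(emit_time, arrival_time):
    """Estimate distance in metres from the sound propagation delay.

    emit_time: when the speaker event occurred (announced on wireless).
    arrival_time: when the listener's microphone detected the event.
    """
    delay = arrival_time - emit_time  # seconds
    return SPEED_OF_SOUND * delay

# A 29 ms delay corresponds to roughly 10 m:
print(round(distance_from_delay(0.0, 0.029), 1))  # 9.9
```

Note that at 343 m/s, a 1 ms timing error already corresponds to about a third of a metre, so timestamping the audio event precisely matters more than almost anything else.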

Robustly assembling such distance measurements into location reports has been a research problem for almost a decade. Dealing with noisy measurements and positional uncertainty is the big issue. The sound measurements are subject to multipath propagation, so their error distributions are multimodal. If XOs are moving rapidly relative to the measurement rate, predictive motion correction is also needed.
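The core geometric step can be sketched as a least-squares fit: given distances to a few XOs at (assumed known) positions, find the point that best explains the ranges. This toy version uses gradient descent on the squared range residuals and only the standard library; a real implementation would have to cope with multimodal multipath errors, which a single least-squares fit does not:

```python
# Illustrative 2D trilateration from (possibly noisy) range measurements.
import math

def trilaterate_2d(anchors, dists, start=(0.0, 0.0), steps=200, lr=0.1):
    """Minimize sum of squared range residuals by gradient descent.

    anchors: list of (x, y) positions of the reference XOs.
    dists:   measured distances to each anchor, in the same units.
    """
    x, y = start
    for _ in range(steps):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            r = math.hypot(x - ax, y - ay)
            if r == 0.0:
                continue  # avoid division by zero at an anchor
            e = r - d                   # range residual
            gx += 2 * e * (x - ax) / r  # d/dx of e**2
            gy += 2 * e * (y - ay) / r
        x -= lr * gx
        y -= lr * gy
    return x, y

# Three anchors and exact distances to a point at (4, 3):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
dists = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
x, y = trilaterate_2d(anchors, dists, start=(1.0, 1.0))
print(round(x, 1), round(y, 1))  # 4.0 3.0
```

With only pairwise distances and no fixed anchors, the mesh can recover relative geometry only up to translation, rotation, and reflection; pinning one XO at the origin and a second on an axis is the usual way to fix that ambiguity.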

Apparently there is existing code for XOs to measure mutual distance using sound delay, with under 10% error in the room where it was tried. (cjb, #olpc, 2007-04-17)

Resources:

Data fusion

If multiple approaches are used to estimate position, it would be nice to combine them in a way that increases accuracy. There is also the architectural challenge of sharing raw data and processed estimates among laptops.
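One simple fusion rule, sketched below under the assumption that each approach reports an estimate with an error variance and that the errors are independent and roughly Gaussian: weight each estimate by the inverse of its variance. This is the minimum-variance linear combination under those assumptions; the numbers are purely illustrative:

```python
# Inverse-variance weighting of independent estimates of one coordinate.

def fuse(estimates):
    """estimates: list of (value, variance) pairs.

    Returns the fused value and its variance, which is always
    smaller than the smallest input variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total

# Say acoustic ranging gives x = 4.2 m (variance 0.25) and optical
# tracking gives x = 3.8 m (variance 1.0):
x, var = fuse([(4.2, 0.25), (3.8, 1.0)])
print(round(x, 2), round(var, 2))  # 4.12 0.2
```

The same rule extends coordinate-by-coordinate, or to full covariance matrices (at which point it becomes the measurement-update step of a Kalman filter, a natural fit if motion correction is also needed).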

Possible uses

  • Walking around as input.
  • Exposing social dynamics.