Position determination
A mesh of nearby XOs should be able to calculate their relative 2D and 3D positions, at least roughly.
Approaches:
- Optical tracking
- Sound-delay distance measurement
- what else...
Different approaches are much better at different things, so several seem likely to be implemented. If we architect it right, the estimates from multiple approaches might be combined to yield a more accurate overall estimate.
Getting position from sound-delay distance measurements
One approach is to derive localization from pairwise distance measurements made using sound delay (trilateration). An XO initiates or observes an audible speaker or mic event and announces it over the wireless network. Other XOs hear and recognize the event, and derive distance from the speed-of-sound delay.
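As a back-of-the-envelope sketch (not actual OLPC code), the per-pair distance falls out of the gap between the wireless announcement, which arrives effectively instantly, and the later acoustic arrival. The timestamp names here are hypothetical:

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def distance_from_delay(radio_arrival, audio_arrival):
    """Distance in meters from the extra travel time of the sound.

    Radio propagates ~10^6 times faster than sound, so the wireless
    announcement is treated as arriving at the moment of emission.
    """
    delay = audio_arrival - radio_arrival  # seconds of acoustic lag
    if delay < 0:
        raise ValueError("event heard before it was announced")
    return SPEED_OF_SOUND * delay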
Robustly assembling such distance measurements into location reports has been a research problem for almost a decade. The big issues are noisy measurements and positional uncertainty: the sound measurements suffer from multipath, so their error distributions are multimodal. If XOs move rapidly relative to the measurement rate, predictive motion correction is also needed.
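To make the assembly step concrete, here is a rough sketch of one standard building block: estimating a 2D position by iterative least squares (Gauss-Newton) from ranges to XOs whose positions are already known or estimated. This is an illustration, not code from any of the systems cited below, and the multimodal multipath errors mentioned above would need outlier rejection layered on top:

import math

def locate(anchors, ranges, guess=(0.0, 0.0), iterations=20):
    """Least-squares 2D position from noisy ranges to known anchor points."""
    x, y = guess
    for _ in range(iterations):
        # Accumulate the 2x2 normal equations (J^T J) delta = J^T residual.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), r in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            d = math.hypot(dx, dy) or 1e-9  # predicted range to this anchor
            ux, uy = dx / d, dy / d         # gradient of range w.r.t. (x, y)
            res = r - d                     # measured minus predicted range
            a11 += ux * ux
            a12 += ux * uy
            a22 += uy * uy
            b1 += ux * res
            b2 += uy * res
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:
            break  # degenerate geometry (e.g. collinear anchors)
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y

Three well-spread anchors are the minimum for an unambiguous 2D fix; more than three overdetermine the solution and damp the effect of any single noisy range.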
Apparently there is existing code for XOs to measure mutual distance using sound delay, with less than 10% error in the room where it was tried (cjb, #olpc, 2007-04-17).
Resources:
- Cricket is perhaps the most heavily exercised such system. This paper (PDF) describes some of the lessons learned. We might transliterate some algorithmic code from the Cricket v2 Java source.
- Robust Distributed Network Localization with Noisy Range Measurements (2004) addresses the same(?) problem. Note the unfortunate "degree 10" requirement, though in practice it might not be a problem. Algorithm pseudocode is included.
- GPS-free positioning in mobile ad hoc networks (2001)
- Wireless Sensor Network Localization Techniques - a recent review article. Limited usefulness.
Data fusion
If multiple approaches are used to estimate position, it would be good to combine them in a way that increases accuracy. There is also the architectural challenge of sharing raw data and processed estimates among laptops.
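Assuming the approaches yield independent estimates, each with a variance attached, the simplest fusion rule is inverse-variance weighting. A sketch with a hypothetical interface:

def fuse(estimates):
    """Combine independent position estimates, weighting by certainty.

    estimates: list of ((x, y), variance) pairs, one per approach.
    Returns the fused (x, y) and its (smaller) fused variance.
    """
    wsum = sum(1.0 / var for _, var in estimates)
    fx = sum(x / var for (x, _), var in estimates) / wsum
    fy = sum(y / var for (_, y), var in estimates) / wsum
    return (fx, fy), 1.0 / wsum

Once XOs are moving, a Kalman filter generalizes this, folding the predictive motion correction mentioned above into the same framework.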
Possible uses
- Walking around as input.
- Exposing social dynamics.