Augmented reality

Augmented reality is overlaying computer-based information on your view of the real world.

Picture holding an XO up to the sky, and the XO displaying the stars you would see there (if you weren't indoors, if the Sun were down, etc.). Or pointing an XO down towards the ground, and seeing through the Earth to the continents on the other side.

The Inertial navigation peripheral hack provides a compass, accelerometers, and a gyro. The gyro makes it too expensive to build into every XO, but the compass and accelerometers are relatively inexpensive. Here are some ideas for software that might be written for it.
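
As a rough illustration of how these sensors might feed an activity, here is a minimal Python sketch that turns an accelerometer reading and a compass heading into the direction the XO is aimed, which the ideas below could use. The read_accelerometer() and read_compass() helpers and the axis convention are assumptions standing in for whatever the real peripheral driver exposes; this is not existing OLPC code.

import math

def read_accelerometer():
    """Hypothetical driver call: gravity direction (gx, gy, gz) in the device
    frame, with +x to the right of the screen, +y toward the top of the
    screen, and +z out of the back of the XO (the direction the XO is aimed)."""
    return (0.0, 0.2, -0.98)        # placeholder reading

def read_compass():
    """Hypothetical driver call: magnetic heading in degrees, 0 = north, 90 = east."""
    return 45.0                     # placeholder reading

def aim_direction():
    """Return (azimuth, elevation) in degrees for where the XO is pointed."""
    gx, gy, gz = read_accelerometer()
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    # Elevation of the viewing (+z) axis above the horizon: gravity points
    # down, so a gravity component along -z means the XO is aimed upward.
    elevation = math.degrees(math.asin(-gz / norm))
    # Use the compass heading directly as azimuth; a real implementation
    # would tilt-compensate the magnetometer using the accelerometer data.
    azimuth = read_compass() % 360.0
    return azimuth, elevation

print(aim_direction())              # with the placeholders: roughly (45.0, 78.5)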

Ideas

See the stars

Wherever you point the XO, it shows that section of the sky, even if you point it down.
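
Picking which patch of sky to draw is a coordinate conversion: the aim direction (azimuth and elevation from the sensors), the observer's latitude, and the local sidereal time give the right ascension and declination at the centre of the view. Below is a sketch of the standard horizontal-to-equatorial transform, assuming the latitude and sidereal time are available from elsewhere (they are not part of the peripheral).

import math

def aim_to_equatorial(az_deg, alt_deg, lat_deg, lst_deg):
    """Convert azimuth/altitude (degrees, azimuth 0 = north, 90 = east) into
    right ascension / declination in degrees, given the observer's latitude
    and the local sidereal time in degrees."""
    az, alt, lat = map(math.radians, (az_deg, alt_deg, lat_deg))
    # Declination from the standard horizontal-to-equatorial transform...
    dec = math.asin(math.sin(lat) * math.sin(alt) +
                    math.cos(lat) * math.cos(alt) * math.cos(az))
    # ...then the hour angle, and right ascension from the sidereal time.
    ha = math.atan2(-math.cos(alt) * math.sin(az),
                    math.sin(alt) * math.cos(lat) -
                    math.cos(alt) * math.sin(lat) * math.cos(az))
    ra = (lst_deg - math.degrees(ha)) % 360.0
    return ra, math.degrees(dec)

# Aimed due south, 40 degrees up, from latitude 40 N when the local sidereal
# time is 180 degrees: the view is centred near RA 180, Dec -10, a point on
# the local meridian just south of the celestial equator.
print(aim_to_equatorial(180.0, 40.0, 40.0, 180.0))

A star-view activity would then pull stars from a catalogue whose coordinates fall inside the camera's field of view around that centre point.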

See the Earth

When you point the XO horizontally or downwards, you see through the Earth: crust, core, and the geography on the other side.
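
The simplest version of this view centres on the point straight through the Earth from the observer. A tiny sketch, assuming the child's latitude and longitude are known (entered by hand or from a GPS); handling arbitrary aim angles would instead mean intersecting the view ray with the globe.

def antipode(lat_deg, lon_deg):
    """Return the point on the exact opposite side of the Earth."""
    anti_lat = -lat_deg
    anti_lon = lon_deg - 180.0 if lon_deg > 0 else lon_deg + 180.0
    return anti_lat, anti_lon

# For a child near Kigali (about 1.9 S, 30.1 E), aiming straight down
# "shows" the middle of the Pacific Ocean:
print(antipode(-1.9, 30.1))     # (1.9, -149.9)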

Respecting relative head position

Rather than assume the user's eyes are a fixed distance from the XO, and centered on the screen, it would be nice to:

  • determine how far the user's eyes are from the screen. So when the eyes move closer to the screen, the program displays a larger angular slice of the scene.
  • determine the vertical/horizontal offset of the user. So the user can change their angle of view through the XO "porthole", simply by shifting their head. There will be field of view limitations due to the XO's off-center camera.

This can be done using OpenCV, apparently with acceptable performance.
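
One plausible shape for that head-tracking code, using the stock Haar-cascade face detector in OpenCV's modern Python bindings: distance is estimated from the apparent face width with a pinhole-camera approximation, and the head offset from where the face sits in the frame. The FACE_WIDTH_CM and FOCAL_LENGTH_PX constants are made-up calibration values, and this is a sketch rather than tested XO code.

import cv2

FACE_WIDTH_CM = 15.0        # assumed average face width (calibration value)
FOCAL_LENGTH_PX = 600.0     # assumed focal length in pixels; calibrate per camera

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_position(frame):
    """Return (distance_cm, x_offset, y_offset) for the nearest face, or None.

    x_offset and y_offset are fractions of the frame size measured from the
    frame centre, which the porthole renderer can map to a viewing angle."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face = nearest
    # Pinhole-camera approximation: apparent width shrinks linearly with distance.
    distance_cm = FACE_WIDTH_CM * FOCAL_LENGTH_PX / w
    frame_h, frame_w = gray.shape
    x_offset = (x + w / 2.0 - frame_w / 2.0) / frame_w
    y_offset = (y + h / 2.0 - frame_h / 2.0) / frame_h
    return distance_cm, x_offset, y_offset

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(head_position(frame))
cap.release()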

See also

Resources