3D scanning creates a 3D model of a real-world object or environment.
There are many possible approaches, and they overlap considerably.
OLPC has its usual advantages (many collaborating laptops available, and thus many cameras, CPUs, and screens; known hardware; open source; ready availability of recognizable objects, namely the XOs themselves) and its usual disadvantages (hardware other than XOs may be unavailable; low-quality cameras; weak CPUs and GPUs).
Discussion of approaches
Combine camera views from more than one laptop.
Laptops can be moved quite far apart, permitting very wide-baseline stereoscopy, useful for scanning distant subjects such as buildings, clouds, and mountains.
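The benefit of a wide baseline can be seen in the standard stereo depth relation Z = f·B/d (depth equals focal length times baseline over disparity): at a fixed measurable disparity, a larger baseline B yields usable depth estimates at much greater range. A minimal sketch, with illustrative camera values not taken from any XO specification:

```python
# Sketch: depth from disparity for a two-camera stereo pair.
# Z = f * B / d, where f is the focal length in pixels, B the baseline
# (distance between the two cameras, here two XOs), and d the disparity
# in pixels between the matched image points. All values are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in metres for one matched point pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Same 2-pixel disparity, two different baselines: moving the laptops
# from 0.1 m apart to 10 m apart extends the measurable range 100-fold.
near = depth_from_disparity(500.0, 0.1, 2.0)   # 25 m
far = depth_from_disparity(500.0, 10.0, 2.0)   # 2500 m
```

This is why two XOs across a field can rangefind a building that a single head-width baseline cannot.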
Use a moving shadow as improvised structured light: for example, the Sun shadow of a hand (a point) or of a yardstick (a line), waved over an object.
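The geometry behind shadow scanning: if the plane swept by the stick's shadow is known (for instance, recovered from where the shadow falls on the ground), then each camera pixel lying on the shadow edge defines a ray, and intersecting that ray with the shadow plane yields a 3D surface point. A hedged sketch of the ray-plane intersection step; the function name and values are illustrative, not from any OLPC codebase:

```python
# Sketch of the core shadow-scanning computation: intersect a camera
# ray with the known shadow plane to recover a 3D point on the object.
# Vectors are plain (x, y, z) tuples; values are illustrative.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    through plane_point having normal plane_normal; return the 3D point."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the shadow plane")
    t = sum((p - o) * n for p, o, n
            in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        raise ValueError("shadow plane is behind the camera")
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking along +z; shadow plane passes through
# (0, 0, 2) with normal (0, 0, 1). The central pixel's ray hits it at z = 2.
pt = ray_plane_intersection((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 1))
```

Sweeping the shadow across the object and repeating this per edge pixel per frame builds up the point cloud.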
Seeing oneself stereoscopically from across the room is said to be a powerful, out-of-body-like experience. We don't yet have a "viewing stereo images" page, or software, but with a mirror, or practice fusing, one can view 3D on an XO; the high (mono) resolution is quite nice for this. Not something to be stared at for extended periods, but otherwise fun. Add two more XOs to serve as eyes, and one has a potentially nice activity. Move the eyes apart, and you gain stereo depth perception of distant objects (clouds, etc.). This item is somewhat off topic, since it doesn't require generating a 3D model: your brain does that.