Talk:Hardware specification


Software Specification

You have done a nice job on providing a very detailed look at the hardware for the OLPC. However, the Software specification page is not so happy. Perhaps some of the OLPC insiders could see that the Software spec page is brought up to the same level of usefulness as the Hardware spec?

Hardware Specification

Would someone care to update the hardware specifications on the main hardware page instead of in the "Talk" section? There are a number of things that need to be changed: faster processor, SD card slot, integrated 640x480 webcam, etc.

The processor hasn't changed - the description on the main page is correct - JordanCrouse (Talk to me!) 14:53, 31 August 2006 (EDT)
Nope, the AMD specs page says that the GX2-500 actually runs at 366 MHz, not 400 MHz as is said just below...
The "real speed" is correct now, but the marketing name is wrong; it should not contain "GX2", only "GX".
CPU
AMD Geode 400 MHz x86
Memory
128 MB 133 MHz DRAM
BIOS
LinuxBIOS stored on 512 KB flash ROM
Storage
512 MB SLC NAND Flash memory
Video
693 x 520 pixel color display capable of switching to a 1200 by 900 sunlight-readable monochrome mode
Network
internal 802.11 wi-fi with mesh networking capability
Keyboard
various depending on target language. They will include two 5-key cursor-control pads.
Mouse
a touchpad pointing device
Interfaces
4 USB ports
Power
input jack for DC from 10 up to 25 volts. A human-powered generator, probably foot-powered, will be bundled with the unit. The internal rechargeable battery has 5 NiMH cells.
Sound
built-in stereo speakers and microphone

Maintenance and diagnostics

A tacit assumption of the OLPC project is that these systems will enjoy limited - or even zero - field maintenance. The best one might hope for is that some onboard diagnostics could warn of impending problems to help the end-user plan for the demise (and possibly the remanufacture) of his unit. In the First World, most are used to tossing out electronic stuff long before it wears out, on account of "technical obsolescence". But OLPC units may see use long after they are "obsolete", for complex reasons that don't care about "Moore's Law". And who knows what setbacks will befall schools in the places these machines might go? Consider a country that suffers a decade-long "technology" embargo, for example. -docdtv

Units should be designed to keep maintenance as modular and straightforward as possible. Documentation and diagnostics should be exhaustive and could be included in the firmware, or be as accessible and independent from the software as possible. -Chas

Yes, most sources of failure of conventional machines are absent in our machine, and we hope they will last for a long time. But given that many of the failures of mechanical components do not exist in this machine, it isn't clear how much warning can be had of impending failure. This leaves the backlight lifetime; we do not yet know how long this will be, and it depends strongly on patterns of use we don't yet fully understand. Backlight degradation will be a gradual process, however. We are investigating the possibility of a replaceable backlight assembly.

The process of making it as cheap as possible for manufacturing also helps greatly in enabling field repair, to the extent that today's technologies permit. - jg

Mass storage

I have not built devices with flash memory, but I understand that it can't be rewritten an indefinite number of times - maybe just a million? (Something done twice a minute is done a million times in only a year.) I assume that measures are taken to load-balance the wear on the various blocks of the memory. But IF one suspects that the flash memory may wear out before other laptop components, it would be useful if the end-user could determine how much life was left in it. When the day came the unit did not function, a hint that flash memory wear killed it might keep remanufacture cheap enough to make sense. (At least "run the numbers" before dismissing my concerns!) Of course the same goes for any other parts (e.g. crank) whose usage (and inferred wear) might be measured. (cf. S.M.A.R.T. technology for PCs with hard drives. -docdtv

Linux supports the JFFS2 file system, which addresses the issue of the limited number of Flash write cycles. (Incidentally, JFFS2 was developed by David Woodhouse of Red Hat.)

Given the good wear leveling of JFFS2, compute the amount of wear the flash can absorb as 0.5 gigabytes times on the order of 100,000 cycles. You actually get a lot of writing before the first bad flash block should appear due to wear (50,000 gigabytes of writing). The chips likely have a higher random failure rate than that. - jg
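jg's arithmetic can be checked in a few lines. The endurance figures are the ones quoted in this thread (512 MB of SLC NAND, on the order of 100,000 erase cycles per block, ideal wear leveling); the sustained write rate used for the lifetime estimate is my own assumption for illustration:

```python
# Back-of-envelope flash endurance estimate.
capacity_gb = 0.5        # 512 MB expressed in gigabytes
erase_cycles = 100_000   # SLC NAND endurance figure quoted above

total_write_gb = capacity_gb * erase_cycles
print(total_write_gb)    # 50000.0 GB of writes before wear-out

# At an assumed sustained 1 MB/minute of writing, day and night:
mb_per_year = 1 * 60 * 24 * 365             # minutes in a year
years = total_write_gb * 1024 / mb_per_year
print(round(years, 1))   # on the order of a century
```

Even a pessimistic constant write load leaves the expected wear-out point far beyond the laptop's planned five-year-plus life, which supports jg's point that random chip failure dominates.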

Recycling of components - even "Frankenstein" repairs in remote field locations with multiple broken units - might well be designed into the system from the ground up. Moreover, it could also prove useful to benchmark software applications for their life-shortening properties. For example, an application that did frequent automated backups of a document being edited might wear the flash memory more than worthwhile. Remember, in the CRT age, "screensavers" once EARNED their name, rather than serving other purposes on cheap displays easily tossed when worn! -docdtv

Yup, we expect frankenlaptops will be common, and are doing what we can to make that reasonable. Similarly, we expect folks will hack the battery packs in the field. -jg

The "prospectus" for the OLPC laptop looks to a five-year-plus life to justify it as a textbook replacement alone. This is hardly the only reason for getting one of these machines. - jg

I think that flash memory "forgets" after a decade, so perhaps the "permanent" part of the flash content might also be regenerated episodically, too. If it is a big portion of the total flash memory, perhaps "rotating the tires", by swapping which physical part of memory it dwells in, would allow superior wear-balancing as well. -docdtv

JFFS2 in essence does this already; it occasionally moves blocks to keep the wear level, so as long as there is some write activity, eventually all the bits should end up being rewritten over a long period. -jg
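The block-rotation idea can be sketched in a few lines. This is an illustrative toy (a greedy "least-worn block" allocator), not JFFS2's actual log-structured algorithm:

```python
# Toy wear-leveling sketch: every write is directed at the block
# with the lowest erase count, which is the core idea behind
# keeping wear uniform. NOT JFFS2's real algorithm - just the
# principle it embodies.
NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS

def write_block():
    # choose the block with the fewest erases so far
    target = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1
    return target

for _ in range(800):
    write_block()

# With leveling, wear ends up perfectly uniform across blocks:
print(erase_counts)  # -> [100, 100, 100, 100, 100, 100, 100, 100]
```

A real flash file system must additionally relocate long-lived ("cold") data so that the low-wear blocks holding it rejoin the pool, which is the "occasionally moves blocks" behaviour jg describes.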

Upkeep will be the problem of the host country, but maybe one should think about it now, before the design is complete. Will host countries want to revise the "permanent" memory content one or more times (e.g. via a USB port) before the machine can no longer be used? Maybe First World "Gen-Netters" like revving freeware versions monthly, but poor people burdened with long hours of tiring physical work will not want to make a "hobby" of tweaking a laptop. While USAers might prize customization, another culture might want to keep these "personal" units as harmonized as possible, so that cross-peer instruction proves easier. "Shudder": they may value common content over the potential to experiment! But perhaps a village "server" which archives multiple flash "images" could embrace experimentation and homogeneity at the same time. Maybe you'd want something like this anyway, as important software flaws emerge late, making a patching infrastructure desirable. (The "server" needn't be a real computer - just a collection of one or more USB flash drives.) -docdtv

Due to security updates, we have to keep systems in the field up to date for critical problems, so the idea of never touching a system isn't reasonable for a network-enabled system. - jg

So in looking to keep the OLPC laptops working, one should perhaps try to think like a carpenter planning to sail with Magellan. There will be no way to "Fedex" in replacement parts overnight. -docdtv

So perhaps a number of slots for smaller flash drives rather than one big one? Yes, wear-balancing would make it likely that they all age at the same rate, but when random failure does occur the modularity would facilitate Frankensteining. -Chas

Can't afford slots; better the flash just not wear out. - jg

What do you think about a hardware-based remote control like Intel's Active Management Technology? Thus you would have a built-in remote access system working as long as there is a network connection, power, and functioning hardware. Furthermore, this would provide a common management basis for different software installations on top. (contact: mahltig@bondyratech.de)

"Mass storage: 512MB SLC NAND flash" (8 April 2006 version of the hardware specification)

Does this business talk translate into: "there is an unpopulated internal 40pin/44pin connector with 2mm spacing allowing to connect another IDE/CompactFlash as slave?" - ff

No, it's just using the interface pins of the chipset that can connect either to flash or to IDE. Using IDE itself as the interface to flash would prevent the use of a decent flash file system like jffs2 or logfs. - arnd

CPU

CPU, especially running Firefox with non-Latin fonts - isn't it too slow? I actually have a desktop Linux machine with similar specs, and whenever I load multilingual pages (even just a Wikipedia page with the names of languages listed on the side in Arabic and Hindi and Korean - rendering a Korean Wikipedia page in Firefox takes time!), the font rendering seems really slow... I'm running a K6-3 @ 400 MHz.

dunno: depends on many factors, and Firefox is getting lots of work on the memory front. Also, your font cache in your X server might be too small; that can kill performance (you need a bigger cache for eastern languages) - jg

The CPU is constrained by "5W max heat dissipation" requirement. AMD Geode System already consumes almost 2 W under load, leaving only 3 W for display, audio, storage and wireless. At Wikipedia it was suggested that lower power Alchemy be used which would allow for higher clock rates, but apparently x86 was chosen for compatibility reasons.

Depends on exactly which Geode you are talking about, at what clock rate. It turns out that none of the other low-power chips like that have an FPU; it isn't x86 compatibility per se. This is a killer when porting applications to whatever architecture. If an Alchemy with an FPU existed, it would have been sincerely tempting. - jg

This AMD document lists Alchemy with "MIPS32" FPU. Is this a real FPU or some kind of emulation?

Apparently it's emulated, which is a bit of a shame since it does seem like a better candidate processor than the Geode. According to the AMD data book for the Alchemy 1100 [1], page 39:

"The Au1 [aka Alchemy 1100] core does not implement hardware floating-point. As a result, all floating-point instructions generate the Reserved Instruction (RI) exception, which permits floating-point operations to be emulated in software."

How about trying to build a CPU that would suit your needs with help from the Opencores people? Their OpenRisc 1000 core seems to be quite advanced already. Since they already have a lot of open source building blocks available, you would only need someone to put them together. The only missing part I see is that there is no graphics core (just display logic cores). Might be an idea to investigate for future generations of products.

Given the volumes involved in this project and that a custom chip will be included anyway (to control the special LCD) this would seem like a very good idea. But there don't seem to be any hardware people outside of Quanta in the project so the preference for well known chips is understandable. I like the OpenRisc and another strong open source CPU is the Sparc compatible Leon3. There are several floating point options and since it runs at 400MHz in a 130nm ASIC putting in two would give you plenty of computing power at a very low cost. On the graphics side there is the Open Graphics Project.--Jecel 16:32, 19 May 2006 (EDT)

What we do in generation 2 of the OLPC system is an interesting question. By then there will be many possibilities other than the Geode. AMD will have to earn our business. Time will tell; we'll burn that bridge after we cross it. - jg

Have you considered the Texas Instruments OMAP3430 ARM Cortex-A8 based processors? While they do lack x86 compatibility, they possess a significant number of features that make them appealing in low-power deployments. (lkcl: people who have used these processors in mobile phones are now lamenting the fact. The general consensus is: instead of wasting time with low-clock-rate "special dedicated blocks", just get a faster, more general-purpose CPU.)

For anything other than a mass-market chip to gain traction, somebody needs to design the core, build a system around it, port Linux and show it running the Sugar collection. In fact, the same thing could be said about off-the-shelf CPUs like PXA720 and other ARM cores. This is actually a lot easier than it sounds because undergraduate EE students regularly do design cores, build systems around them and so on. It would make an interesting project for a university that wanted to show they can do MIT one better.

OpenCores.org - there are a number of processors available, there - including the OpenRISC1200 - which has had the entire Gnu toolchain _and_ the linux kernel _and_ has a bochs (or maybe qemu) emulator for it _and_ a demonstration linux distribution running on it. you can therefore pick-and-mix the components that you require: they have an IP block for a full 32-bit MAC. the power consumption at 250mhz in .18micron is an estimated < 1 Watt. so assuming that the chip has no critical paths that make it difficult to reach 500mhz at 90nm - yep: you're definitely in the right power-category. lkcl03jun2006.

Also, i've worked with someone to design an asynchronous parallel-processor core concept, based around a real-world asynchronous processor from 1992-or-so, with only 6000 gates that ran at 200mhz, when pentiums were struggling to reach 90mhz. you want to talk "low power" consumption?? asynchronous processors don't _need_ "power management" - no processing equals no power! in 1 million gates, it would be easy to fit 16 of these mini-async CPUs onto a chip - all running at 1 Ghz @ 90nm - and still have the peak power consumption be under 10 watt (and with 16Ghz of processing power available, who needs a graphics chip - and also, if you don't _like_ 10watt peak power consumption STOP doing so much processing!!). at that power consumption, plastic packaging would be feasible, bringing the CPU's volume-production cost down into the $10-$15 range. ceramic packaging whacks the price up of any processor into the $25-$35 range. lkcl03jun2006.

Is it possible to loosen the 5W requirement? This might run counter to the suggestions made already, but the Geode GX doesn't support SSE, and since this type of deployment means that we know what type of hardware is available, we could write optimizations specific to the CPU. However, the SIMD extensions available to the GX require special care in not clobbering the FPU and switching modes, and also have smaller registers compared to SSE. I guess this case can be made for other processors with similar extensions. Calyth 02:27, 5 September 2006 (EDT)

Heat dissipation

5W heat dissipation requirement was recently dropped and changed to 10W. So there might have been the possibility of using a faster CPU after all. 130.149.23.44 12:33, 9 March 2006 (EST)

No, the spec was changed to reflect the reality of the worst-case situation; we couldn't add before ;-). 5 watts for the CPU, then you have the display and backlight at full bore, 2 watts for USB, 2 watts for audio, and losses in the power supply. The low-speed chip is chosen so that low-power idle is as low as possible. Even more remarkable, though, is what we hope our best-case (screen idle) power consumption will be, which is of order half a watt. - jg

Regarding page rendering, using non-XUL-based Browser such as Epiphany, Konqueror or Opera should speed things up.

When we say "firefox" we mean the gecko engine. We hope to use it as it is the most complete and most likely to work of current browser technology. But that doesn't mean we'll be necessarily using conventional Firefox; it may be embedded into a simpler environment. And if the Mozilla project can't get us a browser we can tolerate the footprint of in time, there are the obvious alternatives. - jg

While this would cause problems for using the speaker outputs as a DC source, using a class D audio amplifier (SSM2302, $0.91) would cut the audio power consumption in half (by the above estimation that's 10% of the total!) and cost ~$0.11 more (assuming you're using 2 of the SSM2211 at $0.40 right now). --EldenC 19:30, 6 September 2006 (EDT)
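EldenC's "10% of the total" claim can be reproduced from the figures floating around this thread (~2 W audio budget out of a ~10 W worst-case total, class D roughly halving amplifier loss - all assumed figures, not measured ones):

```python
# Rough savings estimate for swapping class-AB audio for class D.
# All figures are the ones assumed in the discussion above.
total_power_w = 10.0    # worst-case system budget (per jg's breakdown)
audio_power_w = 2.0     # audio share of that budget

saved_w = audio_power_w / 2          # class D ~halves amplifier loss
fraction_of_total = saved_w / total_power_w
print(saved_w, fraction_of_total)    # 1 W saved, i.e. 10% of total
```

On a machine targeting half a watt at idle, a worst-case saving of a full watt is why a sub-dollar part change is worth arguing about.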

BIOS

BIOS, isn't it smart to have OpenBIOS? (ie. FORTH?)

At the time this question was first asked, OpenBIOS was a quite immature implementation of the former IEEE standard.

In September, Sun put its very mature implementation of Open Firmware out under a BSD license, and the same is happening with the equally mature Firmworks version (these have very close genealogical relationships). So we do plan to replace Linux as bootloader with OFW soon. This was not an option we had until very recently.

RAM

I'm worried that 128M of DRAM is going to hobble the machine. Can you get twice as much DRAM if you're willing to take broken DRAMs with parts that don't work, and use the VM hardware to map around the bad spots in them? It's a simple hack and if it cuts the cost of the DRAM chips by 50% then you can put in twice as many. Or are all the DRAMs with failing bits now being used in telephone answering machines where a dropout doesn't matter? -- gnu

Turns out that trying to use bad chips isn't worthwhile, so the experts tell me, and putting in more chips means more power. It is possible to solder in a larger memory chip, however, at higher cost. - jg

Sound

I'm worried that cost considerations might rule out any sound generation and speakers. However, I think that to teach illiterate children to read and write, I will need sound, to demonstrate for example syllables and their sounds.

At least some provision must be made for a USB device able to handle and produce sound. Such a device might also be useful for blind people, and people that can't read. - dagoflores

Speakers, microphone, line in, line out are all included. And the audio in can be used for DC sensors! - jg

I have been trying to find out more about the "sensor input" mode mentioned several places. I can't seem to find anything special in the AD1888 datasheet, however. All I see is line in, with +12dB to -34.5dB adjustment. That's the same as other codecs. Is the "sensor input" mode unique to the AD1888? If not, isn't the AD1888 kind of overkill for a laptop (6 channels, etc)?

There is some additional logic via the embedded controller for the sensor input that bypasses the AC coupling. The AD1888 was chosen relative to some others as it has similar power consumption (and you turn off channels not being used), and can be completely powered down (many codecs consume some power even if not in use). And the price was right. - jg

I really hope that there will be sound. I'm currently looking into the 3dnow extensions of the geode processor, planning for a (not too) small synthesizer to play around with. Mx44 06:45, 17 August 2006 (EDT)

Remaining issues around sound

Too much sound: in class, at night, with others present. Use earbuds or earphones?

Not enough sound: in noisy environments, for presentation to many. Use speakers or earphones? Nitpicker 17:27, 16 October 2006 (EDT)

Power Requirement

I see that you have settled on a 14-volt power specification. I think that is a mistake. Having lived in West Africa for many years now, I see that people are very comfortable with 12-volt power systems. In almost every remote village one can find an enterprising person who has fitted out a small TV to run on a car battery. Even though there are no solar panels or other power systems in their village, they will use that battery to run their "cinema" and then strap that car battery on the back of a bicycle and pedal 20 km or more to a town that has some power and a battery charger.

Making the power system 14 volts then mismatches your device with a well-understood and locally vibrant technology - that is, unless your specification is so "loose" that the device will run on 12 volts. - eu

Please specify the 14-volt power with the allowable voltage tolerance. The end-of-discharge voltage of a 6-cell lead-acid battery (which is probably the most widespread) is about 11.8 V (say 11.6 V including some resistive voltage drop before it reaches the device), so a specification of 14 V (-18%) would be fine for the lower limit. As written above (eu), 12 V is the sweet spot of available voltages. It is very important not to miss it; rather, consider going for 7 NiMH cells instead of 8 NiMH cells! - ff
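ff's proposed -18% tolerance can be sanity-checked against the numbers in the same comment (6-cell end-of-discharge voltage of ~11.8 V and ~0.2 V of resistive drop are ff's figures; the rest is arithmetic):

```python
# Does a 14 V (-18%) lower limit cover a discharged car battery?
nominal_v = 14.0
tolerance = 0.18
lower_limit_v = nominal_v * (1 - tolerance)
print(round(lower_limit_v, 2))   # 11.48 V

battery_end_v = 11.8 - 0.2       # usable voltage at the input jack
assert lower_limit_v <= battery_end_v   # 11.48 V <= 11.6 V: fits
```

So a -18% tolerance just barely admits a nearly-flat 12 V lead-acid battery, which is the point of picking that particular figure.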

Specs have changed since this comment was noted. Actually, we're using 5 NiMH cells; and the machine will take almost anything above that voltage, to 24 volts, + or - for charging - jg

Maximum power: 500 mA (total)

This information given in the hardware specification is most likely incorrect. A current (amperes) is not a power (watts), and a maximum current of 500 mA would lead to a very long time to reach full charge if the laptop is on. Probably the stated 500 mA refers to the maximum output current of the USB ports?
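The "very long time to reach full charge" worry is easy to put numbers on. The pack capacity (~2200 mAh, matching the 2.2 Ah cells mentioned below), the ~80% charge efficiency, and the running laptop's draw are all assumed figures for illustration:

```python
# Why a 500 mA input limit would make for slow charging.
pack_mah = 2200      # assumed NiMH pack capacity
charge_ma = 500      # the disputed current limit
efficiency = 0.8     # assumed NiMH charge efficiency

hours_off = pack_mah / (charge_ma * efficiency)
print(round(hours_off, 1))   # ~5.5 hours with the laptop switched off

# If the running laptop itself draws, say, 300 mA from the same
# input, only 200 mA is left for the battery:
laptop_ma = 300
hours_on = pack_mah / ((charge_ma - laptop_ma) * efficiency)
print(round(hours_on, 1))    # well over half a day while in use
```

Over five hours for an off-state charge, and far longer while in use, supports the suspicion that 500 mA describes the USB ports rather than the charger input.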

Built in Crank Handle

I suggest that you revisit the idea of a built-in crank handle. While the power output of a built-in crank is pretty low and the stresses caused on the body of the laptop can be extremely high, I have spent time where power is difficult to come by. The built-in crank gives users the ability to generate power when no other power is possible, even if the crank-time/use-time ratio is as poor as 1/2.

There will be a human-power option, but we've moved it off the laptop itself. It will be part of the power adaptor. Walter 14:17, 30 July 2006 (EDT)

I would also suggest that you make it possible to connect bare wires to the laptop for power, with connectors similar to, but much more rugged than, those on speakers. This gives people the ability to connect to a variety of power sources without cannibalizing the standard power cable. They can use almost anything. You would need to include some kind of reset for when the power fluctuates outside of acceptable ranges.

We've considered something along the lines of banana plugs and may revisit it in Gen. 2. Walter 14:17, 30 July 2006 (EDT)

Power in third world areas is usually very dirty when available. In the industrialized world we have access to reasonably clean power and still have dirty power issues with computers and computerized machines. I think anyone who believes that accessories and power will be available is naive about third world nations. The nicer places that many people go visit may have some intermittent basic services and this gives people a false understanding of the availability of services in third world nations. As another person pointed out, people will bicycle to a place with power rather than fabricate a charging system for a 12V battery out of a bicycle.

We are trying to build as much flexibility into the human-power system as we can so that it can support and foster local innovation. We've also built in extra robustness into the entire power system with the expectation of "dirty" power. Walter 14:17, 30 July 2006 (EDT)

You may want to consider including instructions on how to fabricate things in the flash memory of these laptops. Old style Mother Earth News and Peace Corp style fabrication and instructions.

Excellent idea!! Walter 14:17, 30 July 2006 (EDT)

Awesome project.

Crank as an Input Device, and Vice-Versa

I request that it be possible to use the crank as an input device. I realise that with the generator moved off the computer this becomes more difficult, but even a simple way for software to read the amount of voltage currently being generated (sampled at intervals of seconds, not hours) would be useful.

Let me give you an extreme example of where I'm going... A computer powered by a mouse, or a computer powered by a joystick, and also with other more interesting user interfaces. In fast action games this might produce a fair amount of power.

But what I am suggesting now is a much more limited "turn the crank to see next page of ebook" kind of design, where cranking is built in to the application usage model rather than being seen as an added burden. It would also be integrated into gaming, for example foot-pedaling the generator might move your character forwards in a game. This is partly for the purposes of training kids in cranking, and also partly for making the system more immersive and for exploring novel user interface techniques. It also adds a level of physical exercise and interaction into computers that I think is missing, and which might be expected from game players in other countries.

The battery will give us a lot more/better information than most laptop batteries. People have already been thinking of doing games with the information. Also, we have a string pull generator device that should be much more effective than most cranks. Jim Gettys

14V DC or 10V DC? Why up to 25V?

I'm a bit confused here. 8 AA NiMH cells, in series, discharge at 10V DC and would be charged with something like 11.6V. So why 14V DC?

Now the battery pack is a 6-cell. This will need a bare minimum of 12 V to charge decently, and 12.5 V would be better. 10 V will not be able to charge the battery.

Is there really a need to accept 25V input? This will complicate the charger as it will need to be a switching regulator rather than a linear regulator or you will have a lot of extra heat.

We're ending up with 5 cells, after all is said and done. It turns out it is always easier to down convert voltage than handle both up and down. And we can go to about 25 volts before incurring any additional cost, so we're doing so. The polarity is also protected, so that you can't damage the machine by plugging it into the voltage source backwards.

What will happen to the unit if 110V or 240V AC power is plugged or shorted directly into its power input? There are many levels on which this could be dealt with, e.g.: A. Machine charges using some of the available energy. B. Machine doesn't charge, but doesn't break. C. Machine breaks but doesn't let out toxic smoke. D. Machine breaks, catches fire, and spews melted plastic and rubber on foolish child. E. When out-of-spec power is supplied, machine screams (audio) for help before breaking ten or twenty seconds later.

While "testing to destruction" those 500 units, you should also try plugging AC power into each of the other ports (USB & audio). Pour kool-aid into the "ears" while AC power is applied to the audio port. Wash the laptop with harsh soap and a scrub brush every week. Swim across a lake while toting a laptop on a strap (while it's got an ebook open and is playing music). Move the laptop close to a fire (close enough to not need a backlight to read it) and see what melts first. Hmm, I wonder if you could generate power from heat, e.g. an attachment that a kid could toss on the woodstove or in a fire that plugs into the power port. Etc. The imaginations of kids are unlimited.

Question: why 25 volts? If 'by accident' connected to a truck (nominal voltage 24 volts), the OLPC will be slightly overloaded (up to approx. 28 volts). Suggestion: a maximum of 30 volts would be fine. What about a bridge rectifier? This would allow simple chargers (mains transformer, manually/'pedally' operated generator, etc.) and would eliminate any reverse-polarity problems.

The testing is very extensive. I liked particularly when Quanta explained the conventional keyboard test, which was of order "take a cup of coffee, with cream and sugar, and pour it between the g and h keys..."

So above and beyond the "usual" testing of this order that they do for "normal" laptops, we added quite a bit of additional testing: wider temperature ranges, higher falls, and so on.

And yes, it has a bridge rectifier; we were talking about nominal input voltages; people understand that truck batteries, nominally 24 volts, actually provide more, and generators are very spikey indeed.

Battery Type/Voltage

The battery pack should accept AA NiMH cells that can be replaced by the teacher. These can also be purchased at a reasonable price, as opposed to a "battery pack". 10 cells equals 12 volts; as said elsewhere, this is a standard that should be adhered to. Allowing a 12 V input would not charge the battery properly, but this could be connected to a larger (sealed lead-acid) battery. With 2.2 Ah cells and a 12-watt consumption, what is the actual operating life of the battery? If teachers are reading this, this is important to them.
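The operating-life question asked above can be answered from the comment's own figures (10 NiMH cells at a nominal 1.2 V each, 2.2 Ah capacity, 12 W draw - note jg elsewhere describes a 5-cell pack, so treat this as the commenter's hypothetical):

```python
# Rough battery runtime from the figures in the comment above.
cells = 10          # the comment's proposed 10-cell AA pack
cell_v = 1.2        # nominal NiMH cell voltage
capacity_ah = 2.2   # per-cell capacity (cells in series share it)
draw_w = 12.0       # assumed worst-case consumption

pack_wh = cells * cell_v * capacity_ah   # energy stored in the pack
hours = pack_wh / draw_w
print(round(pack_wh, 1), round(hours, 1))   # 26.4 Wh -> 2.2 hours
```

At worst-case draw that is only a couple of hours; the sub-watt idle figure jg quotes elsewhere is what would stretch it to a school day.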

This is a link to a page that talks about an electricity generator that could improve your laptop project http://www.lacapital.com.ar/2006/03/08/general/noticia_275466.shtml

I hope it helps!!!

We had the same naive view when we started. The problem is, with multiple cells, the connection between cells is not reliable. And computers, rather than flashlights, must have absolutely reliable contacts in their batteries. We do hope the battery packs won't be very hard to service in the field, however. On top of this, UL won't certify anything in the separate cell vein, even if it worked. And we want to be able to swap packs quickly, not with a pile of batteries which can be inserted in the wrong direction. It also looks like 2000 cycles is possible on the cells, by care in the charging circuitry. - jg

Your design already supports AA NiMH cells without any change to the design. For years people have been using external battery packs to charge (or extend use time) of PDAs and digital cameras. The OLPC will be no different, especially if someone can rig up a battery pack out of overripe bananas and seawater or something similar. Same goes for charging. Somebody is going to refit electric motors to make DC charging units that you can hook up to your water-buffalo. Your design allows a wide range of power inputs to be safely applied and that gives it maximum flexibility.



I'm not so sure I agree with the FUD on the page about the batteries. ("High charging efficiency", "no environmental concerns", "no safety problems", "removable packs being cheaper") If you're picking NiMH, but not picking a format that can use readily available cells as has been mentioned, I only see one reason to use NiMH over LiIon, and that reason isn't even mentioned at all.

Charging LiIon is easier than charging NiMH. In either case you can't just pump energy into either container at will -- you'll have to throttle them both. But the end-of-charge detection for one chemistry is much easier than the other, as far as I can tell.

And that's not the worst of the problems. If you're going to use typical NiMH cells, you only get 1.2V per cell. You'll have to gang them up in series to get the voltage you want. The worst thing about NiMH packs is the fact that a single cell in the pack can start to get weak and really hurt the pack due to charge/discharge differences. Over the course of charge/discharge cycles, this problem is exacerbated, leading to a self-toasting pack. I don't see that these packs are user-serviceable, so in the end, how can this be better for the environment?

This is assuming you don't put taps in between each of the cells (4 for a 5 cell pack) to make sure imbalances don't happen and don't propagate. How can this be easier or cheaper than, say, a 2 cell lithium ion pack which only needs one tap to track the cells, and has an easier end-of-charge detection mechanism for each cell?

As far as safety is concerned, these are energy storage devices, and they can both blow up. Some people might refer to the recent hubbub with Apple/Dell battery packs, but I think those are somewhat special cases (more social than technical). Bear in mind that lithium ion has been happily powering our camcorders and other electronic devices for a long time without blowing up. Where was the rash of camcorder fires? What about cell phones melting down? On top of that, lithium technology has gotten better, and is still getting better.

And while we are on that subject... why exactly are you designing a new pack to begin with? Why can't you use an already existing format? OK, so you supposedly have reasons to not use single-cell AA's. Why can't you adopt an already popular battery format? Say... camcorder battery packs? It's clear that manufacturers already know how to make those quite cheaply, and they have guard circuitry built into them already. The chargers are already widespread as well. This one I really don't get.

Internal USB Connector

It would be nice if space for an internal USB connector could be found (20x50x8 mm³ and a hole for a mounting screw). This would greatly reduce the risk of mechanical damage to, or loss of, a USB memory stick. (Rationale: one of the more frequent criticisms of the OLPC is its relatively small mass storage; a relatively high percentage of users might want to upgrade over the laptop's lifecycle. Other things, like an internal USB radio, could also make sense.)

FWIW: we're taking great care over the mechanical ruggedness of the connectors by moulding them into the case. I suspect the USB device will die before the laptop does. There exist some very small USB memory sticks that won't stick out much. That being said: USB is power hungry (not well designed for low power), so I don't think you'll want to leave USB devices plugged in while on battery power if you can avoid it - jg

  • Maybe migrating to a PCIe board, with all of those nifty features available, could tackle the USB power-hunger issue with a bit of driver/software tweaking (e.g. the OLPC turning off any USB devices at 10% battery or so). There was also a PCIe spec about a potential replacement for USB, but I cannot find it. I'm not saying USB should be replaced, but if an OLPC could integrate both into the same port, there might be some potential power savings somewhere. Very small flash memory sticks indeed. (It is a slightly out-of-date link, since they have 4 GB ones now, but still!) <{Phil.andy.graves 02:07, 17 August 2006 (EDT)}
You would need to add a separate PCI->PCIe bridge and controller, because the CS5536 can't grok PCIe natively, so we wouldn't be able to take advantage of those nifty features.--JordanCrouse (Talk to me!) 12:19, 17 August 2006 (EDT)
The bios/drivers control PCIe, not the cpu (however that would be extremely nice to have as an option).
I recommend that you read [2] - you need at the very least something to provide the Root Complex and then some way for that to talk to the processor. You can't simulate PCIe functionality with just software.--JordanCrouse (Talk to me!) 10:55, 18 August 2006 (EDT)
There will be an SD-card slot, which should address the concerns regarding upgrading the mass storage. --65.78.14.167 07:54, 18 August 2006 (EDT)

MMC/SD Card Connector

The 2B1 machine announcement says there's an SD card slot on the first production run. (Doesn't say whether it's internal or external, I presume external, e.g. under one of the "Ears".) It's a good way to be able to expand the flash capacity cheaply and with low power.

One problem with the SD card spec is that it includes DRM. When bulk-purchasing cards for this slot, it would be best to get MMC (non-Secure but compatible) cards instead. The flash vendors all want customers to adopt DRM -- I'm not sure why. So they've stuck it in much of the available flash. E.g. I haven't seen 2GB MMC cards, only 2GB SD cards. But for a large purchase it should be possible to buy without DRM -- and get it cheaper too. --gnu

While SD allows for DRM, it doesn't require it if you don't wish to use it. We should be able to use 1.10 compliant SD cards quite happily with Linux. - JordanCrouse (Talk to me!) 13:07, 28 August 2006 (EDT)

An internal MMC or SD card connector would be nice if the specifications for I/O extensions to the MMC or SD protocols (SDIO or MMCIO) are available. (This is an alternative to an internal USB connector; the rationale is given there. It is probably more power-efficient, but fewer gadgets are available today.)

Not this time around; we have nothing to plug one of these into, even if we had the cost of the connector lying around free -jg.

You might consider putting in a 0.1" 7-pin header with appropriate pinout - you can connect (not ruggedly, but decent if you put it in once and leave it) an SD/MMC card using a right-angle 0.1" header. It only takes 3 I/O if you do it right, bit-banged. (4 I/O if you want to be able to toggle the chip select, which isn't necessary if the I/O aren't shared with anything else. On the other hand, use 4 and 3 of them are reusable for anything that doesn't need simultaneous access.) I believe two or three pullup resistors (10k or so) would also be needed. This might be a nice "hackable" feature if you can spare the space somewhere for that 0.1" header. (Or put the 0.1" header pads in for GPIO hackability, have this be just one possibility.) --DTVZ
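For what it's worth, the bit-banged transfer that the header idea relies on is straightforward. Below is a hedged Python sketch of an SPI mode-0 byte transfer over three GPIO lines; the `set_clk`/`set_mosi`/`get_miso` callbacks are hypothetical stand-ins for whatever pin-access API the platform provides, and the loopback harness simply wires MOSI to MISO to sanity-check the shifting logic:

```python
# A minimal sketch of bit-banged SPI (mode 0), the transfer style the
# 3-wire SD/MMC hookup above relies on. The GPIO callbacks are
# hypothetical stand-ins for a real pin-access API.

class BitBangSPI:
    def __init__(self, set_clk, set_mosi, get_miso):
        self.set_clk = set_clk
        self.set_mosi = set_mosi
        self.get_miso = get_miso

    def transfer_byte(self, out):
        """Clock out one byte MSB-first while sampling MISO. SPI mode 0:
        data changes while CLK is low and is sampled on the rising edge."""
        inp = 0
        for bit in range(7, -1, -1):
            self.set_clk(0)
            self.set_mosi((out >> bit) & 1)     # shift data out, MSB first
            self.set_clk(1)                     # rising edge: slave samples
            inp = (inp << 1) | self.get_miso()  # ...and we sample the slave
        self.set_clk(0)
        return inp

# Loopback harness: wire MOSI straight to MISO to check the logic.
state = {"mosi": 0}
spi = BitBangSPI(lambda v: None,
                 lambda v: state.update(mosi=v),
                 lambda: state["mosi"])
```

A real SD/MMC driver would layer the card's command framing and CRCs on top of this primitive; the point here is only that three I/O lines suffice for the raw transfer.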

They now have duo SD cards available which have a USB connector built in. Should I dare to mention USB connected multicard readers? Offering only a single solution on the OLPC leans towards proprietary-ness.<{Phil.andy.graves 02:29, 17 August 2006 (EDT)}

Instant/Always On - No Boot

Will the first version be instant-on no-boot?

Initial boot will take a while (tens of seconds). We intend to make suspend to RAM and resume extremely fast, so the normal state will be "instant on". In fact, we'll be suspending the machine when you go idle for almost any period of time; but as the screen can stay on, you'll never notice. -jg

Use depinit (http://www.nezumi.plus.com/depinit) - on such a minimalist system, you will probably find that the boot time is cut to about ten seconds once the kernel starts running (the Debian kernel itself, with all the module detection, takes ten seconds on its own), and the stop time to about three to five seconds. My laptop _without_ depinit boots into X in ninety seconds; my laptop _with_ depinit boots in 25. Richard Lightman takes a very dim view of the present initscripts setup: remember, it was created when context-switching was horrendously expensive, so everything was run sequentially. Many startup scripts include things like "while not done yet sleep 1; do something; done" - for example the hotplug "initial trigger" scripts - which is such a waste of time. depinit is a proper parallel dependency and daemon management system; plus, the scripts that Richard has created have a lot less stupidity in them. lkcl03jul2006.

Or Apple's launchd, http://en.wikipedia.org/wiki/Launchd ? Great, great hardware work by the way! -- skierpage 2006-06-09
Or see Serel, http://www.fastboot.org. It analyzes dependencies among startup scripts, then runs as many as possible simultaneously.

Test Machines

Will there be a possibility for developers to buy or borrow some of these machines to do some software testing? From what date? At what price? Who would be eligible? Whom should one contact?

Yes, there is a Developers Program starting in June. The bare boards will be supplied at no charge (as they have not yet been through regulatory approval, they cannot be sold) - jg

In the meantime, perhaps the OLPC Python Environment or Sugar will be of use to you.

Wireless Mesh Networking

WiFi-range

Hi, I am adding some more information about the $100 Laptop to the German Wikipedia (link: http://de.wikipedia.org/wiki/100-Dollar-Laptop). I would like to know how far apart two laptops can be from each other

  1. in a building
  2. under perfect conditions outside a building.

If someone gives me a reply I will insert it in the German Wikipedia. User: Betbuster 134.2.57.213 08:50, 19 December 2006 (EST)

Chip selection

Apparently the Marvell 88w8388 WLAN chip (with advanced 802.11 a/b/g RF transceiver) was selected at least partly for its inherently low-power operation. The 88w8388 includes a built-in ARM CPU and a "thick MAC" stack, giving it the ability to relieve the AMD Geode host CPU of TCP/IP stack-handling chores. Consequently, the 88w8388 also has the ability to forward packets (e.g., within the WLAN mesh) even when the host CPU is in the S3/off state, resulting in considerable power savings over conventional WLAN chips (like the Atheros chip considered previously). This is old news.

Newer news, however, is that a later chip, the Marvell 88W8686, maintains these capabilities with even less power consumption (under 400 mW) and smaller footprint (50 square mm), at least partly because it is implemented at 90nm instead of 0.15 microns. Marvell's press release about this came out on 7-20-2005 (http://www.marvell.com/press/pressNewsDisplay.do?releaseID=527).

Since Marvell is a member of OLPC, presumably the OLPC team is aware of the newer, lower-powered chip. Nonetheless, this OLPC website still seems to reflect only the 88w8388 in its discussion. Because of the project's emphasis on low-power operation and cost reduction, it would seem appropriate to at least have some discussion of the two chips' respective merits, relative to the project. Although it's likely that the 88W8686 arrived too late to be incorporated into the first generation, perhaps it would be appropriate for the second, unless some other factor(s) like cost or availability rule it out. It would be interesting to get authoritative input on this aspect.

It would also be good for an OLPC team member to weigh in on whether 802.11a mode will also be supported in addition to 802.11b/g, since this capability is inherent in both chips mentioned here.

Sausage making requires the right volume of components available at the right time. - jg

Multicasting issues

Mesh networking is cool because it creates a spontaneous network, including one that can span vast distances compared to a conventional WiFi hotspot established with a single WAP. But I am curious how one manages things when nodes get very dense, e.g. in a classroom where every node might be directly connectable to every other one. Surely there is no harm in enjoying the reliability of highly redundant alternative paths given unicast traffic. But does anything unfortunate happen when you want to support multicast traffic?

I assert multicast is important for a classroom setting where we want each child's laptop screen to replace the pre-cybernetic method of sharing a common image - the blackboard/whiteboard. (Note this modern method obviates the delay of walking to and from the board!)

My knowledge of WiFi multicast is limited to the tutorial: http://www.wi-fiplanet.com/tutorials/article.php/3433451 in which one begins a multicast session by sending unicast dataframes to an access point. Within the mesh networking scheme planned, will any laptop then act as a suitable access point for such a multicasting methodology? Or are there complications I don't understand and which have not been addressed?

The cited tutorial also warns of the hazard of having a single "spoiler" client in power-savings mode when one is multicasting. Is this issue also germane here?

If multicasting will be supported, what means might be available to prevent its abuse by naughty kids? When I was a child many decades ago, my classmates would sometimes contrive to disrupt the classroom decorum by passing packets (using paper for OSI Layer One!). Will securing the teacher's supremacy of the airwaves be a matter for the software to address? Or should only a special "teacher" laptop be able to multicast packets?

Perhaps you might chastise me for not posting this discussion to the Software_specification, but the data link layer is very alien to nearly all programmers!

- Docdtv 01:33, 25 October 2006 (EDT)

Since I wrote the stuff above, we've been advised that OLPC is opening the kimono with regard to the Sugar-XO User Interface (UI). Reviewing the UI docs starting to appear in this Wiki makes it manifest that the mesh network is deeply integrated - not a mere afterthought for apps to exploit.

We all know the phrase "It's not a laptop project; it's an education project." But do I now hear someone in the back of the room saying "It's not an education project; it's a groupware project." ? <G>

Anyway, the "blackboard replacement" question about which I pondered aloud above is addressed by the (multiple) "bulletin boards" - one for each of the various levels of pseudo-geographical zoom - as well as for each of the separate live "activities".

Since these features are a matter of software, I'll write no more about them here. But my curiosity about the multicasting efficiency issue remains. It's just that now it is obviously all the more vital.

- Docdtv 03:59, 2 December 2006 (EST)

Apropos of this discussion, item 4 in the 2006-12-09 OLPC News looks to the fallout from breaking the symmetry of all the nodes in a mesh by introducing "server" nodes: the mesh protocol is subject to change. Note that the Libya project includes servers.

- Docdtv 01:58, 14 December 2006 (EST)

The server as a node on the mesh has been part of the plan from day one. It is not driven by any special need in Libya. There will hopefully be other devices in the mesh as well, e.g., a projector, perhaps the occasional printer, etc. --Walter 16:02, 14 December 2006 (EST)


Doc: the following might be interesting, as a description of messing with someone's packets - and of preventing said mess - can be as instructive as a ton of dry specs. Go down to man-in-the-middle here, or
Suppressing attacks. After establishing the structure of the mesh, redundant packets are broken up and sent by alternate routes to the destination machine, which checks for integrity and identifies problem nodes. So the mesh is a lot safer than a straight-line hand-off. For a discussion and display of constructivist education you should go to Moodle, which I have used for nearly five years with great success. You can see that it is in thousands of schools world-wide, so the disingenuous "concern" of critics over the poor state of education in other countries is often a trick of misdirection. As in "evolution is only a theory."

Bob calder 20:00, 2 December 2006 (EST)

Wireless mesh-network standards?

Where can I learn about the standards that will be used for the wireless mesh network? As far as I know, there is not yet a widely accepted standard for wide-area mesh networking, but I was hoping I might find some ideas here.

IEEE 802.11s is under development. Unfortunately, the IEEE's process is not transparent to the degree the IETF's is, so you can't just download the standard-under-development. - jg

Should there be provision for a mouse?

I was looking at the design of the case for the laptop and wondered where the mouse would be put when the children were moving the machine from place to place and then wondered whether there was a mouse or a built-in trackball. Further investigation found that there is a trackpad. I had never heard of trackpads so I searched at Yahoo and found the following link.

http://en.wikipedia.org/wiki/Trackpad

Within that article is the following.

Quote

Touchpads commonly operate by sensing the capacitance of a finger, or the capacitance between sensors. Capacitive sensors are laid out along the horizontal and vertical axis of the touchpad. The location of the finger is determined from the pattern of capacitance from these sensors. This is why they will not sense the tip of a pencil or even a finger in a glove. Moist and/or sweaty fingers can be problematic for those touchpads that rely on measuring the capacitance between the sensors.

End quote

I am wondering - possibly naively, but even if naively maybe it is better asked - whether it is certain that a trackpad which is to work in various climates with children, and which relies on fingers being neither moist nor sweaty, is going to be both reliable and suitable for 100% of the children at all times?

Is there provision that a conventional PC mouse could be plugged into the laptop and the trackpad disabled if the accessibility needs of any particular student need to be met?

Certainly, I have not used a trackpad system, yet feel that maybe asking these questions anyway is what I should do.

William Overington

8 April 2006

With USB ports, a mouse can be added. Mice, in quantity, are very very inexpensive. Probably less expensive than trackpads. Maybe the trackpad should be dropped and a mouse included.

Note that we have recently identified (at least one) vendor with a novel dual-mode pointing device that can function either like a conventional touch pad, or with a pencil/stylus as an absolute pointing device. The pointing device will be as wide as the screen. This is novel to our system at this time. A decision on which vendor will be made around June 10. - jg

Keyboard?

What will the keyboard be like? AZERTY, as in France, or QWERTY, as in the UK?

Keyboards are always set up for local languages; what keyboard engravings you get will depend on your geography. -jg

The specification says the following about the keyboard.

  • Keyboard: 80 keys, 1.2mm stroke; sealed rubber-membrane key-switch assembly

Does "sealed rubber-membrane key-switch assembly" mean that the "keyboard engravings" mentioned above are all done, at some stage in the manufacture, onto a single piece of material?

Yes - jg

If so, could you possibly publish the graphic art used for one of them please?

Interesting request - seems reasonable to me, once we have such artwork. -jg

(This has now been done at Keyboard Artwork Library. --gnu)

This would be useful as it could enable people wanting minority script support to make the artwork needed. Could you possibly say how straightforward or not straightforward it is to make a keyboard for a minority script and what the minimum number needed would be for such special manufacture to be viable please?

I suspect (but don't know) it is a silkscreen sort of process. Keyboards for other scripts are SOP (standard operating procedure) for computer manufacturers.

Are the "keyboard engravings" actually engravings or are they simply printings?

Printing of some sort, from the samples I've seen. - jg

Would a field programmable version based on the idea for a piece of electronic paper as mentioned in the Hardware Ideas - The Keyboard page be useful to the project?


For many programs, the script on the keys has no relation to what the key is used for. The key is just 'a button', but assigned a different function than the button(s) next to it. One such example could be TamTam, where the keys could be used to play different notes. Or a game where you pick up different "stuff" depending on the situation.

Would it be possible to have the keys coloured in a chromatic accordion pattern? Two slightly different shades of (stylish colour of choice) would do the trick. This would make it easier to relate to a screenshot of shortcuts based on a keyboard with a different language layout. One such application could be an on-screen cheat-sheet of BASIC mnemonics, a bit like the Sinclair Spectrum, allowing for fast typing of programs. It would possibly also be easier to learn to touch-type without constantly staring at the script on the keys. Oh yes, of course, it would also make a lot of sense should you happen to use it as a musical instrument :-D Mx44 07:43, 17 August 2006 (EDT)

Stylus, and its storage

The laptop includes a large area on which the child can write "with a stylus". Will a stylus be provided with the OLPC?

If so, where will the stylus be stored? All such palmtops, for example, include a slot for keeping the stylus in, to avoid its getting separated from the machine, or lost.

If not, that writing pad area is going to get a lot of ink and/or graphite on it, as the kids use whatever comes to hand to write or draw on it.

Alternative use of audio device

Is it also possible to generate voltages with the audio device? In electronics it is important to have different voltages: AC, rectangular (square) waves, Dirac pulses, ...

If I understand your question, it is yes, the audio device can be used to directly measure voltage, allowing many cheap sensors to be directly interfaced to the machine without additional hardware. This will be very useful for kids learning basic science and/or music, to name a few subjects where such devices can be very useful. - jg

The OP probably meant: is there an option to switch the audio output (the 3.5mm stereo jack) into DC mode? Together with the DC input mode, this would allow the OLPC to be used in control loops (the OLPC is attractive there because of its WLAN, display, battery, and non-volatile memory). DC coupling of the output should be an option, as permanent DC coupling is probably not compatible with all audio devices. This would eventually require only about $0.04 for one or two FET transistors.

To do electronics experiments you need a device to measure voltage and a power supply. This power should be delivered by the laptop. The laptop can be used as a (low-frequency) oscilloscope; in this case the power supply must deliver AC power in addition to DC.
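As a rough illustration of the voltage-measurement side discussed above (assuming a DC-coupled input and a calibration constant obtained against a known reference - both assumptions, not documented OLPC behavior), averaging many samples gives a crude DC voltmeter:

```python
# Hedged sketch: using an ADC-style audio input as a crude DC voltmeter.
# Average many samples to reject noise, then apply a calibration factor.
# volts_per_count is a hypothetical value obtained by first sampling a
# known reference voltage.

def measure_dc(samples, volts_per_count):
    """Estimate a DC voltage from raw (DC-coupled) ADC sample counts."""
    return (sum(samples) / len(samples)) * volts_per_count

# e.g. a 1.5 V source read as counts hovering around 1000,
# with a calibration of 1.5 mV per count:
readings = [998, 1001, 1000, 1002, 999]
```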

Testing for Humidity and Dust

In India, the dust and humidity levels are ridiculously high compared to the US/Europe. For a desktop, very fine soot-like dust accumulates into thick layers within weeks in the fan blades and in the heat sink over the CPU (this is within a reasonably well protected room). Mostly you know when to clean it up when your system keeps crashing way too often. The OLPC laptop has no fan, and its CPU presumably won't heat up enough to cause the same problem. Still, shouldn't it be tested more rigorously (that is, not just to US standards) against dust at least? Its effects are as bad as heating, no?

The dust in some countries is not the dust we know. Especially in deforested areas there is lots of dust, and it goes everywhere. Some roads are not passable because of the dust on the road. It's maybe easier to make something waterproof than dustproof.

There is no fan. There are no moving parts. We're taking care to encase external connectors, both to help seal the machine and for strength. And yes, we're testing to more rigorous than usual standards; 500 machines will be tested to destruction this fall, and the standards are much higher than for a conventional laptop (go look at the specs; some of the environmental stuff is there).

Has conformal coating the board been considered? It is a great way to deal with condensing humidity and dust. This is a common practice for ruggedized equipment, but I'm not too sure of the cost. conformal coating

Display issues

LCD Breaking

One bad poke to the LCD and it's gone. Maybe the Blu-ray scratch-resistant layer or something similar could be used, at least for scratches? Shock dampeners aren't going to do anything for bored kids, their hastily drawn circle target, and a handful of darts. <{Phil.andy.graves 02:57, 17 August 2006 (EDT)}

  • I am more concerned about the pivot. Careful people will not poke, but they may use the pivot, since it is "meant" to be used. Laptops even with two hinges often get floppy or broken/cracked at the hinges. A pivot is going to break very easily, and I don't see what advantage it has. Who wants to turn the screen around 180 degrees to face away from the keyboard? The whole design would be a lot simpler (so cheaper) and more durable (cheaper in the long run) if the pivot were replaced with some good, solid, sturdy hinges, with properly designed mountings to the case. -- Nerf 01:37, 2 November 2006 (EST)

Graphics

Is this just a 'bare frame buffer' - or is there some kind of graphics acceleration available? Kids' software needs cool graphics - and it would be nice to know what sort of level to shoot for. SteveBaker

The Geode has built in bit-blit, alpha blending, as well as YUV->RGB and scalers for video. What we don't get this generation is hardware acceleration for 3D graphics. Sorry.... - jg

LED Lighting

I saw mention of adding LED lights to enable usage at night. Weekly News Jun 17 2006 In conjunction with the Ebook mode, utilize the Scroll/Enter keys to control the lighting. If the LCD controller has unused I/O, minimal additional hardware would be required. Also, allow simple/documented direct access for software control of brightness/pulsation. If the LEDs are positioned correctly, in Ebook mode the light could be used for home illumination in addition to the stated use. --chuck todd--69.92.206.49 01:58, 27 June 2006 (EDT) Why not? -- Ownut

They are not yet available--JordanCrouse (Talk to me!) 15:39, 8 August 2006 (EDT)
It sounds like they might eventually be available. Do you know if they will be complete and if they will be locked open with a Copyleft license such as the GPL? -- Ownut

Braille

I know virtually nothing about braille terminals, except that I have installed one on a blind boy's computer, and Linux has pretty good support for such "displays". Of course, such an option would need an alternative keyboard and some software development to get the user interface working for blind persons; plus, optionally, speech synthesis and/or recognition software.

"Black and White" quoted everywhere, do you mean Greyscale?

When mentioning higher resolution and longer battery life, I often see mentioned the term "Black and White". Do you actually mean Black and White (which tends to be unreadable for anything other than a terminal), or shades of grey? If you actually mean shades of grey, you should say that. If you actually mean black and white, you should mention that too.

greyscale - jg

Display characterization (as we outsiders understand it)

Define "pixel" as the spatially smallest element of any display whose intensity can be varied independently. (This is unlike the definition in Microsoft's ClearType™ discussion.) The OLPC display then is a 1200x900 array of square-spaced pixels whose intensities are set to 6-bit (64 level) precision, and the total number of pixels is 1,080,000. The display operates in one of two modes.

Monochrome mode The backlight is turned off and the only light the user sees is that reflected from the pixels using ambient lighting. All the pixels appear chromatically "white" - i.e. it is a "monochrome" display. The display is readable either in direct sunlight or simple daylight. Power consumption is an extremely modest 0.2 watts.

Colo(u)r mode With the backlight turned on, power consumption rises to a full watt. Each of the pixels is given one of three primary chromaticities: red, green or blue. (The intensity of each pixel is still 6 bits precise.) The surface is contiguously tiled with a pattern using the following 3x3 periodic repeat element:

R G B
B R G
G B R

Note that the primaries form diagonal lines. This means that rendering imagery on this complex mosaic favors neither the horizontal nor the vertical axis. Latin-related alphabets seem to benefit by using enhanced horizontal resolution; but the OLPC LCD will be used with almost any type of (potentially colo(u)red) human symbol.

It is subtle to compare the information content of such a display to one in which pixels can be set to arbitrary chromaticities (e.g. CRTs). It depends on how clever one is in the rendering process, and even on the nature of the source imagery. A thoughtful extended discussion of these issues can be found in a now-ancient blog here. (The blog's author appreciates the historical assessment of his work offered here and would also bring due attention concerning these matters to the very insightful founder and CTO of Clairvoyante, Candice H. Brown Elliott.)

Crudely, one might say the OLPC in colo(u)r mode is equivalent to a 1,080,000 / 3 = 360,000 pixel CRT-like display, which works out to 693x520 pixels using a 4:3 aspect ratio, i.e. a bit finer than VGA resolution. But again, clever rendering can improve the effective resolution.
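The arithmetic above can be checked in a few lines (Python, merely restating the crude estimate; the 4:3 fit rounds to the nearest pixel):

```python
# Restating the equivalent-resolution estimate: divide the pixel count
# evenly among the three primaries, then fit a 4:3 frame to that area.

def equivalent_4x3(total_pixels, primaries=3):
    full_color = total_pixels // primaries  # 1,080,000 / 3 = 360,000
    # width:height = 4:3, so width = 4k, height = 3k, area = 12k^2
    k = (full_color / 12) ** 0.5
    return round(4 * k), round(3 * k)

w, h = equivalent_4x3(1200 * 900)  # the OLPC's 1200x900 pixel array
```

This reproduces the 693x520 figure quoted above; clever subpixel rendering can of course do better than this naive equal-split accounting.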

The page maintainers currently observe: "photographing a display is remarkably difficult"

Yet the overall screenshot they provide is not terribly useful for critical analysis, save to suggest the capacity of the overall display. They are asked to supplement it with a photo of a small section of the display. By choosing a large enough zoom factor, one of course obviates the resampling issue.

Assuming they capture a raw image with a linearized camera, one then relies on Wiki users to linearize their own displays. Color gamut of course remains problematic, because the three-primary hypothesis is imperfect, even if the primaries are highly saturated like laser light.

Yes, as someone observed, space-division-multiplex coloring limits brightness, but Wiki users can pull tricks like using projection displays at low magnification, or viewing reconstructions in partial darkness, if they want some subjective impressions on displays they own.

But mainly, a zoomed image would also allow objective studies of stuff like pixel cell structure (e.g. the fill-factor), display homogeneity, etc.

Let's have some additional quantitative display specs if possible, please. Is Dr. Jepsen filing one or more patents on her work?

Docdtv 03:21, 2 December 2006 (EST)

Camera

It is remarkable the OLPC guys squeezed a color camera in at the last moment without fear of busting the budget! (Is this "mission creep"?) Anyway, now that users can take photos of local people, plants, etc., the issue of color verisimilitude between capture and display endpoints rears its head. What do photos of the OLPC staff look like on the OLPC display? As I write this, only a saturated-palette graphic is shown. (Aside: With a camera available at ALL times, it is possible to let the OLPC user shoot a viewing-time white reference to tweak how photos are displayed - but I guess maybe the display brightness is so low in color mode there can't be very strong ambient light anyway.)

Docdtv 03:21, 2 December 2006 (EST)

Please add Category

Just copy and paste the Category line [[Category: Developers]]