Talk:XS Server Hardware

From OLPC
Revision as of 14:56, 23 April 2007 by Wad (talk | contribs)

Please keep leaving comments. They are read, and taken seriously. The XS spec will get overhauled in the next week to reflect recent decisions and plans.--Wad 10:56, 23 April 2007 (EDT)

Scalability

Doing some quick math based on Argentina Statistics, at the national level you have 5,151,856 kids in K-12 grades in 27,888 public schools, giving ~185 laptops / school (or ~3 servers / school).
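The back-of-envelope arithmetic above can be checked directly. The figure of 60 students per server is an assumption used only to illustrate the "~3 servers / school" number, not anything from the XS spec:

```python
# Rough sizing from the Argentina statistics quoted above.
# STUDENTS_PER_SERVER = 60 is an assumed design point, not a spec value.
students = 5_151_856
schools = 27_888
STUDENTS_PER_SERVER = 60

laptops_per_school = students / schools
servers_per_school = laptops_per_school / STUDENTS_PER_SERVER

print(f"~{laptops_per_school:.0f} laptops/school")   # ~185
print(f"~{servers_per_school:.1f} servers/school")   # ~3.1
```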

Does the internet connection scale for larger schools? Does a 60-student school have the same access as a 240-student school?--Wad 00:22, 2 February 2007 (EST)

Good or bad? Don't know.

PRO
Multiple servers could improve coverage, redundancy and available storage.
CON
Higher costs per school (depending on alternatives), multiple failure points and/or administration.

We are deploying just one (overpowered) school server per early trial, and will obtain a better idea of the requirements. The final number may be 120, or 50...--Wad 00:22, 19 March 2007 (EDT)

As a note specific to Argentina, the educational system is split into four three-year chunks: EGB I, EGB II, EGB III, and polimodal (only the EGBs being compulsory). Multiple servers could support 'administrative' layers or frontiers between these different 'levels', acting as a kind of 'sub-network' division.

This is just a guess-list. Please add at will. PoV is subjective... so a con may sometimes be thought of as a pro depending on the overall context. For example, redundancy could be considered good if it's simple and straightforward out of the box(es); but if administering multiple servers requires a totally different approach than administering just one, it could be considered a con, given the penalty to growth.

There is no question that a single school server will be targeting a maximum number of students; the only question is "how many?" I can design a server capable of serving 20 students, which will fail miserably if you try to use it stand-alone in a school of 200. I can design a server which can handle 800 students, but it would be overkill for a school of 30.--Wad 00:22, 2 February 2007 (EST)

One possible XSX hardware

I've been running FC6 on an HP dc5750 (and just ordered another). It can be bought in a standard minitower. AMD chips, ATI Radeon XPRESS 1150 graphics, Broadcom gigE. Ten USB slots (2 on internal headers). Has fans, but they are very slow/quiet in normal operation. I can't hear it in my living room unless my head is very close to the fan. I bought it with the Athlon64 X2 Dual Core Processor 3800+ (35W max), which reduces the CPU max power/heat by half, and with the 80PLUS power supply which is more efficient, thus less power/heat. ($40 and $20 options.) Has two SATA disk slots and 4-way mobo SATA controller. No PATA. 2 PCI slots, 1 PCIe 1x, 1 PCIe 16x (you'd need to add an Ethernet card, since this only has one GigE).

PRO: available from stock, or with a few weeks leadtime for custom config. Quiet, low power, no Microsoft tax (buy the "FreeDOS" version).

CON: ATI chip undocumented and poorly supported (works great for 2D graphics, but nothing beyond that). Minor issue when running FC6: the Linux driver for the ATI chip barfs over the optional LightScribe DVD+-RW, for unknown reason, though the FC6 install DVD worked great during installation from the same drive! Solved by adding a $17 Rosewill RC-210 PCI SATA card for the DVD (it also gives you an external SATA port for a faster external hard drive than USB can provide). Linux has no trouble accessing disk drives on the motherboard SATA - just the LightScribe DVD writer. lm-sensors can't see the motherboard sensors; it's probably the *%&$$ ATI chip again. (Maybe your buddies at AMD/ATI can help you with this.)

HP's optional EM718AA memory card reader (plugs into the floppy slot and a motherboard USB header) does NOT support SDHC, so avoid it. I tossed theirs, and added a $30 YE-DATA YD-8V08 memory card reader and floppy drive, which DOES support SDHC. Newegg.com has it (and the RC-210 SATA).

I've noticed that when plugging in USB or SD cards, sometimes Linux doesn't immediately notice and DTRT. If you plug in something else on USB, then it figures it out. Don't know whether this is a hardware problem or an FC6 software problem.

Thanks for the tip. We are currently looking for fanless systems, and are exploring miniITX motherboards. They easily meet the requirements listed for the XSX.--wad

Should spec GigE w/autoX

The server should talk GigE. In 2007 there's no point in deploying 100BASE-T. In any school with more than one server, these boxes are all going to want to talk to each other, accessing each others' disks, etc. Ethernet is not just the uplink to the Internet; it's the disk access bus for the whole school. Why bottleneck NFS, remote backups, DVD/CD reading and burning, etc to save 15c?

Actually, it is more like $6, and the wiring to make use of it is slightly more expensive. Yes, it would be nice to have 1000baseT for the school backbone but it is not worth making a requirement, yet. In small schools with a single server, it is a wasted cost. Even in larger schools, the backbone may be wireless.
The price difference between a brand-new PCI card (complete with packaging and shipped to me) with 1GbE versus 100Mbps is less than $6 in the UK. I would expect the cost of an integrated solution on the board to be around $2 extra --Jabuzzard 03:02, 22 April 2007 (EDT)
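To put rough numbers on the "disk access bus" argument: for an assumed 100 GB server-to-server backup (the size is hypothetical, and real throughput will be lower than raw line rate):

```python
# Time to move an assumed 100 GB backup at raw line rate,
# ignoring protocol overhead (real-world throughput will be lower).
size_bits = 100 * 10**9 * 8  # 100 GB (decimal) in bits

for name, rate_bps in [("100BASE-T", 100e6), ("1000BASE-T", 1e9)]:
    hours = size_bits / rate_bps / 3600
    print(f"{name}: {hours:.1f} hours")   # ~2.2 h vs ~0.2 h
```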

I strongly advise making the Ethernet port(s) on the server do the automatic-crossover thing, so that there's only one kind of Ethernet cable, whether you're using a hub or just plugging two servers together directly with cat5. And put a link light on it, so there's immediate physical-level feedback when it can see the other end of the cable.

Very good suggestions. Automatic crossover is part of 1000baseT, but still might carry a cost on 100baseT. The need for link lights is well known.

I suggest sidelining the powerline ethernet stuff. The last thing you want to have to debug in the field is half-assed networking. (I finally bought a USB ethernet card for my XO - Linksys USB200M - and am much happier.) Major companies have been pushing powerline ethernet for years and they get no traction. Why? Haven't tried it myself but the feeling is that it isn't reliable, doesn't get real spec'd throughput, etc. RJ45/cat5 ethernet is so simple, cheap, and so widely deployed that it's almost impossible to screw it up. Once switches replaced hubs (and there were no 50 ohm terminating resistors from the coax days) it all just went plug-and-play; with the widely deployed spanning-tree algorithm, even loops don't faze it. If somebody really wants to run powerline, there are external adapters from cat5 to powerline.

The concern is wiring costs (which are always much more than just the cost of the wire). Power is likely to already be available at all servers. I have tested powerline networking, and it can work fine, but can also be stymied by some devices such as home computers and small electronic appliances which have input circuits which try to minimize the RF intrusion. Phoneline networking works much better, but phone lines are less likely to be in place than power, and if wiring must be done there it probably costs the same to install Cat5e as Cat1. --wad
The main cost in doing wiring is the labour. There is a massive difference between putting in a single link to connect two servers and doing full structured wiring. The first can be done quite cheaply; the second requires all sorts of additional stuff like patch rooms, panels, racks, containment, ... Powerline networking is a hugely expensive technology whose performance sucks. Given the cheap cost of labour in the target countries, putting in Ethernet links is the sensible route. It also builds up local expertise, which can only be a good thing. --Jabuzzard 03:02, 22 April 2007 (EDT)

Or, another option: If you build powerline ethernet into the server, build it straight into the power plug/power supply. One plug both powers the unit and hooks it to powerline networking. Include a separate GigE port or two. Then, as soon as you plug it into power, if the powerline stuff works, great, it'll be networked. But there'll always be another method that's known to work. Make sure you can turn off the powerline stuff (or that it's off by default) in case it hashes other things by adding RFI to the power wiring. Also see what effect it has on nearby Marvell chips :-). (PS: If nobody builds a PC power supply that includes powerline networking on the same cord, maybe there's a reason why.)

Actually, the reason is mostly due to the economics of the PC industry...
It won't affect nearby Marvell chips, but it will greatly affect shortwave AM and SSB radio reception in the surrounding area. I am not a fan of powerline networking by any means, but if it worked, in many situations it would be a better candidate than 802.11a, the other economic (no new wiring) option. 802.11g is out as we don't want the backhaul in the same spectrum as used for the mesh networking. --wad

Failure Points

Drives and fans will get you. If you can't eliminate them, at least plan for frequent failures at awkward moments. AlbertCahalan 10:32, 22 March 2007 (EDT)

The plan is to have no fans in the XS school servers. The very early XSX prototypes (off-the-shelf) will probably have a fan (w. bearings) in the power supply, also helping with the overall system (disk and processor) cooling.
Drives are a different matter. We need them to economically provide the amount of storage the school server needs. And due to economic factors, we aren't considering RAIDs. The XS server will, however, include sufficient OS on NAND flash that it will continue providing networking functionality even if the disk fails. And user-generated content on the school server should be backed up continuously. --wad
Not sure what you mean by "user generated content on the school server should be backed up continuously". Where "should" it be backed up TO? And do you mean that the humans there "should" back it up (somehow), or that the server software "will" back it up?
There's no substitute for offline backups -- particularly offline backups with write-protect switches. No software glitch can erase them while offline, nor can they be easily trashed when plugged in for recovery, if the write protection is engaged. (Can you tell I was trained in the mainframe era?) External hard drives (eSATA or USB) are the obvious thing, but few come with working write-protect switches.
If a school has two XS or XSX, they ought to back each other up automatically. Big drives that can hold twice the data have only a small price premium over small drives. Making this automatic for two servers would be pretty easy; but getting the general case right (N servers on the LAN, or N servers divided up on several locally linked LANs) is hard without manual configuration. The idea is that no piece of data is stored on a single spinning drive; it has to be replicated until it exists on two drives, preferably on two different servers. And then, if one fails and a new drive is installed, there's a simple path to full recovery. (A company I work with, called ReQuest.com, makes home theatre music systems with hard drives. They automatically back up your music collection if you have several of their units. Saves many customers from trouble. Making them buy two servers if they want backup is expensive, so it probably shouldn't be the only way to back things up. But since many customers will already have two or more, having them back each other up is almost free, and vastly improves your chances of keeping the kids' and teachers' data.)
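One way to automate peer selection for the general N-server case, sketched here as hypothetical code (none of these names come from the XS software), is to arrange the servers in a ring so each one replicates to its successor; with exactly two servers this degenerates to the mutual-backup case described above:

```python
# Hypothetical sketch: pick a backup peer for each server so every
# piece of data lives on two different machines' drives.
def backup_peers(servers):
    """Ring topology: each server replicates to the next one."""
    if len(servers) < 2:
        return {}  # a lone server has no peer to replicate to
    return {s: servers[(i + 1) % len(servers)]
            for i, s in enumerate(servers)}

print(backup_peers(["xs1", "xs2"]))
# {'xs1': 'xs2', 'xs2': 'xs1'}  -- two servers back each other up
print(backup_peers(["xs1", "xs2", "xs3"]))
# {'xs1': 'xs2', 'xs2': 'xs3', 'xs3': 'xs1'}
```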

Power Budget

You are never going to make an 8W power budget for the server. It is utterly unrealistic; a 3.5" hard drive will take that alone. One thing I would do is pick a wide-range input power supply for the main board; there are lots of these available that take anything from 6-30V input, similar to the XO itself. You can then use a battery-backed PSU instead of a UPS. These are much more efficient, have fewer components, and are thus more reliable. --jabuzzard

Hehe. I just made an account to leave exactly the same comment. For example, the Seagate 7200.10 300GB drive takes 9.3 watts IDLE. You'll have better luck with 2.5" laptop drives: the Toshiba MK2035GSS is around 0.85 watts idle and about 2 watts under load, but I expect such drives to remain more expensive into the near future. If the 8W number is a real target, it might make sense to try to keep the normal working set in RAM/flash so that the drive can sleep most of the time. --Gmaxwell 01:28, 15 April 2007 (EDT)
Not only is a 2.5" drive much more costly, the performance tends to suck. The largest 7200RPM 2.5" drive, a Seagate Momentus 7200.2 is only 160GB, but for the same price you can get a 500GB 3.5" drive! The largest 2.5" drive is a Fujitsu Mobile MHX2300BT, but this only has a spindle speed of 4200RPM and is not yet shipping. The largest currently available capacity on a 2.5" drive is 200GB still with a spindle speed of 4200RPM, at the same price as a 750GB 3.5" drive. There is no way you are going to handle 50 users with a 4200RPM drive, and for the price of one 2.5" 7200RPM drive you could have a RAID-1 of 250GB 3.5" drives. It is clear that the power budget was proposed by someone with not the faintest clue. --Jabuzzard
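Comparing the drive figures quoted in this thread against the proposed 8 W budget makes the point numerically (the wattages are the ones cited above; everything else is just arithmetic):

```python
BUDGET_W = 8.0  # proposed server power budget

# (drive, quoted power draw in watts) -- figures from the thread above
drives = [
    ('Seagate 7200.10 300GB, 3.5" idle', 9.3),
    ('Toshiba MK2035GSS, 2.5" idle', 0.85),
    ('Toshiba MK2035GSS, 2.5" load', 2.0),
]

for name, watts in drives:
    verdict = "over budget by itself" if watts > BUDGET_W else "fits"
    print(f"{name}: {watts} W ({watts / BUDGET_W:.0%} of budget, {verdict})")
```

So the 3.5" drive alone blows the whole budget before the board, RAM, and radios draw a single watt.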

External SATA

The suggested specification has USB2 for adding extra hard drives. The thing is that hard drive performance over USB2 is terrible; even FireWire at 400Mbps hugely outperforms it. From a cost perspective, however, most chipsets these days come with at least two SATA ports, and many have four. All you need to do is bring these to an external header. An eSATA enclosure is cheaper, simpler, more reliable, lower power, and faster than messing about with USB2. That said, I would look to have at least two, if not more, hot-pluggable slots to take SATA drives in the main server case, to enable swapping and upgrading them. Having drives dangling off USB or eSATA cables in the proposed deployment scenarios for a server is a recipe for disaster, and the drives are going to fail. --Jabuzzard 03:15, 22 April 2007 (EDT)