Talk:XS Server Hardware

Scalability

Doing some quick math based on Argentina Statistics: at the national level there are 5,151,856 kids in grades K-12 in 27,888 public schools, giving ~180 laptops / school (or ~3 servers / school).
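
For reference, here is the arithmetic behind those figures as a quick Python sketch (the 60-students-per-server capacity is an illustrative assumption, not a spec):

 # Back-of-envelope check of the Argentina figures above.
 students = 5151856          # K-12 students in public schools (national)
 schools = 27888             # public schools
 students_per_server = 60    # assumed server capacity (illustrative)
 
 laptops_per_school = students / schools                        # ~184.7, i.e. ~180
 servers_per_school = laptops_per_school / students_per_server  # ~3.1, i.e. ~3
 print(round(laptops_per_school), round(servers_per_school, 1))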

Does the internet connection scale for larger schools? Does a 60-student school have the same access as a 240-student school? --Wad 00:22, 2 February 2007 (EST)

Good or bad? Don't know.

PRO
Multiple servers could improve coverage, redundancy, and available storage.
CON
Higher cost per school (depending on the alternatives), more points of failure, and more machines to administer.
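
To put a number on the redundancy PRO: with independent servers, overall availability improves quickly with each added box. A minimal sketch, assuming an illustrative 98% per-server uptime (not a measured figure):

 # Probability that at least one of n servers is up.
 # per_server_uptime is an illustrative assumption, not a measured figure.
 def at_least_one_up(n, per_server_uptime=0.98):
     return 1 - (1 - per_server_uptime) ** n
 
 for n in (1, 2, 3):
     print(n, "server(s):", at_least_one_up(n))
 # 1: 0.98   2: 0.9996   3: 0.999992
 # Assumes independent failures; shared power or weather makes this optimistic.

The CON side is the flip of the same math: each added server is one more box that can fail and must be administered.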

We are deploying just one (overpowered) school server in each early trial, and will get a better idea of the requirements from those deployments. The final number may be 120, or 50... --Wad 00:22, 19 March 2007 (EDT)

As a note specific to Argentina, the educational system is split into four three-year chunks: EGB I, EGB II, EGB III, and polimodal (only the EGBs being compulsory). Multiple servers could support 'administrative' layers or frontiers between these different 'levels', acting as a kind of 'sub-network' division.

This is just a guess-list; please add at will. PoV is subjective, so a con may sometimes be thought of as a pro depending on the overall context. For example, redundancy is good if it works simply and straightforwardly out of the box(es), but if administering multiple servers requires a totally different approach than administering just one, it becomes a con, given the penalty it imposes on growth.

There is no question that a single school server will be targeting a maximum number of students; the only question is "how many?" I can design a server capable of serving 20 students, which will fail miserably if you try to use it stand-alone in a school of 200. I can design a server which can handle 800 students, but it would be overkill for a school of 30. --Wad 00:22, 2 February 2007 (EST)
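
One hedged way to put numbers behind "how many?" is to size by concurrent network load. Everything in this sketch is an illustrative assumption (throughput, per-student demand, concurrency), not an XS requirement:

 # Rough sizing: students one server can carry given assumed load.
 def max_students(server_mbps, per_student_kbps, active_fraction=0.25):
     # active_fraction: share of students hitting the server at once (assumption)
     return int(server_mbps * 1000 / (per_student_kbps * active_fraction))
 
 print(max_students(10, 50))    # light per-student use  -> 800
 print(max_students(10, 500))   # heavy per-student use  -> 80

The same box lands at 800 or 80 students depending entirely on the assumed workload, which is exactly why the trials are needed.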

One possible XSX hardware

I've been running FC6 on an HP dc5750 (and just ordered another). It can be bought as a standard minitower: AMD chips, ATI Radeon XPRESS 1150 graphics, Broadcom GigE, and ten USB ports (2 on internal headers). It has fans, but they are very slow/quiet in normal operation; I can't hear it in my living room unless my head is very close to the fan. I bought it with the Athlon64 X2 Dual Core Processor 3800+ (35W max), which cuts the CPU's maximum power/heat in half, and with the 80PLUS power supply, which is more efficient and thus produces less power/heat ($40 and $20 options). It has two SATA disk bays and a four-port motherboard SATA controller; no PATA. There are 2 PCI slots, 1 PCIe x1, and 1 PCIe x16 (you'd need to add an Ethernet card, since this only has one GigE).

PRO: available from stock, or with a few weeks' lead time for a custom config. Quiet, low power, no Microsoft tax (buy the "FreeDOS" version).

CON: the ATI chip is undocumented and poorly supported (works great for 2D graphics, nothing beyond that). Minor issue when running FC6: the Linux driver for the ATI chip barfs over the optional LightScribe DVD±RW for unknown reasons, even though the FC6 install DVD worked great during installation from the same drive! Solved by adding a $17 Rosewill RC-210 PCI SATA card for the DVD (it also gives you an external SATA port for a faster external hard drive than USB can provide). Linux has no trouble accessing disk drives on the motherboard SATA - just the LightScribe DVD writer. lm-sensors can't see the motherboard sensors; it's probably the *%&$$ ATI chip again. (Maybe your buddies at AMD/ATI can help you with this.)

HP's optional EM718AA memory card reader (plugs into the floppy slot and a motherboard USB header) does NOT support SDHC, so avoid it. I tossed theirs, and added a $30 YE-DATA YD-8V08 memory card reader and floppy drive, which DOES support SDHC. Newegg.com has it (and the RC-210 SATA).

I've noticed that when plugging in USB or SD cards, sometimes Linux doesn't immediately notice and DTRT (do the right thing). If you plug in something else on USB, then it figures it out. I don't know whether this is a hardware problem or an FC6 software problem.

Thanks for the tip. We are currently looking for fanless systems, and are exploring mini-ITX motherboards. They easily meet the requirements listed for the XSX. --wad

Should spec GigE w/auto-crossover

The server should talk GigE. In 2007 there's no point in deploying 100BASE-T. In any school with more than one server, these boxes are all going to want to talk to each other, accessing each other's disks, etc. Ethernet is not just the uplink to the Internet; it's the disk access bus for the whole school. Why bottleneck NFS, remote backups, and DVD/CD reading and burning just to save 15c?
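
To make the bottleneck concrete, here's a quick sketch of nightly-backup transfer times (the 40 GB backup size and 70% line-rate efficiency are assumptions for illustration):

 # Hours to move a backup over 100BASE-T vs. GigE.
 def hours(gigabytes, link_mbps, efficiency=0.7):
     bits = gigabytes * 8e9
     return bits / (link_mbps * 1e6 * efficiency) / 3600
 
 print("40 GB over 100 Mb/s: %.1f h" % hours(40, 100))    # ~1.3 h
 print("40 GB over 1 Gb/s:   %.2f h" % hours(40, 1000))   # ~0.13 h (~8 min)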

Actually, it is more like $6, and the wiring needed to make use of it is slightly more expensive. Yes, it would be nice to have 1000BASE-T for the school backbone, but it is not worth making it a requirement yet. In small schools with a single server, it is a wasted cost. Even in larger schools, the backbone may be wireless.

I strongly advise making the Ethernet port(s) on the server do the automatic-crossover thing, so that there's only one kind of Ethernet cable, whether you're using a hub or just plugging two servers together directly with cat5. And put a link light on it, so there's immediate physical-level feedback when it can see the other end of the cable.

Very good suggestions. Automatic crossover is part of 1000BASE-T, but still might carry a cost on 100BASE-T. The need for link lights is well known.

I suggest sidelining the powerline Ethernet stuff. The last thing you want to have to debug in the field is half-assed networking. (I finally bought a USB Ethernet adapter for my XO - a Linksys USB200M - and am much happier.) Major companies have been pushing powerline Ethernet for years and they get no traction. Why? I haven't tried it myself, but the feeling is that it isn't reliable, doesn't deliver its spec'd throughput, etc. RJ45/cat5 Ethernet is so simple, cheap, and widely deployed that it's almost impossible to screw it up. Once switches replaced hubs (and there were no 50-ohm terminating resistors from the coax days), it all just went plug-and-play; with the widely deployed spanning-tree algorithm, even loops don't faze it. If somebody really wants to run powerline, there are external adapters from cat5 to powerline.

The concern is wiring costs (which are always much more than just the cost of the wire). Power is likely to already be available at all servers. I have tested powerline networking, and it can work fine, but it can also be stymied by devices such as home computers and small electronic appliances whose input circuits try to minimize RF intrusion. Phoneline networking works much better, but phone lines are less likely to be in place than power, and if wiring must be done there, it probably costs the same to install Cat5e as Cat1. --wad

Or, another option: If you build powerline ethernet into the server, build it straight into the power plug/power supply. One plug both powers the unit and hooks it to powerline networking. Include a separate GigE port or two. Then, as soon as you plug it into power, if the powerline stuff works, great, it'll be networked. But there'll always be another method that's known to work. Make sure you can turn off the powerline stuff (or that it's off by default) in case it hashes other things by adding RFI to the power wiring. Also see what effect it has on nearby Marvell chips :-). (PS: If nobody builds a PC power supply that includes powerline networking on the same cord, maybe there's a reason why.)

Actually, the reason is mostly due to the economics of the PC industry...
It won't affect nearby Marvell chips, but it will greatly affect shortwave AM and SSB radio reception in the surrounding area. I am not a fan of powerline networking by any means, but if it worked, in many situations it would be a better candidate than 802.11a, the other economical (no new wiring) option. 802.11g is out, as we don't want the backhaul in the same spectrum as the mesh networking. --wad

Failure points

Drives and fans will get you. If you can't eliminate them, at least plan for frequent failures at awkward moments. AlbertCahalan 10:32, 22 March 2007 (EDT)
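
A quick expectation sketch for spares planning (the MTBF and fleet size below are illustrative assumptions, not vendor figures):

 # Expected drive failures per year across a fleet, using the
 # exponential approximation: failures/yr ~ n * hours_per_year / MTBF.
 def expected_failures_per_year(n_drives, mtbf_hours=500000):
     return n_drives * 8760.0 / mtbf_hours
 
 print(expected_failures_per_year(1000))   # ~17.5 failures/yr per 1000 drives

Even with optimistic MTBF numbers, a large deployment should budget for a steady trickle of drive and fan replacements.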