Talk:XS Server Hardware
Please keep leaving comments. They are read, and taken seriously. The XS spec will get overhauled in the next week to reflect recent decisions and plans.--Wad 10:56, 23 April 2007 (EDT)
Scalability
Doing some quick math based on Argentina Statistics, at the national level you have 5,151,856 kids in K-12 grades in 27,888 public schools, giving ~180 laptops per school (or ~3 servers per school).
- Does the internet connection scale for larger schools? Does a 60-student school have the same access as a 240-student school?--Wad 00:22, 2 February 2007 (EST)
Good or bad? Don't know.
- PRO
- Multiple servers could improve coverage, redundancy and available storage.
- CON
- Higher costs per school (depending on alternatives), multiple failure points and/or administration.
We are deploying just one (overpowered) school server per early trial, and will obtain a better idea of the requirements. The final number may be 120, or 50...--Wad 00:22, 19 March 2007 (EDT)
As a note specific to Argentina, the educational system is split into four three-year chunks: EGB I, EGB II, EGB III + polimodal (only the EGBs being compulsory). Multiple servers could support 'administrative' layers or frontiers between these different 'levels', acting like some 'sub-network' division.
This is just a guess-list. Please add at will. PoV is subjective... so a con may sometimes be thought of as a pro depending on the overall context. For example, redundancy could be thought of as good if it's simple and straightforward out-of-the-box(es), but if administering the multiple servers must be done in a totally different way than when you just needed one, then it can be thought of as a con given the penalty to growth.
- There is no question that a single school server will be targeting a maximum number of students; the only question is "how many?" I can design a server capable of serving 20 students, which will fail miserably if you try to use it stand-alone in a school of 200. I can design a server which can handle 800 students, but it would be overkill for a school of 30.--Wad 00:22, 2 February 2007 (EST)
One possible XSX hardware
I've been running FC6 on an HP dc5750 (and just ordered another). It can be bought in a standard minitower. AMD chips, ATI Radeon XPRESS 1150 graphics, Broadcom gigE. Ten USB slots (2 on internal headers). Has fans, but they are very slow/quiet in normal operation. I can't hear it in my living room unless my head is very close to the fan. I bought it with the Athlon64 X2 Dual Core Processor 3800+ (35W max), which reduces the CPU max power/heat by half, and with the 80PLUS power supply which is more efficient, thus less power/heat. ($40 and $20 options.) Has two SATA disk slots and 4-way mobo SATA controller. No PATA. 2 PCI slots, 1 PCIe 1x, 1 PCIe 16x (you'd need to add an Ethernet card, since this only has one GigE).
PRO: available from stock, or with a few weeks leadtime for custom config. Quiet, low power, no Microsoft tax (buy the "FreeDOS" version).
CON: ATI chip undocumented and poorly supported (works great for 2D graphics, but nothing beyond that). Minor issue when running FC6: the Linux driver for the ATI chip barfs over the optional LightScribe DVD+-RW, for an unknown reason, though the FC6 install DVD worked great during installation from the same drive! Solved by adding a $17 Rosewill RC-210 PCI SATA card for the DVD (it also gives you an external SATA port for a faster external hard drive than USB can provide). Linux has no trouble accessing disk drives on the motherboard SATA - just the LightScribe DVD writer. lm-sensors can't see the motherboard sensors; it's probably the *%&$$ ATI chip again. (Maybe your buddies at AMD/ATI can help you with this.)
HP's optional EM718AA memory card reader (plugs into the floppy slot and a motherboard USB header) does NOT support SDHC, so avoid it. I tossed theirs, and added a $30 YE-DATA YD-8V08 memory card reader and floppy drive, which DOES support SDHC. Newegg.com has it (and the RC-210 SATA).
I've noticed that when plugging in USB or SD cards, sometimes Linux doesn't immediately notice and do the right thing. If you plug in something else on USB, then it figures it out. Don't know whether this is a hardware or an FC6 software problem.
- Thanks for the tip. We are currently looking for fanless systems, and are exploring miniITX motherboards. They easily meet the requirements listed for the XSX.--wad
Should spec GigE w/autoX
The server should talk GigE. In 2007 there's no point in deploying 100BASE-T. In any school with more than one server, these boxes are all going to want to talk to each other, accessing each others' disks, etc. Ethernet is not just the uplink to the Internet; it's the disk access bus for the whole school. Why bottleneck NFS, remote backups, DVD/CD reading and burning, etc to save 15c?
- Actually, it is more like $6, and the wiring to make use of it is slightly more expensive. Yes, it would be nice to have 1000baseT for the school backbone but it is not worth making a requirement, yet. In small schools with a single server, it is a wasted cost. Even in larger schools, the backbone may be wireless.
- The difference between a brand new PCI card complete with packaging and shipped to me with 1GbE and 100Mbps is less than $6 in the UK. I would expect the cost of an integrated solution on the board to be around $2 extra --Jabuzzard 03:02, 22 April 2007 (EDT)
I strongly advise making the ethernet port(s) on the server do the automatic-crossover thing, so that there's only one kind of Ethernet cable, whether or not you're using a hub or just plugging two servers together directly with cat5. And put a link light on it, so there's immediate physical level feedback when it can see the other end of the cable.
- Very good suggestions. Automatic crossover is part of 1000baseT, but still might carry a cost on 100baseT. The need for link lights is well known.
I suggest sidelining the powerline ethernet stuff. The last thing you want to have to debug in the field is half-assed networking. (I finally bought a USB ethernet card for my XO - Linksys USB200M - and am much happier.) Major companies have been pushing powerline ethernet for years and they get no traction. Why? Haven't tried it myself but the feeling is that it isn't reliable, doesn't get real spec'd throughput, etc. RJ45/cat5 ethernet is so simple, cheap, and so widely deployed that it's almost impossible to screw it up. Once switches replaced hubs (and there were no 50 ohm terminating resistors from the coax days) it all just went plug-and-play; with the widely deployed spanning-tree algorithm, even loops don't faze it. If somebody really wants to run powerline, there are external adapters from cat5 to powerline.
- The concern is wiring costs (which are always much more than just the cost of the wire). Power is likely to already be available at all servers. I have tested powerline networking, and it can work fine, but can also be stymied by some devices such as home computers and small electronic appliances which have input circuits which try to minimize the RF intrusion. Phoneline networking works much better, but phone lines are less likely to be in place than power, and if wiring must be done there it probably costs the same to install Cat5e as Cat1. --wad
- The main cost in doing wiring is the labour. There is a massive difference between putting in a single link to connect two servers and doing full structural wiring. The first can be done quite cheaply, the second requires all sorts of additional stuff like patch rooms, panels, racks, containment, ... Powerline networking is a hugely expensive technology that performance wise sucks. Given the cheap cost of labour in the target countries putting in Ethernet links is the sensible route. It also builds local expertise up which can only be a good thing. --Jabuzzard 03:02, 22 April 2007 (EDT)
Or, another option: If you build powerline ethernet into the server, build it straight into the power plug/power supply. One plug both powers the unit and hooks it to powerline networking. Include a separate GigE port or two. Then, as soon as you plug it into power, if the powerline stuff works, great, it'll be networked. But there'll always be another method that's known to work. Make sure you can turn off the powerline stuff (or that it's off by default) in case it hashes other things by adding RFI to the power wiring. Also see what effect it has on nearby Marvell chips :-). (PS: If nobody builds a PC power supply that includes powerline networking on the same cord, maybe there's a reason why.)
- Actually, the reason is mostly due to the economics of the PC industry...
- It won't affect nearby Marvell chips, but it will greatly affect shortwave AM and SSB radio reception in the surrounding area. I am not a fan of powerline networking by any means, but if it worked, in many situations it would be a better candidate than 802.11a, the other economic (no new wiring) option. 802.11g is out as we don't want the backhaul in the same spectrum as used for the mesh networking. --wad
Failure Points
Drives and fans will get you. If you can't eliminate them, at least plan for frequent failures at awkward moments. AlbertCahalan 10:32, 22 March 2007 (EDT)
- The plan is to have no fans in the XS school servers. The very early XSX prototypes (off-the-shelf) will probably have a fan (w. bearings) in the power supply, also helping with the overall system (disk and processor) cooling.
- Drives are a different matter. We need them to economically provide the amount of storage the school server needs. And due to economic factors, we aren't considering RAIDs. The XS server will, however, include sufficient OS on NAND flash that it will continue providing networking functionality even if the disk fails. And user generated content on the school server should be backed up continuously. --wad
- Not sure what you mean by "user generated content on the school server should be backed up continuously". Where "should" it be backed up TO? And do you mean that the humans there "should" back it up (somehow), or that the server software "will" back it up?
- There's no substitute for offline backups -- particularly offline backups with write-protect switches. No software glitch can erase them while offline, nor can they be easily trashed when plugged in for recovery, if the write protection is engaged. (Can you tell I was trained in the mainframe era?) External hard drives (eSATA or USB) are the obvious thing, but few come with working write-protect switches.
- If a school has two XS or XSX, they ought to back each other up automatically. Big drives that can hold twice the data have only a small price premium over small drives. Making this automatic for two servers would be pretty easy; but getting the general case right (N servers on the LAN, or N servers divided up on several locally linked LANs) is hard without manual configuration. The idea is that no piece of data is stored on a single spinning drive; it has to be replicated til it exists on two drives, preferably on two different servers. And then, if one fails and a new drive is installed, there's a simple path to full recovery. (A company I work with, called ReQuest.com, makes home theatre music systems with hard drives. They automatically back up your music collection if you have several of their units. Saves many customers from trouble. Making them buy two servers if they want backup is expensive, so it probably shouldn't be the only way to back things up. But since many customers will already have two or more, having them back each other up is almost free, and vastly improves your chances of keeping the kids' and teachers' data.)
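To make the mutual-backup idea above concrete, here is a minimal sketch in Python, assuming two servers that can reach each other over SSH and that rsync is installed on both; the peer hostname and directory paths are placeholders, not anything from the XS spec.
<pre>
# A minimal sketch, not the XS backup mechanism: push user data to a peer
# server with rsync over SSH. PEER and the paths are hypothetical.
import subprocess
import sys

PEER = "xs-2.school.lan"            # assumed hostname of the second server
SOURCE = "/library/users/"          # assumed location of user-generated data
DEST = "backup@" + PEER + ":/backups/xs-1/users/"

def replicate():
    # --archive keeps permissions and timestamps, --delete mirrors removals,
    # so the peer ends up holding a complete second copy of every file.
    result = subprocess.run(
        ["rsync", "--archive", "--delete", "--compress", SOURCE, DEST])
    return result.returncode

if __name__ == "__main__":
    sys.exit(replicate())
</pre>
Run from cron on each server pointing at the other, this gives the "every file exists on two drives" property for the two-server case; the N-server case would still need the configuration work discussed above.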
Power Budget
You are never going to make an 8W power budget for the server. It is utterly unrealistic; a 3.5" hard drive will take that alone. One thing I would do is pick a wide-range input power supply for the main board; there are lots of these available that take anything from 6-30V input, something similar to the XO itself. You can then use a battery-backed PSU instead of a UPS. These are much more efficient, have fewer components and are thus more reliable. --jabuzzard
- Hehe. I just made an account to leave exactly the same comment. For example, the Seagate 7200.10 300GB drive takes 9.3 watts idle. You'll have better luck with 2.5" laptop drives. The Toshiba MK2035GSS is around 0.85 watts idle and about 2 watts under load, but I expect such drives to be more expensive into the near future. If the 8W number is a real target, it might make sense to try to keep the normal working set in RAM/flash so that the drive can sleep most of the time. --Gmaxwell 01:28, 15 April 2007 (EDT)
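As a rough illustration of that last suggestion (keep the working set in RAM/flash so the data disk can sleep), the snippet below asks the drive to spin down after ten minutes of inactivity using hdparm; the device name is an assumption and the right timeout would need experimenting with.
<pre>
# Illustrative only: ask the data disk to enter standby after inactivity.
# With hdparm, -S 120 means 120 * 5 s = 10 minutes. Device name is assumed.
import subprocess

DATA_DISK = "/dev/sda"   # hypothetical data drive

subprocess.run(["hdparm", "-S", "120", DATA_DISK], check=True)
</pre>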
- Not only is a 2.5" drive much more costly, the performance tends to suck. The largest 7200RPM 2.5" drive, a Seagate Momentus 7200.2, is only 160GB, but for the same price you can get a 500GB 3.5" drive! The largest 2.5" drive is a Fujitsu Mobile MHX2300BT, but this only has a spindle speed of 4200RPM and is not yet shipping. The largest currently available capacity on a 2.5" drive is 200GB, still with a spindle speed of 4200RPM, at the same price as a 750GB 3.5" drive. There is no way you are going to handle 50 users with a 4200RPM drive, and for the price of one 2.5" 7200RPM drive you could have a RAID-1 of 250GB 3.5" drives. It is clear that the power budget was proposed by someone without the faintest clue. --Jabuzzard
External SATA
The suggested specification has USB2 for adding extra hard drives. The thing is that hard drive performance on USB2 is terrible; even FireWire at 400Mbps hugely outperforms it. However, from a cost perspective, most chipsets these days come with at least two, and many have four, SATA ports. All you need to do is bring these to an external header. An eSATA enclosure is cheaper, simpler, more reliable, uses less power and is faster than messing about with USB2. That said, I would look to have at least two, if not more, hot-pluggable slots to take SATA drives in the main server case to enable swapping and upgrading them. Having drives dangling off USB or eSATA cables in the proposed deployment scenarios for a server is a recipe for disaster, and the drives are going to fail. --Jabuzzard 03:15, 22 April 2007 (EDT)
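If hot-pluggable SATA/eSATA slots are provided, Linux may need a nudge to notice a newly inserted drive. The sketch below shows the standard sysfs rescan mechanism; it is illustrative only and must run as root.
<pre>
# Sketch: force a rescan of every SCSI/SATA host so a newly hot-plugged
# eSATA drive shows up. Writing "- - -" to the sysfs scan file is the
# standard mechanism.
import glob

for scan_file in glob.glob("/sys/class/scsi_host/host*/scan"):
    with open(scan_file, "w") as f:
        f.write("- - -\n")
</pre>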
Updates
Should the upgrades possibly be downloaded from the Internet, and then flashed to the respective devices (BIOS, mass storage)? In the event of the server's internal storage becoming corrupt, the server should be able to boot off a USB drive (as no optical drives will be available).
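A hedged sketch of what such an update flow could look like, assuming nothing about the eventual XS tooling: download the image, verify a published SHA-256 checksum, and only then hand it to a flashing tool. The URL, checksum value and the use of flashrom are placeholders chosen for illustration.
<pre>
# Hedged sketch of an Internet update flow; URL, checksum and the use of
# flashrom are placeholders, not an actual XS update mechanism.
import hashlib
import subprocess
import urllib.request

IMAGE_URL = "http://updates.example.org/xs-firmware.rom"   # hypothetical
EXPECTED_SHA256 = "0" * 64                                  # published alongside the image

def fetch_and_flash():
    data = urllib.request.urlopen(IMAGE_URL).read()
    if hashlib.sha256(data).hexdigest() != EXPECTED_SHA256:
        raise RuntimeError("checksum mismatch, refusing to flash")
    path = "/tmp/xs-firmware.rom"
    with open(path, "wb") as f:
        f.write(data)
    # flashrom is one existing tool for writing firmware images;
    # "-p internal" selects the onboard flash chip.
    subprocess.run(["flashrom", "-p", "internal", "-w", path], check=True)

if __name__ == "__main__":
    fetch_and_flash()
</pre>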
Suggested specification
I have been mulling it over for the last couple of weeks and here are my thoughts on the XS, based on my experience of trying to do something similar for a small, low-powered ethernet/wireless gateway-cum-home-server. Obviously I had to go with off-the-shelf components, but much of this is still applicable. I managed a 20W system, though much of that was down to using a 2.5" drive.
One presumes that you are having custom boards made specifically for the XS here. I also presume that it is not quite as cost-sensitive as the XO due to the smaller numbers.
Processor
I would go with an AMD Geode LX-900; the advantage is that it gives you the same tool chain as the XO. That means you can easily develop software for an XS without the complications of cross-compiling (and the storage penalty), or of having a tool chain on your server (not a terribly sensible idea as it makes it that bit easier for hackers). The power consumption is only a bit over the LX-700 in the XO, but the extra CPU speed is worth it.
For RAM I would use at least 1GB, the reason being that it sounds like you are going to be using a standard desktop hard drive for the XS. I don't know exactly what the access pattern is going to be, but 100 users on a 7200RPM drive is going to be tricky, and you need lots of RAM to compensate. It should also mean you don't need swap, which makes life simpler.
Display
Assuming that you are using the same OpenFirmware/LinuxBIOS, or even a pure OpenFirmware choice, I would drop keyboard and display entirely and use a serial console. A cheap USB-to-serial converter can then be used to do the stuff that would normally require a keyboard and monitor. Just make sure you use a decent D socket; the cheap ones are only good for 50 mating cycles, whereas the best are good for 500.
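For illustration, talking to such a serial console from a laptop through a USB-to-serial converter could look like the following pyserial sketch; the device name and baud rate are assumptions about the setup, not part of any spec.
<pre>
# Sketch of poking the serial console from a laptop via a USB-to-serial
# converter, using the pyserial library.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
    console.write(b"\r\n")          # nudge the console to print a prompt
    print(console.read(1024).decode("ascii", errors="replace"))
</pre>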
There are options for serial consoles over ethernet using IPMI, and it might even work wirelessly, though this might be a bit of a security risk.
Normal maintenance could be done either via ssh or a custom web interface.
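As a sketch of what a minimal custom web interface might look like, assuming Python 3 on the server: a read-only status page with no authentication, so an illustration of the idea rather than something to deploy as-is.
<pre>
# Minimal read-only status page; no authentication, illustration only.
import http.server
import shutil
import subprocess

class StatusHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        total, used, _free = shutil.disk_usage("/")
        uptime = subprocess.run(["uptime"], capture_output=True,
                                text=True).stdout.strip()
        body = ("<html><body><h1>XS status</h1>"
                "<p>%s</p>"
                "<p>Root filesystem: %d GiB used of %d GiB</p>"
                "</body></html>" % (uptime, used // 2**30, total // 2**30)
                ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), StatusHandler).serve_forever()
</pre>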
Power
A wide-ranging DC input like the XO's should do the trick. That would allow lots of power options, from universal AC-DC power bricks to solar panels. Note you are going to need significant amounts of current at 12V to power 3.5" drives if you use those.
No server should be run without a UPS of some sort. The most efficient way is to build this into the system as a battery-backed power supply, more like a laptop than a traditional separate AC-DC-AC UPS. I was going to suggest sealed lead-acid batteries, but LiFePO4 would also work and is more environmentally friendly. A scheme where several XO batteries are used for this to boost capacity would be worth exploring. Commonality of components is always a good idea.
Storage
I would stick in at least 512MB, if not 1GB, of onboard flash to take the OS. If the machines have enough RAM then the magnetic storage can be used purely for data storage. I am assuming that you are going to use 3.5" drives for the main storage for cost reasons, though 2.5" is much lower power.
For the main storage I would provide four SATA interfaces with NCQ. The NCQ should help a bit with dealing with 100 users on a 7200RPM drive, and it leaves an upgrade path towards SAS (a SAS controller will also drive SATA disks). Personally I would avoid external storage altogether and provide four hot-swap disk caddies per machine. Firstly, hard drives fail, and will be the number one failure point by quite some margin; making it easy to change them when they go wrong is a good idea. Secondly, external boxes are prone to damage, being unplugged, etc.; they also require power, and it is far more efficient if that power is provided by the main machine. You can provide cheap plastic fillers for unused slots. There are plenty of enclosures that fit four hot-swap drives into three half-height 5.25" bays that would be suitable.
I would reconsider the rejection of RAID. Hard drives fail, and this is going to bite you real hard if you only have single disks. I cannot emphasise this enough: don't do it. Along the same lines, some hardware-accelerated RAID would be worth considering. Also consider some of the M+N cluster file systems, for automatic fail-over in schools with more than one XS. The mesh networking could make it all seamless; a server goes down, but through the mesh you pick up a second server and just keep going.
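If software RAID were adopted as argued above, a degraded array is easy to spot programmatically: /proc/mdstat shows an underscore inside the status brackets (e.g. "[U_]") for a missing member. A small illustrative check, with the response to a failure left open:
<pre>
# Illustrative check for degraded md arrays; a missing member shows up as
# "_" inside the status brackets of /proc/mdstat (e.g. "[U_]").
import re

def degraded_arrays():
    with open("/proc/mdstat") as f:
        text = f.read()
    bad = []
    for name, status in re.findall(r"^(md\d+).*?\[([U_]+)\]", text,
                                   re.M | re.S):
        if "_" in status:
            bad.append(name)
    return bad

if __name__ == "__main__":
    print("Degraded arrays:", ", ".join(degraded_arrays()) or "none")
</pre>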
SMART monitoring hooked up to some external alarm indicator would be sensible, especially with RAID.
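A hedged sketch of that SMART-to-alarm hookup: poll each disk with smartctl and log a critical message as the stand-in alarm action. The device names are assumptions, and driving a real buzzer or LED would be hardware-specific.
<pre>
# Sketch: poll each disk's SMART health and raise an alarm when a drive
# reports it is failing. syslog stands in for a real buzzer/LED.
import subprocess
import syslog

DISKS = ["/dev/sda", "/dev/sdb"]   # assumed data drives

def healthy(disk):
    # "smartctl -H" prints the drive's overall health self-assessment and
    # exits non-zero if the drive reports a failing status (or cannot be read).
    return subprocess.run(["smartctl", "-H", disk],
                          capture_output=True).returncode == 0

for disk in DISKS:
    if not healthy(disk):
        syslog.syslog(syslog.LOG_CRIT, "SMART problem reported on " + disk)
</pre>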
Wireless
I would use USB2-based wireless adapters (thumb-drive style) on the end of a USB extension cable. The reason behind this is that it allows for easy antenna placement away from the machine without having to worry about precision coax cables. A wireless antenna near a metal case sucks, as the case tends to act as a screen. With USB you can get 5m/15' from the case, no problem.
Networking
You only need one actual ethernet link, and I would make it 1GbE. Forget about powerline; it is expensive and has lots of other problems. Stringing a bit of Cat 5e/6 to link two XSs is really quite cheap. It is only full structured wiring where things get expensive.
For uplinks I would provide a PCI slot, which can be used for ADSL, ISDN, an analogue modem and, if necessary, an extra ethernet port. That keeps things neater and tidier and less prone to unplugging problems than separate boxes. It is also much lower power, and you can use the internal UPS to power it all. For example, a PCI ADSL card is about 2.5W, whereas an external ADSL router with power brick is typically 20W. There are some issues with binary blobs for PCI ADSL cards; you could perhaps commission your own with the firmware stored in on-board flash to avoid these problems. There are also binary blob issues for most PCI fax/modem cards. Again, on-board flash or a real serial port on the modem would fix that.
Enclosure
The bulk of the enclosure size will be for the drives, especially if you go for 3.5" drives. If you want to go for a sealed case, your main problem is going to be the drives, again especially if you go for 3.5". Take a look at the Hush range of machines (http://www.hushtechnologies.net/) if you want to go sealed, but it will add significantly to the cost as you are going to need some large chunks of external aluminium heatsinks and heatpipes. Fanless is one thing; sealed is a whole other ball game.
For drives I really recommend making them hot-swap, something like the following: http://www.span.com/catalog/product_info.php?cPath=18_2001_794&products_id=8319 . I repeat, drives fail. At work my enterprise SCSI drives in dust-free, air-conditioned rooms still fail; I have seen failures within a year on new systems.
I would say something the size of a four-bay 5.25" SCSI case would be about right, if you are going to allow for four internal drives and internal batteries. Something wall-mounting along the lines of the PACK-BOX (http://www.icp-epia.co.uk/index.php?act=viewProd&productId=77) is worth considering.
--Jabuzzard 06:34, 6 May 2007 (EDT)