NAND Testing
This page describes basic testing of NAND Flash devices, controllers, and wear levelling filesystems performed in advance of future XO hardware designs. Raw results from the tests are available here.
Intro
The non-volatile storage subsystem of the XO has a limited design lifetime. It uses an ASIC (the CaFE) to provide an interface to a NAND Flash device. The CaFE supports only limited Flash page sizes, making it unsuitable for future generations of NAND Flash devices. As part of the search for a replacement, OLPC is testing a variety of solutions to gauge their performance.
The goals of the storage subsystem testing are as follows:
- Evaluate the Flash wear leveling algorithms
- Evaluate the storage error rate of the devices
- Evaluate the relative access latency of the devices
Wear Leveling Algorithms Testing
A common flaw in early Flash wear leveling algorithms was only leveling across the remaining unused blocks. The test for this is to fill up most of the disk, then continue to write/erase repeatedly, forcing the write/erase cycles to use the small number of remaining free blocks.
Assume we fill all but 5 MB of the media (leaving 2.5K free blocks). We can continue to write at approx. 250 blocks (0.5 MB) per second. Assuming no wear leveling, this should result in a write failure in approx. 200 thousand seconds (given a 100K cycle lifetime). Assuming naively simple wear leveling over the free blocks, a failure should occur in around one million seconds (100K cycle lifetime), or roughly 11 days.
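As a quick sanity check on this arithmetic, here is a minimal shell calculation using the assumed values above (assumptions from this section, not measurements):

# Time to expected failure under naively simple wear leveling (assumed values)
FREE_BLOCKS=2500        # ~5 MB free at 2 KB per block
CYCLE_LIFETIME=100000   # claimed W/E cycles per block
WRITE_RATE=250          # blocks written per second
echo "$(( FREE_BLOCKS * CYCLE_LIFETIME / WRITE_RATE )) seconds to expected failure"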
Assuming some percent withheld
Managed NAND devices (such as solid-state drives, SD cards, and newer single-chip NAND devices) typically set aside between 4 and 8% of the media for wear leveling and bad block replacement. This complicates the test somewhat, but is ameliorated by the reduced W/E cycle lifetime expected with newer NAND Flash devices.
Assume we fill all but 1 MiB of the media (6% of 4 GiB is roughly 250 MiB, leaving up to 251 MiB/125 KBlocks actually free). Assume a maximum write rate of 500 blocks per second. Assuming naively simple wear leveling, a failure should occur after 5K write cycles of all free blocks, or roughly 750M block writes. This will require around 18 days of continuous writing to trigger.
JFFS2 Test as Implemented
In the case of the XOs with raw NAND and a JFFS2 management layer, we fill all but around 32 MB of the media (leaving 16K blocks). While the device doesn't withhold any blocks for wear leveling, we expect better than naive wear leveling from JFFS2. Given a 1 GiB device (512K blocks) and a W/E lifetime of 100K cycles, we might expect it to take 50 billion block write cycles (100 TB of data written) before we start seeing significant errors.
The current test writes 10K blocks/35 seconds, giving us an expected time of 5.5 years of testing before we see failure due to write fatigue.
Storage Error Rate Testing
There is concern that the error rate of MLC devices is not acceptable for use as the primary storage for Linux. As all of the devices being tested are MLC parts, we have an opportunity to evaluate the error rate of the devices.
If we assume that read errors dominate, we can test about 780 passes of an entire device per machine per week. This can be done in conjunction with other tests (c.f. wear leveling), reducing the coverage/speed but not affecting the results.
Unfortunately, NAND manufacturers indicate that write disturbances are a larger problem than read errors, so error testing can't be this simple. I am proposing to verify the consistency of data stored on the vast majority of the media while writing to the remainder of the media. Note that since there is at least one level of indirection between the test program and the media, it is difficult to restrict the consistency check to the blocks possibly affected by a write disturb error.
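A minimal sketch of the verification half of such a check, assuming the duplicated setA/setB data layout described later on this page (the paths and log file are illustrative, not the actual test.sh):

# Verify the static data set while the "hot" files are rewritten elsewhere
for f in /nand/setA/*; do
    cmp "$f" "/nand/setB/$(basename "$f")" || echo "MISMATCH: $f" >> /usb/logfile
done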
Access Latency Testing
If the wear leveling algorithm is actually functioning, the latency required to complete a write may vary widely. Information about this timing may be gathered as part of other tests.
Unfortunately, obtaining realistic timing requires that the disk be realistically fragmented...
Error Rate Assumptions
The stated write/erase cycle lifetime for the devices we are currently using in the XO is 100K cycles -- OLPC has not verified these claims.
The W/E cycle lifetime for newer storage devices varies. Toshiba claims that its SLC parts have a 10K cycle lifetime, and its MLC parts have a 5K cycle lifetime.
Timing assumptions
Time estimates in this document are made using the following information, obtained by Mitch Bradley:
- JFFS2 reads at between 5.6 and 12 MB/sec (data-dependent), using 100% of the CPU (real time == system time).
  - Current tests show similar bandwidth (with similar large variance!) --wad
- LBA-NAND reads at 5.2 MB/sec, using <1% CPU (real time >> system time).
  - Current tests show closer to 4 MB/s --wad
- JFFS2 writes at 760 kB/sec, using 100% CPU.
  - Current tests show closer to 0.9 MB/s, but again with large variance. --wad
- LBA-NAND writes at 1.25 MB/sec, using <2% CPU.
  - Actual measurements seem closer to 0.7 MB/sec... --wad
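For reference, one way to gather this kind of number (illustrative only; the file names are hypothetical, and the figures above were obtained with Mitch Bradley's own method):

# Read bandwidth and CPU usage: compare "real" against "sys" in the time output
time dd if=/nand/setA/random0.dat of=/dev/null bs=1M
# Write bandwidth: copy pre-generated data (so /dev/urandom speed doesn't
# distort the result) and include the final sync in the timing
time sh -c 'dd if=/usb/random/random0.dat of=/nand/scratch.dat bs=1M && sync'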
Test Plan
The best laid schemes o' mice an' men... --Robert Burns
Samples under Test
The current set of tests is devoted to testing four candidate SD cards for XO-1.5 production. The current status of these tests is recorded here.
These are the storage media and access methods that have been tested:
- JFFS2: Five laptops using existing raw NAND plus JFFS2 software Flash translation
- UBIFS: The successor to JFFS2. Three laptops started testing on Nov. 25.
- SD cards: Four laptops are testing SanDisk Extreme III (Class 6) SD cards. Another three laptops are testing Transcend (Class 6) 4GB cards.
- IDE/NAND (SSD) controllers: We are currently testing three samples from SMI and four samples from Phison.
- LBA-NAND: We had eight laptops with a 4-GB Toshiba LBA-NAND installed. Five laptops with 2 GB of LBA-NAND were also tested. Almost all devices failed catastrophically.
We are actively working to get additional devices into the mix, such as:
- eMMC NAND: Basically an MMC card without the wrapper, available from multiple vendors.
- PCIe/NAND (SSD) controllers: two evaluation units from Marvell are on their way
Wear & Error Test
This will be a combined test which will try to test the wear leveling mechanism of the storage device, while also regularly checking for errors in accessing stored data.
The plan is:
1. While executing from a separate storage device
2. Format as much of the media as possible as a single ext2 partition. The JFFS2 test case will use a JFFS2 partition, and the UBIFS test case will use a UBIFS partition.
3. Create test data filling up all but 32 MB of the partition. This test data will be pseudo-random in nature (white noise), and will be duplicated on the storage device. It has been suggested to instead record signatures of the test data. Since the data files are large (multiple media blocks in size), there is little danger of dual-failure (in both files) causing a comparison to give a false negative.
4. Start a test script which continuously alternates between:
   4.1. Reading a file and its duplicate from the stored data, reporting any differences.
   4.2. Reading a file and its duplicate from the "hot" data, reporting any differences, then overwriting both files with new data.
The test software should log errors onto a storage device other than the device under test.
Step 4.1 walks through a data set too large to fit into the kernel page cache. Done naively, however, step 4.2 isn't effective when the kernel page cache is working, as the files being read were only recently written to the storage media. The fix (available in newer kernels) is to flush the page cache before comparing the files (see http://linux-mm.org/Drop_Caches):
echo 1 > /proc/sys/vm/drop_caches    # drop the page cache (run sync first if dirty data may still be buffered)
This was properly added to version 1.2 of the test program.
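For illustration, a single iteration of step 4.2 might look like the following (a simplified sketch, not the actual test.sh; the hot file names are hypothetical):

sync
echo 1 > /proc/sys/vm/drop_caches                     # force reads to hit the media
cmp /nand/hotA.dat /nand/hotB.dat || echo "MISMATCH in hot data" >> /usb/logfile
dd if=/dev/urandom of=/nand/hotA.dat bs=1M count=32   # overwrite with fresh data
cp /nand/hotA.dat /nand/hotB.dat
sync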
Testing
These are notes detailing the implementation of the testing on the different platforms.
Common
Some elements of the testing, such as the test scripts and the log post-processing, are common to all test platforms.
Test Scripts
In order to minimize the runtime support needed for the testing, both the test and initialization scripts are written in Bourne shell. Sources are available from the OLPC git repository.
Disk Initialization
After formatting and placing a filesystem on a test device, it should be initialized with test data (for the Wear and Error Test). This is a duplicate set of random data, almost filling the device.
After mounting the test device, you can either use a script which automatically fills a directory/partition with matched sets of data:
- fill.sh - a script for filling a partition with matched sets of random data
Or do it manually. Create a directory on it for the first set of data (the name is important), and change to it:
mkdir /nand/setA
cd /nand/setA
Now run the fill_random.sh script to fill half of the test device with a number of 32-MB random data files (the number of files to create is the sole argument.)
- fill_random.sh - a script for generating the random data
For a 4-GB device, this number is usually 58 to 60. For a 2-GB device, this number is usually 27 to 29.
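For reference, a minimal sketch of what such a fill script might do (this is not the actual fill_random.sh from the repository):

#!/bin/sh
# Write $1 files of 32 MB of pseudo-random data into the current directory
COUNT=$1
i=0
while [ "$i" -lt "$COUNT" ]; do
    dd if=/dev/urandom of=random$i.dat bs=1M count=32
    i=$((i + 1))
done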
You can always delete files as needed to drop back below 50% device utilization. Now copy the directory of random data you have just created:
cd ..
cp -r setA setB
You need to ensure that there is at least 30 MB of free space when done; if there is less, delete a single data file from the setA directory. There also shouldn't be more than 50 MB of free space available; if there is, copy additional data files into setB to fill the extra space.
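A quick way to check the remaining free space (using the /nand mount point from above):

df -m /nand    # free space in 1 MB blocks; aim for roughly 30-50 MB free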
In most cases, it is quicker to generate the random data once, placing it on a USB storage device, then using one of the following scripts to transfer it onto the test device:
- fill_jffs.sh - the script actually used to fill the JFFS2 devices
- fill_cp.sh - the script actually used to fill the LBA-NAND devices
Test Script
- test.sh - the script which actually performs the test
- parselogs.py - the script which takes one or more logs and produces statistics
LBA-NAND Initialization
The following are necessary only on LBA-NAND test laptops:
- boot - a directory containing the OS used for the LBA-NAND tests
- setup.sh - a script for setting up LBA-NAND laptops (deprecated, as it is now /etc/init.d/rc.usbnandtest in the boot ramdisk)
UBIFS Initialization
The following are necessary only on UBIFS test laptops:
Logging
In most cases, logging is done to an external USB device. In some systems under test (JFFS2 and UBIFS laptops), this is the only storage media other than the device under test. This approach was used instead of logging the serial console of each laptop due to previous experience trying to collect and maintain serial logs from tens of machines --- the USB bus or serial/USB adapters would occasionally hiccup for unknown reasons and cause the logging to halt.
Logs may be processed using the parselogs.py script. It either takes a list of log files as arguments or processes all log files in the current directory if none are specified. It outputs statistical and error information aggregated from all log files processed.
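Typical usage, assuming the logfile-xxxxx naming used elsewhere on this page:

python parselogs.py logfile-*    # summarize the named logs
python parselogs.py              # or process every log file in the current directory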
Logs are being aggregated at http://dev.laptop.org/~wad/nand/. A summary of each machine's status is shown, with a link to individual log files.
Control
Coming soon, the destruction of a SATA drive through continuous writing...
JFFS2
There are five XOs at 1CC running the tests on top of JFFS2. Build 8.2-760 was freshly installed on the laptops using Open Firmware's copy-nand command.
A problem with the existing driver appeared immediately: all five laptops crashed overnight on 9/22 (about ten hours into the testing, according to the logs), three definitely with the same kernel error (#8615), one with a dark screen, and one with a white screen (not a hardware failure). Three (JFFS1, JFFS2, and JFFS4) were restarted with a serial console attached to log error messages. Further information is on Trac ticket #8615. These crashes have continued with almost daily frequency.
A related problem was that JFFS2 might start a test but, after a couple of hundred write/erase cycles, run out of disk space for further writes. On these machines, I have gradually been deleting read data as free disk space decreases (consumed by fragmentation?).
A kernel patch for #8615 was kindly provided by David Woodhouse, and a kernel RPM by Deepak Saxena. This was applied to all machines, after which these problems have not been seen.
The current test rates are roughly 10 seconds per pass of test step 4.1 and 25 seconds per pass of test step 4.2. This translates into a 6.5 MByte/s read rate and a 0.9 MByte/s write rate.
Laptop | Serial # | Test | Total Written (GiB) |
---|---|---|---|
JFFS1 | CSN748003DB | Wear & Error | 3780 |
JFFS2 | CSN74805706 | Wear & Error | 3788 |
JFFS3 | SHF80702F53 | Wear & Error | 3844 |
JFFS4 | SHF7250022F | Wear & Error | 3758 |
JFFS5 | SHF725004D4 | Wear & Error | 6404 |
Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
A sixth laptop (JFFS6) was briefly used to verify that the kernel bug (#8615) was also present in earlier Sugar releases (such as build 656). Once verified, this laptop was withdrawn from testing.
JFFS2 Setup Notes
If this is the first time, see the next section. If restarting a test, boot the laptop, and insert a USB device containing the test.sh script. Then simply type:
/usb/test.sh
A new logfile will automatically be created on the USB device (in /usb/logfile-xxxxx).
JFFS2 Initialization
Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!
Install a fresh copy of release 8.2-760 from a USB device using Open Firmware:
copy-nand u:\os760.img
Boot, and insert a USB device containing the patched kernel RPM. Install it using:
rpm -ivh kernel-2.6.25-20081025.1.olpc.fix_gc_race.i586.rpm
cp -a /boot/* /versions/boot/current/boot/
Reboot, and insert a USB device containing several scripts:
- fill_jffs.sh - a script for filling the NAND with random data
- fill_random.sh - an alternative script for filling the disk
- random - a directory containing over 400 MB of random data, in 32-MiB files (optional)
- test.sh - a script for running the wear leveling and error checking test
If using an earlier OLPC build (say 656), you will have to install the cmp utility:
yum install diffutils
Create a link from the mount point for the USB device to /usb:
ln -s /media/<USB_DEVICE_NAME> /usb
Now you need to fill the NAND Flash partition ("/" on the stock XO build). This can be done using the same method used for LBA-NAND devices. If the random directory is provided on the USB device, type:
/usb/fill_jffs.sh
An alternative, slower approach to filling the NAND with data, which doesn't require pre-computed random data on the USB device, is to manually:
mkdir /setA
cd /setA
/usb/fill_random.sh 11
cp -r /setA /setB
UBIFS
There are three laptops running these tests on a UBIFS filesystem. Additional information on how the UBIFS image was created and some of Deepak's notes are available.
Laptop | Serial # | Test | Total Written (GiB) |
---|---|---|---|
UBI1 | CSN749030BD | Wear & Error | 1900 |
UBI2 | CSN7440003E | Wear & Error | 2950 |
UBI3 | SHF73300081 | Wear & Error | 3870 |
Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
UBIFS Setup Notes
If this is the first time, see the next section. If restarting a test, boot the laptop, and insert a USB device containing the test.sh script. Then simply type:
/usb/test.sh
UBIFS Initialization
Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!
Install a fresh copy of firmware q2e22 from a USB device using Open Firmware (OFW):
flash u:\q2e22.rom
Boot the laptop, escaping into OFW, and insert a USB device containing the following files:
At the OFW prompt, type:
dev nand
: write-blocks write-pages ;
dend
You will likely get a message like write-blocks isn't unique. You can ignore this message.
update-nand u:\data.img
At this point OFW will erase the flash and copy the contents of the image to flash. When complete, reboot the system.
Now insert a USB device containing
- fill_jffs.sh - a script for filling the NAND with random data
- random - a directory containing over 1 GiB of random data, in 32-MiB files (optional)
- test.sh - a script for running the wear leveling and error checking test
Create a link from the mount point for the USB device to /usb:
ln -s /media/<USB_DEVICE_NAME> /usb
Now you need to fill the NAND Flash partition ("/" on the stock XO build). This can be done using the same method used for LBA-NAND devices. If the random directory is provided on the USB device, type:
/usb/fill_jffs.sh
PATA Flash Controllers
There are currently six PATA Flash controllers undergoing tests, with more samples requested. Each drive was carefully partitioned and a Linux ext2 filesystem placed on it. The system interface to the Flash controller is a PATA driver.
The tests are run from standard Linux desktop machines, each testing one or two systems.
Test Unit | Host | Device | Test | Total Written (GiB) |
---|---|---|---|---|
SMI1 | | SM223 + 1 Hynix MLC | Wear & Error | failed after 2036 GiB |
SMI2 | | SM223 + 1 Hynix MLC | Wear & Error | failed at 1761 GiB |
SMI3 | marvell sd0 | SM2231 + 1 Hynix MLC | Wear & Error | 3436 |
SMI4 | marvell sd1 | SM2231 + 1 Hynix MLC | Wear & Error | 150 |
PHI1 | smi sd0 | Phison 3006 + 1 Hynix MLC | Wear & Error | 2896 |
PHI2 | phison sd0 | Phison 3006 + 1 Hynix MLC | Wear & Error | 2365 |
PHI3 | amd sd0 | Phison 3006 + 1 Hynix MLC | Wear & Error | 1896 |
PHI4 | amd sd1 | Phison 3006 + 1 Hynix MLC | Wear & Error | 1896 |
PHI5 | sawzall sd0 | Phison 3007 + 1 Samsung MLC | Wear & Error | |
PHI6 | smi sd0 | Phison 3007 + 1 Samsung MLC | Wear & Error | |
Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
PATA Flash Setup Notes
If this is the first time, see the next section. If restarting a test, mount the test device, change directories to the device under test, then run the test.sh script:
sudo fsck /dev/sda1        # optional, if not shut down cleanly
sudo mount /dev/sda1 /sd0  # use appropriate device
cd /sd0
ls                         # check state of directory
sudo ~/test.sh
On most modern Linux systems, the PATA drives are assigned /dev/sd[abcdef] device names and enumerated before any SATA drives. The test partition is always the first (e.g. /dev/sda1, /dev/sdb1, etc...)
PATA Flash Initialization
Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!
Touch /ssd when setting up a new test host --- the test script uses this marker to determine where to place the logs.
Partition a new drive using fdisk, placing the first partition at sector 512 (byte offset of 256K). This usually involves the following sequence of commands to fdisk:
u  # change the units to sectors
p  # show the existing partition table
d  # (optional) delete any existing partition
n  # create new partition (primary, number 1, start at sector 512, continue to end of device)
w  # write out new partition table and quit
Install an ext2 filesystem on the drive, using 2048-byte blocks:
mke2fs -b 2048 -m 0 /dev/sdb1
mount /dev/sdb1 /sd0
Now you need to fill the NAND Flash partition (/sd0 or /sd1). This can be done using the fill_random.sh script.
LBA
The LBA-NAND parts being tested have almost all failed. More information about testing this part is available at LBA NAND Testing.
Laptop | Serial # | Test | Total Written (GiB) | Comments |
---|---|---|---|---|
LBA1 | CSN74700D03 | Wear & Error | 4906 | |
LBA2 | CSN74702D30 | Wear & Error | 4882 | device failed |
LBA3 | SHF808021E4 | Wear & Error | 985 | Device failed (sample #1) |
LBA4 | CSN749013AF | Wear & Error | 2467 | Device Failed (sample #3) |
LBA5 | CSN75001985 | Wear & Error | 2111 | Device failed (sample #4) |
LBA6 | CSN74702A8E | Wear & Error | 2491 | Device failed (sample #5) |
LBA7 | CSN748040B6 | Wear & Error | 3022 | device failed |
LBA8 | CSN74900B3C | Wear & Error | 2329 | Device failed (sample #2) |
LBA13 | SHF808021E4 | Wear & Error | 1645 | 2 GiB part - device failed |
LBA14 | CSN749013AF | Wear & Error | 120 ? | 2 GiB part - device failed |
LBA15 | CSN75001985 | Wear & Error | 120 ? | 2 GiB part - device failed |
LBA16 | CSN74702A8E | Wear & Error | 2269 | 2 GiB part - device failed |
LBA18 | CSN74900B3C | Wear & Error | 120 ? | 2 GiB part - device failed |
SD Cards
All SD cards follow the same initialization and setup procedures. Two types are currently being tested, with more possibly added in the future.
SanDisk Extreme III
There are four XOs at 1CC running the tests on a SanDisk Extreme III 4-GB SD card. Build 8.2-760 was freshly installed on the laptops.
The current test rates are roughly 3.8 seconds per pass of test step 4.1 and 4.1 seconds per pass of test step 4.2. This translates roughly into a 17 MByte/s read rate and a 5.7 MByte/s write rate.
Laptop | Serial # | Test | Total Written (GiB) |
---|---|---|---|
SAN1 | SHF725004D1 | Wear & Error | 30,550 |
SAN2 | SHF7250048F | Wear & Error | 28,335 |
SAN3 | SHF80600A54 | Wear & Error | 28,730 |
SAN4 | CSN74902B22 | Wear & Error | 29,250 |
Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
Transcend Class 6
There are three XOs at 1CC running the tests on a Transcend class 6 4-GB SD card. Build 8.2-767 was freshly installed on the laptops.
The current test rates are roughly 3.9 seconds per pass of test step 4.1 and 5 seconds per pass of test step 4.2. This translates roughly into a 17 MByte/s read rate and a 5.7 MByte/s write rate.
Laptop | Serial # | Test | Total Written (GiB) |
---|---|---|---|
TR1 | ? | Wear & Error | 14505 |
TR2 | ? | Wear & Error | Failed after 15710 GB |
TR3 | ? | Wear & Error | 5026 |
Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
SD Card Setup Notes
If this is the first time, see the next section. If restarting a test, boot the laptop, with a USB stick containing the test.sh script, and type:
/usb/test.sh
A new logfile will automatically be created on the USB key (in /usb/logfile-xxxxx).
SD Card Initialization
Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!
Install a fresh copy of release 8.2-760 from a USB key using Open Firmware:
copy-nand u:\os760.img
Boot, and insert a USB device containing several scripts:
- fill_random.sh - an alternative script for filling the disk
- test.sh - a script for running the wear leveling and error checking test
Go to the Journal and unmount the SD card.
You will need to create a link from the mount point for the USB device to /usb:
ln -s /media/<USB_KEY_NAME> /usb
Repartition the storage device using:
fdisk /dev/mmcblk0
Type 'u' to see the information in sectors. Then list the partitions already on the card and remember the starting sector and ending sector for the factory installed partition.
Delete any existing partitions, and create a single partition, either using the same partition boundaries as the factory-installed partition or starting the first partition on a 4 MByte boundary (as used by production images).
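The corresponding fdisk dialogue follows the same pattern as the PATA setup above (a sketch; the exact sector numbers depend on the card):

u  # change the units to sectors
p  # note the start and end sectors of the factory installed partition
d  # delete any existing partition
n  # create a new primary partition 1, reusing the factory boundaries (or starting on a 4 MByte boundary)
w  # write out the new partition table and quit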
Then format the device using either (ext3):
mke2fs -b 2048 -j /dev/mmcblk0p1
or, for a simpler test (ext2):
mke2fs -b 2048 -m 0 /dev/mmcblk0p1
Now, mount the device as /nand, and start filling it with random data:
mkdir /nand
mount /dev/mmcblk0p1 /nand
/usb/fill_cp.sh
umount /nand
rmdir /nand
Reboot, and create a link from the mount point for the SD card to /nand:
ln -s /media/<SD_CARD_NAME> /nand
You are ready to start the testing, with:
/usb/test.sh