NAND Testing

From OLPC

Revision as of 01:44, 4 April 2009

Intro

The non-volatile storage subsystem of the XO has a limited design lifetime. It uses an ASIC (the CaFE) to provide an interface to a NAND Flash device. The CaFE is limited in Flash page size, making it unsuitable for future generations of NAND Flash devices. As part of the search for a replacement, OLPC is testing a variety of solutions to gauge their performance.

The goals of the storage subsystem testing are as follows:

  1. Evaluate the Flash wear leveling algorithms
  2. Evaluate the storage error rate of the devices
  3. Evaluate the relative access latency of the devices

Wear Leveling Algorithms Testing

A common flaw in early Flash wear leveling algorithms was only leveling across the remaining unused blocks. The test for this is to fill up most of the disk, then continue to write/erase repeatedly, forcing the write/erase cycles to use the small number of remaining free blocks.

Assume we fill all but 5 MB of the media (leaving 2.5K free blocks). We can continue to write at approx. 250 blocks (0.5 MB) per second. Assuming no wear leveling, this should result in a write failure in approx. 200 thousand seconds (100K cycle lifetime). Assuming naively simple wear leveling, a failure should occur in around one million seconds (100K cycle lifetime), or roughly eleven days.
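The naive wear-leveling arithmetic above can be checked with a few lines of shell (2 KB blocks and the figures from the preceding paragraph):

```shell
#!/bin/sh
# Time to failure under naively simple wear leveling: every free block
# must be cycled through its full W/E lifetime before the first failure.
free_blocks=2500      # 5 MB left free, 2 KB blocks
rate=250              # blocks written per second
lifetime=100000       # W/E cycles per block
writes=$((free_blocks * lifetime))
seconds=$((writes / rate))
echo "$writes block writes, failure after ~$seconds seconds"
```

One million seconds is about 11.6 days of continuous writing.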

Assuming some percent withheld

Managed NAND devices (such as solid-state drives, SD cards, and newer single chip NAND devices) typically set aside between 4 and 8% of the media for wear leveling and bad block replacement. This complicates the test somewhat, but is ameliorated by the reduced W/E cycle lifetime expected with newer NAND Flash devices.

Assume we fill all but 1 MiB of the media (6% of 4 GiB is roughly 250 MiB, leaving up to 251 MiB/125 KBlocks actually free). Assume a maximum write rate of 500 blocks per second. Assuming naively simple wear leveling, a failure should occur after 5K write cycles of all free blocks, or roughly 625M block writes. This will require around 14 days of continuous writing to trigger.

LBA Test as Implemented

In the case of the LBA-NAND parts, we fill all but 32 MB of the media (leaving 16K blocks). Assume the device withholds 6% of the blocks for wear leveling and bad block replacement (120K blocks). Assuming naive wear leveling, this should result in a write failure in approx. 136K x 5K or 680M block writes (1.4 TB written). We can continue to write at approx. 350 blocks (0.7 MB) per second, giving a time to failure of 22 days.

The current test program only writes in step 4.2, giving 20 MB/45 sec., or 220 blocks per second. This gives a time to failure of 35 days. But at the same time it is performing storage error rate testing at 42K blocks per 45 sec. cycle --- checking the entire 4 GiB device 40 times a day.

JFFS2 Test as Implemented

In the case of the XOs with raw NAND and a JFFS2 management layer, we fill all but around 32 MB of the media (leaving 16K blocks). While the device doesn't withhold any blocks for wear leveling, we expect better than naive wear leveling from JFFS2. Given a 1 GiB device (512K blocks) and a W/E lifetime of 100K cycles, we might expect that it will take 50 billion block write cycles (100 TB of data written) before we start seeing significant errors.

The current test writes 10K blocks/35 seconds, giving us an expected time of 5.5 years of testing before we see failure due to write fatigue.

Storage Error Rate Testing

There is concern that the error rate of MLC devices is not acceptable for use as the primary storage for Linux. As all of the devices being tested are MLC parts, we have an opportunity to evaluate the error rate of the devices.

If we assume that read errors dominate, we can test about 780 passes of an entire device per machine per week. This can be done in conjunction with other tests (cf. wear leveling), reducing the coverage/speed but not affecting the results.

Unfortunately, NAND manufacturers indicate that write disturbances are a larger problem than read errors, so error testing can't be this simple. I am proposing to verify the consistency of data stored on the vast majority of the media, while writing to the remainder of the media. Note that since there is at least one level of indirection between the test program and the media, it is difficult to simplify the consistency check to blocks possibly affected by a write disturb error.

Access Latency Testing

If the wear leveling algorithm is actually functioning, the latency required to terminate a write may vary widely. Information about this timing may be gathered as part of other tests.

Unfortunately, obtaining realistic timing requires that the disk be realistically fragmented...

Error Rate Assumptions

The stated write/erase cycle lifetime for the devices we are currently using in the XO is 100K cycles -- OLPC has not verified these claims.

The write/erase cycle lifetime for newer storage devices varies. Toshiba claims that its SLC parts have a 10K cycle lifetime, and its MLC parts have a 5K cycle lifetime.

Timing assumptions

Time estimates in this document are made using the following information, obtained by Mitch Bradley:

JFFS2 reads at between 5.6 and 12 MB/sec (data-dependent, note c), using 100% of the CPU (real time == system time).

Current tests show similar bandwidth (with similar large variance!) --wad

LBA-NAND reads at 5.2 MB/sec, using <1% CPU (real time >> system time).

Current tests show closer to 4 MB/s --wad

JFFS2 writes at 760 kB/sec, using 100% CPU.

Current tests show closer to 0.9 MB/s, but again with large variance. --wad

LBA-NAND writes at 1.25 MB/sec, using <2% CPU.

Actual measurements seem closer to 0.7 MB/sec... --wad

Test Plan

The best laid schemes o' mice an' men... --Robert Burns

Samples under Test

These are the storage media and access methods currently being tested:

  • JFFS2: Five laptops using existing raw NAND plus JFFS2 software Flash translation
  • LBA-NAND: We had eight laptops with a 4GB Toshiba LBA-NAND installed. Three remain. We are restarting testing on several laptops with 2GB LBA-NAND.
  • SD cards: Four laptops are testing SanDisk Extreme III (Class 6) SD cards. Another three laptops are testing Transcend (Class 6) 4GB cards.
  • UBIFS: the successor to JFFS2. Three laptops started testing on Nov. 25.

We are actively working to get additional devices into the mix, such as:

  • eMMC NAND: Basically an MMC card without the wrapper, available from multiple vendors.
  • IDE/NAND (SSD) controllers: Available cheaply from at least two companies. Phison makes the SSD controller used in both the Acer Aspire and the Asus EEE.
  • PCIe/NAND (SSD) controllers: two evaluation units from Marvell are on their way

Wear & Error Test

This will be a combined test, exercising the wear leveling mechanism of the storage device while also regularly checking for errors in accessing stored data.

The plan is:

  1. Boot and run the test software from a separate storage device, so that the device under test is not otherwise in use
  2. Format as much of the media as possible as a single ext2 partition. The JFFS2 test case will use a JFFS2 partition, and the UBIFS test case will use a UBIFS partition.
  3. Create test data filling up all but 32MB of the partition. This test data will be pseudo-random in nature (white noise), and will be duplicated on the storage device. It has been suggested to instead record signatures of the test data. Since the data files are large (multiple media blocks in size), there is little danger of dual-failure (in both files) causing a comparison to give a false negative.
  4. Start a test script which continuously alternates between:
    1. Reading a file and its duplicate from the stored data, reporting any differences.
    2. Reading a file and its duplicate from the "hot" data, reporting any differences, then overwriting both files with new data.

The test software should log errors onto a storage device other than the device under test.
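The two halves of step 4 can be sketched as a pair of Bourne shell helpers. This is an illustrative sketch only, not the real test.sh; the function names, file arguments, and log path are assumptions:

```shell
#!/bin/sh
# Step 4.1/4.2 primitives (illustrative): verify a file against its
# duplicate, and rewrite a "hot" pair with fresh pseudo-random data.

check_pair() {   # $1 = file, $2 = duplicate, $3 = logfile
    if cmp -s "$1" "$2"; then
        return 0
    fi
    echo "MISMATCH: $1 vs $2 ($(date))" >> "$3"
    return 1
}

rewrite_pair() { # $1 = file, $2 = duplicate, $3 = size in MB
    dd if=/dev/urandom of="$1" bs=1M count="$3" 2>/dev/null
    cp "$1" "$2"
}
```

A pass would run check_pair over the stored data set (step 4.1), then flush the page cache, check, and rewrite_pair the hot files (step 4.2), with the log kept on a device other than the one under test.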

Step 4.1 is walking through a data set too large to fit into the kernel page cache. Naively done, however, Step 4.2 isn't effective if the kernel page cache is working, as the files being read were recently written to the storage media. The fix (available in newer kernels) is to flush the disk cache before comparing the files (see http://linux-mm.org/Drop_Caches):

echo 1 > /proc/sys/vm/drop_caches

This was properly added to version 1.2 of the test program.

Testing

These are notes detailing the implementation of the testing on the different platforms.

Common

Some elements of the testing, such as the test scripts and the log post-processing, are common to all test platforms.

Test Scripts

In order to minimize the runtime support needed for the testing, both the test and initialization scripts are written in Bourne shell. Sources are available from the OLPC git repository.

The following scripts are provided:

  • test.sh - the script which actually performs the test
  • parselogs.py - the script which takes one or more logs and produces statistics
  • fill.sh - a script for filling a partition with matched sets of random data
  • fill_random.sh - another script for generating the random data
  • fill_jffs.sh - the script actually used to fill the JFFS2 devices
  • fill_cp.sh - the script actually used to fill the LBA-NAND devices

The following are necessary only on LBA-NAND test laptops:

  • boot - a directory containing the OS used for the LBA-NAND tests
  • setup.sh - a script for setting up LBA-NAND laptops (deprecated, as it is now /etc/init.d/rc.usbnandtest in the boot ramdisk)

The following are necessary only on UBIFS test laptops:

  • data.img - the data partition of a UBIFS laptop
  • nand.img - the nand partition of a UBIFS laptop

Logging

In most cases, logging is done to an external USB device. In some systems under test (JFFS2 and UBIFS laptops), this is the only storage media other than the device under test. It was used instead of logging the serial console of a laptop due to previous experience trying to collect and maintain serial logs from tens of machines --- the USB bus or serial/USB adapters would occasionally hiccup for unknown reasons and cause the logging to halt.

Logs may be processed using the parselogs.py script. It either takes a list of log files as arguments or processes all log files in the current directory if none are specified. It outputs statistical and error information aggregated from all log files processed.

Logs are being aggregated at http://dev.laptop.org/~wad/nand/. A summary of each machine's status is shown, with a link to individual log files. A summary aggregating all logs for a device type is also available.

Control

Coming soon, the destruction of a SATA drive through continuous writing...

JFFS2

There are five XOs at 1CC running the tests on top of JFFS2. Build 8.2-760 was freshly installed on the laptops using Open Firmware's copy-nand command.

A problem with the existing driver appeared immediately as all five crashed overnight on 9/22 (about ten hours into the testing, according to the logs), three definitely with the same kernel error (#8615), one with a dark screen, and one with a white screen (not hardware). Three (JFFS1, JFFS2, and JFFS4) were restarted with a console serial port attached to log error messages. Further information is on Trac ticket #8615. These crashes have continued with almost daily frequency.

A related problem was that JFFS2 might start a test but run out of disk space for further writes after a couple of hundred write/erase cycles. On these machines, I have gradually been deleting read data as free disk space decreases (consumed by fragmentation?).

A kernel patch for #8615 was kindly provided by David Woodhouse, and a kernel RPM by Deepak Saxena. This was applied to all machines, after which these problems have not been seen.

The current test rates are roughly 10 sec/test step 4.1, and 25 sec/test step 4.2. This translates into a 6.5 MByte/s read rate, and a 0.9 MByte/s write rate.

Laptop   Serial #      Test           Total Written
JFFS1    CSN748003DB   Wear & Error   3780
JFFS2    CSN74805706   Wear & Error   3788
JFFS3    SHF80702F53   Wear & Error   3844
JFFS4    SHF7250022F   Wear & Error   3758
JFFS5    SHF725004D4   Wear & Error   6404

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.
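Since each pass writes a fixed 0.02 GiB, a Total Written figure converts directly into a pass count (JFFS1's figure from the table above is used as the example):

```shell
#!/bin/sh
# 0.02 GiB per pass means 50 passes per GiB written.
total_gib=3780               # e.g. laptop JFFS1, from the table above
passes=$((total_gib * 50))
echo "$passes passes"
```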

A sixth laptop (JFFS6) was briefly used to verify that the kernel bug (#8615) was also present in earlier Sugar releases (such as build 656). Once verified, this laptop was withdrawn from testing.

JFFS2 Setup Notes

If this is the first time, see the next section. If restarting a test, boot the laptop, and insert a USB key containing the test.sh script. Then simply type:

/usb/test.sh

A new logfile will automatically be created on the USB key (in /usb/logfile-xxxxx).

JFFS2 Initialization

Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!

Install a fresh copy of release 8.2-760 from a USB key using Open Firmware:

copy-nand u:\os760.img

Boot, and insert a USB key containing the patched kernel RPM. Install it using:

rpm -ivh kernel-2.6.25-20081025.1.olpc.fix_gc_race.i586.rpm 
cp -a /boot/* /versions/boot/current/boot/

Reboot, and insert a USB key containing several scripts:

  • fill_jffs.sh - a script for filling the NAND with random data
  • fill_random.sh - an alternative script for filling the disk
  • random - a directory containing over 400 MB of random data, in 32 MiB files (optional)
  • test.sh - a script for running the wear leveling and error checking test

If using an earlier OLPC build (say 656), you will have to install the cmp utility:

yum install diffutils

Create a link from the mount point for the USB key to /usb:

ln -s /media/<USB_KEY_NAME> /usb

Now you need to fill the NAND Flash partition ("/" on the stock XO build). This can be done using the same method used for LBA-NAND devices. If the random directory is provided on the USB key, type:

/usb/fill_jffs.sh

An alternative, slower approach to filling the NAND with data, which doesn't require pre-computed random data on the USB key, is to manually:

mkdir /setA
cd /setA
/usb/fill_random.sh 11
cp -r /setA /setB
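For reference, a generator in the style of fill_random.sh might look like the following. This is a hypothetical sketch (the real script in the OLPC git repository may differ); it writes the requested number of 32 MB pseudo-random files into the current directory:

```shell
#!/bin/sh
# Hypothetical fill_random.sh-style generator: $1 = number of 32 MB
# random files to create in the current directory (default 1).
count=${1:-1}
i=0
while [ "$i" -lt "$count" ]; do
    dd if=/dev/urandom of="rand-$i" bs=1M count=32 2>/dev/null
    i=$((i + 1))
done
```

Invoked as above with an argument of 11, such a script would produce roughly 352 MB of data, which is then duplicated by the cp into /setB.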

UBIFS

There are three laptops running these tests on a UBIFS filesystem. Additional information on how the UBIFS image was created, along with some of Deepak's notes, is available.

Laptop   Serial #      Test           Total Written
UBI1     CSN749030BD   Wear & Error   1900
UBI2     CSN7440003E   Wear & Error   2950
UBI3     SHF73300081   Wear & Error   3870

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.

UBIFS Setup Notes

If this is the first time, see the next section. If restarting a test, boot the laptop, and insert a USB key containing the test.sh script. Then simply type:

/usb/test.sh

UBIFS Initialization

Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!

Install a fresh copy of firmware q2e22 from a USB key using Open Firmware (OFW):

flash u:\q2e22.rom

Boot the laptop, escaping into OFW, and insert a USB stick containing the data.img and nand.img files listed above.

At the OFW prompt, type:

dev nand : write-blocks write-pages ; dend

You will likely get a message like write-blocks isn't unique. You can ignore this message.

update-nand u:\data.img

At this point OFW will erase the flash and copy the contents of the image file to flash. When complete, reboot the system.

Now insert a USB key containing

  • fill_jffs.sh - a script for filling the NAND with random data
  • random - a directory containing over 1 GiB of random data, in 32 MiB files (optional)
  • test.sh - a script for running the wear leveling and error checking test

Create a link from the mount point for the USB key to /usb:

ln -s /media/<USB_KEY_NAME> /usb

Now you need to fill the NAND Flash partition ("/" on the stock XO build). This can be done using the same method used for LBA-NAND devices. If the random directory is provided on the USB key, type:

/usb/fill_jffs.sh

PATA Flash Controllers

There are currently six PATA Flash controllers undergoing tests, with more samples requested. Each drive was carefully partitioned and a Linux ext2 filesystem placed on it. The system interface to the Flash controller is a PATA driver.

The tests are run from standard Linux desktop machines, each testing one or two systems.

Test Unit   Host          Device                      Test           Total Written
SMI1        marvell sd0   SM223 + 1 Hynix MLC         Wear & Error   2036
SMI2        smi sd0       SM223 + 1 Hynix MLC         Wear & Error   failed at 1761 GiB
SMI3        marvell sd1   SM2231 + 1 Hynix MLC        Wear & Error   2129
PHI1        smi sd0       Phison 3006 + 1 Hynix MLC   Wear & Error   1440
PHI2        phison sd0    Phison 3006 + 1 Hynix MLC   Wear & Error   864
PHI3        amd sd0       Phison 3006 + 1 Hynix MLC   Wear & Error   573
PHI4        amd sd1       Phison 3006 + 1 Hynix MLC   Wear & Error   573

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.

PATA Flash Setup Notes

If this is the first time, see the next section. If restarting a test, mount the test device, change directories to the device under test, then run the test.sh script:

sudo fsck /dev/sda1         #  optional, if not shutdown cleanly
sudo mount /dev/sda1 /sd0   #  use appropriate device
cd /sd0
ls          # check state of directory
sudo ~/test.sh

On most modern Linux systems, the PATA drives are assigned /dev/sd[abcdef] device names and enumerated before any SATA drives. The test partition is always the first (e.g. /dev/sda1, /dev/sdb1, etc...)

PATA Flash Initialization

Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!

Create the marker file /ssd (touch /ssd) when setting up a new test host --- the test script uses it to decide where to place the logs.

Partition a new drive using fdisk, placing the first partition at sector 512 (byte offset of 256K).

Install an ext2 filesystem on the drive, using 2048 byte blocks:

mke2fs -b 2048 -m 0 /dev/sdb1
mount /dev/sdb1 /sd0

Now you need to fill the NAND Flash partition ("/sd0 or /sd1"). This can be done using the fill_random.sh script.

LBA

The tests started with eight XOs at 1CC modified with a 4GB LBA-NAND part. Mitch Bradley prepared a kernel that has the drivers for the LBA-NAND connected through the CaFE chip. He also has a BusyBox initrd which supports partitioning, ext2 formatting, and testing of the parts. Testing started 9/22/08.

By 12/01/08, five of the 4GB devices had failed. Three had failed catastrophically, and could not be accessed (or reformatted) anymore. All devices lost their MBR (located in the first logical block on the device) upon failure. This failure might have been exacerbated by a flaw in the original initialization process which placed the MBR in the same erase block as the beginning of the ext2 file system. These five devices were returned to Toshiba for failure analysis, and replaced with new 2GB LBA-NAND devices. Testing was resumed on the new devices.

The current test rates are 14-16 sec/test step 4.1 and 13-16 sec/test step 4.2. This translates into roughly a 4 MByte/s read rate and a 0.7 MByte/s write rate (this version of the test wrote 10 MB in step 4.2, and did no read testing in that step). This is verified by later tests with a 34 sec. mean time for step 4.2, when both reading back 20 MiB of data and writing 20 MiB of data.

Laptop   Serial #      Test           Total Written   Comments
LBA1     CSN74700D03   Wear & Error   4906
LBA2     CSN74702D30   Wear & Error   4882            device failed
LBA3     SHF808021E4   Wear & Error   985             device failed (sample #1)
LBA4     CSN749013AF   Wear & Error   2467            device failed (sample #3)
LBA5     CSN75001985   Wear & Error   2111            device failed (sample #4)
LBA6     CSN74702A8E   Wear & Error   2491            device failed (sample #5)
LBA7     CSN748040B6   Wear & Error   3022            device failed
LBA8     CSN74900B3C   Wear & Error   2329            device failed (sample #2)
LBA13    SHF808021E4   Wear & Error   1645            2 GiB part - device failed
LBA14    CSN749013AF   Wear & Error   120 ?           2 GiB part - device failed
LBA15    CSN75001985   Wear & Error   120 ?           2 GiB part - device failed
LBA16    CSN74702A8E   Wear & Error   2269            2 GiB part - device failed
LBA18    CSN74900B3C   Wear & Error   120 ?           2 GiB part - device failed

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.

LBA-NAND Setup Notes

Boot with a USB stick containing two directories:

  • boot
  • test.sh - a script for running the wear leveling and error checking test

After the laptop boots, type the following to mount the USB disk for the first time:

mount /usb

At this point, some dangerous sounding error messages will result. Ignore them. If this is the first time, see the next section. If restarting a test, now simply type:

/usb/test.sh

A new logfile will automatically be created on the USB key (in /usb).

fsck

Occasionally, the ext2 filesystem on the NAND device becomes corrupted. You can repair it using:

umount /nand
/sbin/fsck.ext2 /dev/lba1
mount /dev/lba1 /nand

LBA-NAND Initialization

Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test! Do not run fill_cp.sh unless you are starting the tests for the first time!

The /etc/init.d/rc.usbnandtest script attempts to mount the storage device at boot time. Unmount it with:

umount /nand

To partition the disk, use fdisk:

fdisk /dev/lba

Delete any existing partitions, and create a single partition using all available space. Tell fdisk to start the first partition at sector number 512 - that aligns on a 256K boundary which is a multiple of the erase block size for this generation and probably the next.

With the version of fdisk in busybox, use the 'u' command to switch to sector units - it should respond with "Changing display/entry units to sectors". Then when you add the first partition, tell it 512 for the start. It defaults to cylinder units, which is a problem because the goofy old DOS conventions for the maximum values for SPT, tracks, and heads combine in strange ways to give non-power-of-two cylinder sizes.

Then proceed with creating a partition. Hit this series of keys: n <CR> p <CR> 1 <CR> 512 <CR> <CR> w <CR>.

After partitioning, dump the MBR and verify that the partition table is right. You should see something like this:

dd if=/dev/lba bs=32K count=1 | od -b
00000700: xxx xxx 203 xxx xxx xxx 000 002 000 000 yy yy yy yy 00 00

Look for the "000 002 000 000" - that's 512 (0x200) stored little-endian, i.e. a starting sector of 512. (The octal 203 (hex 83) is the Linux partition type code, used here for ext2.)
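Rather than eyeballing the dump, the starting sector can be extracted directly. The helper below is an illustration (not part of the test scripts); it reads the 4-byte little-endian start-LBA field of the first partition entry, which sits at byte offset 454 (446 for the start of the partition table, plus 8 into the first entry):

```shell
#!/bin/sh
# Print the first partition's starting sector from an MBR image or
# device; for the layout above it should print 512.
start_lba() {
    od -An -tu1 -j 454 -N 4 "$1" |
        awk '{ print $1 + $2*256 + $3*65536 + $4*16777216 }'
}
```

For example, `start_lba /dev/lba` (or a file produced by the dd command above).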

Now, use mke2fs to place a filesystem on the partition, forcing a 2K block size and no space reserved for root:

mke2fs -m 0 -b 2048 /dev/lba1

Now you can mount it and start filling it:

mount /dev/lba1 /nand
/usb/fill_cp.sh

As the kernel provided doesn't include support for /dev/urandom, the method used was to provide the random data on a USB key; fill_cp.sh just copies it from /usb/random. The USB key was previously initialized with sufficient random data using the fill_random.sh command, which takes as an argument the number of 32 MB random data files to generate (65 files is sufficient for 4 GiB devices):

mkdir /Volumes/USBKEY/random
cd /Volumes/USBKEY/random
~/NANDtest/fill_random.sh 65

Original LBA-NAND Initialization

The first time this test was conducted, the devices were initialized using slightly simpler instructions, which resulted in the MBR being in the same erase block as the start of the ext2 filesystem. This resulted in a failure mode where the device formatting was lost.

The differences from the above initialization instructions were the command used to repartition the storage device:

fdisk /dev/lba

Delete any existing partitions, and create a single partition using all available space. Hit this series of keys: d <CR> n <CR> p <CR> 1 <CR> <CR> <CR> w <CR>.

And the command used to format the device didn't force a small block size:

mke2fs -m 0 /dev/lba1

SD Cards

All SD cards follow the same initialization and setup procedures. Two types are currently being tested, with more possibly added in the future.

Sandisk Extreme III

There are four XOs at 1CC running the tests on a SanDisk Extreme III 4GB SD card. Build 8.2-760 was freshly installed on the laptops.

The current test rates are roughly 3.8 sec/test step 4.1, and 4.1 sec/test step 4.2. This translates roughly into a 17 MByte/s read rate, and a 5.7 MByte/s write rate.

Laptop   Serial #      Test           Total Written
SAN1     SHF725004D1   Wear & Error   30,550
SAN2     SHF7250048F   Wear & Error   28,335
SAN3     SHF80600A54   Wear & Error   28,730
SAN4     CSN74902B22   Wear & Error   29,250

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.

Transcend Class 6

There are three XOs at 1CC running the tests on a Transcend class 6 4GB SD card. Build 8.2-767 was freshly installed on the laptops.

The current test rates are roughly 3.9 sec/test step 4.1, and 5 sec/test step 4.2. This translates roughly into a 17 MByte/s read rate, and a 5.7 MByte/s write rate.

Laptop   Serial #   Test           Total Written
TR1      ?          Wear & Error   14505
TR2      ?          Wear & Error   15710
TR3      ?          Wear & Error   5026

Total Written refers to the total amount of data written to date to the storage device in an attempt to test wear levelling and W/E lifetime, in GiB. For the current tests, each pass is 0.02 GiB.

SD Card Setup Notes

If this is the first time, see the next section. If restarting a test, boot the laptop, with a USB stick containing the test.sh script, and type:

/usb/test.sh

A new logfile will automatically be created on the USB key (in /usb/logfile-xxxxx).

SD Card Initialization

Note: For these tests to have a valid effect, the storage device should not be re-formatted or re-initialized for the duration of the wear leveling test!

Install a fresh copy of release 8.2-760 from a USB key using Open Firmware:

copy-nand u:\os760.img

Boot, and insert a USB key containing several scripts:

  • fill_jffs.sh - a script for filling the NAND with random data
  • fill_random.sh - an alternative script for filling the disk
  • test.sh - a script for running the wear leveling and error checking test
  • random - a directory containing over 400 MB of random data, in 32 MiB files (only needed for initialization, and optional even then)

Go to the Journal and unmount the SD card.

You will need to create a link from the mount point for the USB key to /usb:

ln -s /media/<USB_KEY_NAME> /usb

Repartition the storage device using:

fdisk /dev/mmcblk0

Delete any existing partitions, and create a single partition using all available space. Hit this series of keys: d <CR> n <CR> p <CR> 1 <CR> <CR> <CR> w <CR>. If you get an error while re-reading the device partition table, reboot at this point.

Then format the device using:

mke2fs -m 0 /dev/mmcblk0p1

Now, mount the device as /nand, and start filling it with random data:

mkdir /nand
mount /dev/mmcblk0p1 /nand
/usb/fill_cp.sh
umount /nand
rmdir /nand

Reboot, and create a link from the mount point for the SD card to /nand:

ln -s /media/<SD_CARD_NAME> /nand

You are ready to start the testing, with:

/usb/test.sh