JFFS2

From OLPC
Revision as of 01:19, 6 February 2009
JFFS2 is the Journaling Flash File System version 2. It is the file system used on the XO-1's built-in NAND flash (run the command cat /etc/fstab in a Terminal Activity to verify). OLPC provides OS images in this format as well as in Ext3, a file system format more common on disk-based computers.
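The check suggested above can be done from a Terminal Activity on the XO-1 (or any Linux system); a quick sketch, assuming standard paths:

```shell
# Show the filesystem table; on an XO-1 the root entry lists jffs2
cat /etc/fstab

# Alternatively, ask the kernel which filesystems are mounted right now
grep jffs2 /proc/mounts || echo "no jffs2 filesystems mounted"
```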

JFFS2 is a log-structured file system for flash memory devices. It supports NAND devices and hard links, and compresses data transparently using algorithms such as zlib, rubin, and rtime.

JFFS2 also has a garbage-collection algorithm that reclaims space occupied by obsolete data while avoiding unnecessary I/O.

This article is a stub. You can help the OLPC project by expanding it.

Size of files

Files in JFFS2 are compressed transparently, which removes much of the need to gzip or otherwise compress files to save space. For text files, compression ratios of 50% or more are possible. Note that du will not show the compressed sizes of files (nor will ls -l).
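Since JFFS2 uses zlib, the effect on text can be approximated from Python with the standard zlib module. A rough illustration only (the sample text is made up, and highly repetitive text compresses far better than the typical 50%), not JFFS2 itself:

```python
import zlib

# Some repetitive English-like text, as found in typical config or doc files
text = ("JFFS2 compresses file data transparently, so plain text "
        "often shrinks to half its size or less on the XO's NAND flash. ") * 50
raw = text.encode("utf-8")

packed = zlib.compress(raw)
ratio = len(packed) / len(raw)
print(f"{len(raw)} bytes -> {len(packed)} bytes ({ratio:.0%} of original)")
```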

More discussion is available in this email post.

Algorithms

Zlib

A lossless data-compression library.

rubin

This simple algorithm is faster than zlib but not quite as effective.

rtime

This is a simple algorithm that can often squeeze an extra few bytes from data already compressed with gzip.

Measuring disk usage

From http://lists.laptop.org/pipermail/library/2007-July/000070.html

JFFS2 compresses 4K chunks using zlib.  So it's not just per-file 
compression, it's compressing bits of a file.  It doesn't compress files 
where compression doesn't help.  And there's a 68 byte overhead per 
compressed chunk.  Plus probably some fixed overhead per file.

Running "mkfs.jffs2 --root DIR | wc" gives a more accurate picture of 
the size of a directory.  Things might change as the compression 
parameters of JFFS2 are changed; specifically LZO compression is better 
than zlib, and there might be an attribute to disable compression on a 
particular file.

I wrote a little Python script to estimate the size and compare it to 
the actual size from mkfs.jffs2 (if you have that available on your 
system), and to the original size.  For small files (500 bytes - 1.5K) 
it's compressed to 50% of the size, for Python source code 40% of the 
size (with .pyc files), the Gnome 2 User Guide (with lots of images) 
gets to about 65% (35% reduction).

Script at: http://svn.colorstudy.com/home/ianb/olpc/jffs2size.py

/usr/sbin/mkfs.jffs2 --root DIR | wc -c
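The estimate described in the post above can be sketched in Python: compress each 4K chunk with zlib, fall back to the raw chunk where compression does not help, and add roughly 68 bytes of per-chunk overhead. This is an illustrative re-creation of the idea, not the linked jffs2size.py:

```python
import zlib

CHUNK = 4096          # JFFS2 compresses data in 4K chunks
NODE_OVERHEAD = 68    # approximate per-chunk metadata overhead (from the post above)

def estimate_jffs2_size(data: bytes) -> int:
    """Rough estimate of how many bytes `data` would occupy on JFFS2."""
    total = 0
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        packed = zlib.compress(chunk)
        # JFFS2 stores the chunk uncompressed when compression doesn't help
        total += min(len(packed), len(chunk)) + NODE_OVERHEAD
    return total

text = b"some compressible text " * 1000
print(estimate_jffs2_size(text), "estimated bytes for", len(text), "input bytes")
```

Comparing this estimate against the actual output of mkfs.jffs2 --root DIR | wc -c gives a sense of how close the approximation is for a given directory.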

See also

Links

*[http://en.wikipedia.org/wiki/JFFS JFFS] at Wikipedia
*[http://sourceware.org/jffs2/jffs2-html/ Summary on JFFS]
*[http://developer.axis.com/old/software/jffs/ old Homepage of JFFS at AXIS]

[[Category:File systems]]