JFFS2

JFFS2 is the Journaling Flash File System version 2, a log-structured file system for flash memory devices. It supports NAND devices and hard links, and offers compression algorithms such as zlib, rubin, and rtime.

JFFS2 also includes a garbage collector that reclaims the space occupied by obsolete log entries.


Size of Files

Files in JFFS2 are compressed transparently. This removes some of the need for gzipping or otherwise compressing files to save space. For text files, compression ratios of 50% or more are possible. Note that du will not show the compressed sizes of files (nor will ls -l); both report sizes based on the uncompressed data.

More discussion is available in this email post.
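
As a rough way to preview what transparent compression might save for a particular file, you can compress its contents with zlib (the library JFFS2 uses) and compare sizes. This is only a ballpark sketch: as described under "Measuring disk usage" below, JFFS2 actually compresses in 4K chunks and adds per-chunk overhead, so the real on-flash size will differ.

 import os
 import sys
 import zlib

 path = sys.argv[1]
 with open(path, "rb") as f:
     data = f.read()
 logical = os.path.getsize(path)        # logical length, what ls -l reports
 compressed = len(zlib.compress(data))  # rough stand-in for JFFS2's zlib pass
 print("logical size:    %d bytes" % logical)
 if logical:
     print("zlib-compressed: %d bytes (%.0f%% of original)"
           % (compressed, 100.0 * compressed / logical))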

Algorithms

Zlib

A lossless data-compression library; this is what JFFS2 uses to compress each chunk of data (see "Measuring disk usage" below).

rubin

This simple algorithm is faster than zlib but not quite as effective.

rtime

This is a simple algorithm that often manages to squeeze an extra few bytes from data already compressed with gzip.
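
For illustration, here is a rough Python sketch of the idea behind rtime, modelled on the kernel's compr_rtime.c: each literal byte is followed by the length of the run that repeats the bytes which followed the previous occurrence of that same byte. This is a sketch of the idea, not the kernel code; on input with no such runs the output actually grows, which is why JFFS2 stores data uncompressed when compression doesn't help.

 def rtime_compress(data):
     """Sketch of the rtime scheme: output pairs of (literal byte, run length),
     where the run length counts how many following bytes repeat the bytes
     seen after the previous occurrence of that literal (at most 255)."""
     positions = [0] * 256   # position just past the last occurrence of each byte value
     out = bytearray()
     pos = 0
     while pos < len(data):
         value = data[pos]
         out.append(value)
         pos += 1
         backpos = positions[value]
         positions[value] = pos
         runlen = 0
         while (backpos < pos and pos < len(data)
                and data[pos] == data[backpos] and runlen < 255):
             pos += 1
             backpos += 1
             runlen += 1
         out.append(runlen)
     return bytes(out)

 # Example: rtime_compress(b"aaaaaaab") == b"a\x06b\x00" (8 bytes down to 4).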

Measuring disk usage

From http://lists.laptop.org/pipermail/library/2007-July/000070.html

JFFS2 compresses 4K chunks using zlib. So it's not just per-file compression; it compresses pieces of a file. It doesn't compress files where compression doesn't help. And there's a 68-byte overhead per compressed chunk, plus probably some fixed overhead per file.
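
A minimal estimator along those lines might look like the sketch below. This is not the jffs2size.py script linked at the end of this section, just an illustration of the scheme described above; the 4K chunk size and 68-byte per-chunk overhead come from the figures quoted here, and any fixed per-file overhead is ignored.

 import zlib

 CHUNK_SIZE = 4096       # JFFS2 compresses data in 4K chunks
 CHUNK_OVERHEAD = 68     # per-chunk overhead quoted above

 def estimate_jffs2_size(data):
     """Estimate the flash space the given file data would occupy:
     compress each 4K chunk with zlib, keep the chunk uncompressed when
     compression doesn't help, and add the per-chunk overhead."""
     total = 0
     for start in range(0, len(data), CHUNK_SIZE):
         chunk = data[start:start + CHUNK_SIZE]
         total += min(len(zlib.compress(chunk)), len(chunk)) + CHUNK_OVERHEAD
     return total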

Running "mkfs.jffs2 --root DIR | wc" gives a more accurate picture of 
the size of a directory.  Things might change as the compression 
parameters of JFFS2 are changed; specifically LZO compression is better 
than zlib, and there might be an attribute to disable compression on a 
particular file.
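
If mkfs.jffs2 is available, the same measurement is easy to script. A minimal sketch, assuming mkfs.jffs2 writes the image to standard output as in the command above:

 import subprocess

 def jffs2_image_size(directory):
     """Byte count of the JFFS2 image mkfs.jffs2 builds for a directory,
     i.e. the byte-count equivalent of "mkfs.jffs2 --root DIR | wc"."""
     proc = subprocess.run(["mkfs.jffs2", "--root", directory],
                           stdout=subprocess.PIPE, check=True)
     return len(proc.stdout)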

I wrote a little Python script to estimate the size and compare it to the actual size from mkfs.jffs2 (if you have that available on your system), and to the original size. Small files (500 bytes to 1.5K) compress to about 50% of their size; Python source code (with .pyc files) to about 40%; and the Gnome 2 User Guide (with lots of images) to about 65% (a 35% reduction).

Script at: http://svn.colorstudy.com/home/ianb/olpc/jffs2size.py


Links