'''JFFS2''' is the Journalling Flash File System version 2, a log-structured file system for flash memory devices. It supports NAND devices and hard links, and offers compression algorithms such as zlib, rubin, and rtime.

JFFS2 also has a garbage-collection algorithm that eliminates unnecessary I/O.

{{stub}}

{{anchor|Size of Files}}
== Size of Files ==

Files in JFFS2 are compressed transparently. This removes much of the need to gzip or otherwise compress files to save space. For text files, compression rates of 50% or more are possible. <code>du</code> will not show the compressed sizes of files (nor will <code>ls -l</code>).

More discussion is available in [http://lists.laptop.org/pipermail/library/2007-July/000070.html this email post].
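
As a rough illustration of the point above, the following Python sketch (an estimate only; whole-file <code>zlib.compress</code> stands in for JFFS2's chunked compressor) compares a file's apparent size, which is what <code>ls -l</code> and <code>du</code> report, with a zlib-compressed estimate of what it might occupy on flash:

 import os
 import sys
 import zlib
 
 # Compare apparent size (what ls -l and du report) with a whole-file
 # zlib estimate -- a rough stand-in for what JFFS2 stores on flash.
 for path in sys.argv[1:]:
     apparent = os.path.getsize(path)
     with open(path, 'rb') as f:
         packed = len(zlib.compress(f.read()))
     ratio = 100 * packed / max(apparent, 1)
     print(f"{path}: {apparent} bytes apparent, ~{packed} compressed ({ratio:.0f}%)")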

{{anchor|Algorithms}}
== Algorithms ==

{{anchor|Zlib}}
=== Zlib ===
zlib is a lossless data-compression library:
* [http://www.zlib.net/manual.html Manual]
* [http://www.zlib.net/zlib_how.html Examples]
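
A minimal round trip with Python's built-in <code>zlib</code> module illustrates the lossless property:

 import zlib
 
 original = b"JFFS2 stores data in compressed chunks. " * 100
 packed = zlib.compress(original, 9)    # 9 = best compression
 restored = zlib.decompress(packed)
 
 assert restored == original            # lossless: the round trip is exact
 print(len(original), "->", len(packed), "bytes")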

=== rubin ===
This simple algorithm is faster than zlib but not quite as effective.

=== rtime ===
This is a simple algorithm that often manages to squeeze extra few
bytes from data already compressed with gzip
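
The kernel implements this in <code>fs/jffs2/compr_rtime.c</code>. The following Python sketch paraphrases the idea (an illustration, not the kernel code): for each input byte it emits the byte itself plus the length of the run that repeats whatever followed that byte's previous occurrence:

 def rtime_compress(data: bytes) -> bytes:
     # Sketch of the rtime scheme: remember where each byte value last
     # occurred, and run-length encode data that repeats what followed it.
     positions = [0] * 256
     out = bytearray()
     pos = 0
     while pos < len(data):
         value = data[pos]
         out.append(value)
         pos += 1
         backpos = positions[value]
         positions[value] = pos
         runlen = 0
         # Count how far the input repeats the bytes that followed the
         # previous occurrence of `value` (capped at 255 to fit one byte).
         while (backpos < pos and pos < len(data)
                and data[pos] == data[backpos] and runlen < 255):
             pos += 1
             backpos += 1
             runlen += 1
         out.append(runlen)
     return bytes(out)

In the worst case this doubles the input, which is why JFFS2 stores a chunk uncompressed whenever a compressor does not actually shrink it.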

{{anchor|Measuring disk usage}}
== Measuring disk usage ==
From the [http://lists.laptop.org/pipermail/library/2007-July/000070.html email post] cited above:

JFFS2 compresses 4K chunks using zlib. So it's not just per-file compression, it's compressing bits of a file. It doesn't compress files where compression doesn't help. And there's a 68 byte overhead per compressed chunk, plus probably some fixed overhead per file.

Running <code>mkfs.jffs2 --root DIR | wc</code> gives a more accurate picture of the size of a directory. Things might change as the compression parameters of JFFS2 are changed; specifically, LZO compression is better than zlib, and there might be an attribute to disable compression on a particular file.

I wrote a little Python script to estimate the size and compare it to the actual size from mkfs.jffs2 (if you have that available on your system), and to the original size. For small files (500 bytes - 1.5K) it's compressed to 50% of the size, for Python source code 40% of the size (with .pyc files), and the Gnome 2 User Guide (with lots of images) gets to about 65% (35% reduction).

Script at: http://svn.colorstudy.com/home/ianb/olpc/jffs2size.py

 /usr/sbin/mkfs.jffs2 --root DIR | wc -c
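
If the script above is no longer available, a minimal estimator in the same spirit can be sketched from the figures quoted in the email: 4K chunks compressed with zlib, stored uncompressed when compression doesn't help, plus 68 bytes of overhead per chunk (the unspecified per-file overhead is ignored here):

 import sys
 import zlib
 
 CHUNK = 4096     # JFFS2 compresses data in 4K chunks
 OVERHEAD = 68    # per-chunk overhead quoted in the email above
 
 def estimate_jffs2_size(path):
     """Rough on-flash size estimate for one file."""
     total = 0
     with open(path, 'rb') as f:
         while chunk := f.read(CHUNK):
             packed = zlib.compress(chunk)
             # JFFS2 stores the chunk uncompressed if zlib doesn't win.
             total += min(len(packed), len(chunk)) + OVERHEAD
     return total
 
 for name in sys.argv[1:]:
     print(name, estimate_jffs2_size(name))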

== Links ==
*[http://en.wikipedia.org/wiki/JFFS2 JFFS2] at Wikipedia
*[http://en.wikipedia.org/wiki/JFFS JFFS] at Wikipedia
*[http://sourceware.org/jffs2/jffs2-html/ Summary on JFFS]
*[http://developer.axis.com/software/jffs/ Homepage of JFFS at AXIS]
*[[Logfs]]

[[Category:OS]]
[[Category:Developers]]
[[Category:Software development]]
[[Category:Resources]]
