Literacy Project/Data Processing Notes

Revision as of 16:20, 11 July 2012 by (talk) (Pushing the data into git: fix spelling of wolenchite)

Some notes on processing .zip files received from the field:


Prerequisites: an account on hydro, membership in the literacy group, and write access to /home/ethiopia:

$ ssh
cscott@hydro:/home/ethiopia$ groups
cscott literacy
cscott@hydro:/home/ethiopia$ cd /home/ethiopia/
cscott@hydro:/home/ethiopia$ touch do-i-have-write-access
cscott@hydro:/home/ethiopia$ rm do-i-have-write-access 
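The touch/rm probe above can also be done programmatically. This is a hypothetical convenience function, not a script from these notes:

```python
# Programmatic version of the manual touch/rm write-access probe above.
# (Hypothetical helper; not part of the original workflow.)
import os
import tempfile

def have_write_access(directory):
    """Return True if we can create and remove a file in `directory`."""
    try:
        fd, path = tempfile.mkstemp(prefix='do-i-have-write-access', dir=directory)
    except OSError:
        return False
    os.close(fd)
    os.remove(path)
    return True
```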

Archiving, unpacking, and checking the data

wolonchete/ and wonchi/ are rsynced to

cscott@hydro:/home/ethiopia$ ls wolonchete/ wonchi/
2012-03-30  results.txt            wolonchete_2012-05-27  wolonchete_2012-06-24
2012-04-06  wolonchete_2012-05-01  wolonchete_2012-06-03
2012-04-13  wolonchete_2012-05-08  wolonchete_2012-06-10
2012-04-20  wolonchete_2012-05-17  wolonchete_2012-06-17

2012-02-14  2012-03-30  results.txt        wonchi_2012-05-23  wonchi_2012-06-21
2012-02-20  2012-04-06  wonchi_2012-04-26  wonchi_2012-05-31
2012-02-28  2012-04-13  wonchi_2012-05-03  wonchi_2012-06-07
2012-03-08  2012-04-20  wonchi_2012-05-10  wonchi_2012-06-14

Note that some of these directory names have a site-name prefix. The unprefixed directories are named with guesstimated dates: we originally recorded the date the USB key arrived at OLPC, so those names are only our best guess at the collection date. We later fixed that and began recording the actual collection dates, so the prefixed directories are named with the true collection dates.

The archive/ directory contains zip files copied from the data collection keys. There should be two files per USB key (a wolonchete .zip and a wonchi .zip). BE SURE to verify that the contents of each .zip file match its name, so you don't inadvertently overwrite something.
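The name check can be automated. This sketch (a hypothetical helper, not a script from these notes) does the same thing as the manual "unzip -v | head" inspection below, assuming each zip is expected to contain a single top-level directory matching the zip's base name:

```python
# Hypothetical helper: confirm that the top-level directory inside a .zip
# matches the zip file's own base name, e.g. wolonchete_2012-07-01.zip
# should contain only wolonchete_2012-07-01/.
import os
import zipfile

def zip_matches_name(zip_path):
    base = os.path.basename(zip_path)
    if base.endswith('.zip'):
        base = base[:-4]
    with zipfile.ZipFile(zip_path) as zf:
        # every entry should live under a single top-level directory
        tops = {name.split('/', 1)[0] for name in zf.namelist()}
    return tops == {base}
```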

cscott@hydro:/home/ethiopia$ ls archive

These are the two most recent files received from Ethiopia:

cscott@hydro:/home/ethiopia$ ls ~rsmith/w*.zip
/home/rsmith/  /home/rsmith/
cscott@hydro:/home/ethiopia$ unzip -v /home/rsmith/ | head
Archive:  /home/rsmith/
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
       0  Stored        0   0% 2012-07-04 09:37 00000000  wolonchete_2012-07-01/
       0  Stored        0   0% 2012-07-04 09:27 00000000  wolonchete_2012-07-01/01/
       0  Stored        0   0% 1979-12-31 16:00 00000000  wolonchete_2012-07-01/01/
    8200  Defl:N     2543  69% 2012-06-24 09:06 5209af9a  wolonchete_2012-07-01/01/
    6152  Defl:N     1415  77% 2012-06-24 09:06 5d96a57f  wolonchete_2012-07-01/01/
    6152  Defl:N      811  87% 2012-06-24 09:06 541732f9  wolonchete_2012-07-01/01/
  114696  Defl:N    16867  85% 2012-06-24 09:37 a86306dd  wolonchete_2012-07-01/01/

Note that the directory name does match the .zip file name. (We should check the wonchi file, too.) Good.

Now let's unzip the files:

cscott@hydro:/home/ethiopia$ cd
cscott@hydro:~$ mkdir temp ; cd temp
cscott@hydro:~/temp$ unzip /home/rsmith/ 
cscott@hydro:~/temp$ unzip /home/rsmith/ 

We're going to check for duplicates:

cscott@hydro:~/temp$ cp ~rsmith/bin/rdfind ~/bin
cscott@hydro:~/temp$ rdfind wolonchete_2012-07-01/ wonchi_2012-06-28/
Now scanning "wolonchete_2012-07-01", found 13727 files.
Now scanning "wonchi_2012-06-28", found 7630 files.
Now have 21357 files in total.
Removed 0 files due to nonunique device and inode.
Now removing files with zero size from list...removed 36 files
Total size is 3198679889 bytes or 3 Gib
Now sorting on size:removed 6240 files due to unique sizes from list.15081 files left.
Now eliminating candidates based on first bytes:removed 3424 files from list.11657 files left.
Now eliminating candidates based on last bytes:removed 9294 files from list.2363 files left.
Now eliminating candidates based on md5 checksum:removed 2363 files from list.0 files left.
It seems like you have 0 files that are not unique
Totally, 0 b can be reduced.
Now making results file results.txt

OK, this gives us confidence that there aren't gross errors in the data, such as the wonchi and wolonchete zips being identical. We should (later) repeat the rdfind across the entire dataset as well, to ensure that we don't have stale data here.
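For intuition, rdfind's elimination passes above (drop zero-size files, drop unique sizes, then compare checksums) amount to something like the following simplified sketch. It skips rdfind's first-bytes/last-bytes passes and goes straight to md5; it is an illustration, not a replacement for the tool:

```python
# Simplified sketch of rdfind's duplicate detection: group by size,
# then md5 the remaining candidates. (Illustrative only.)
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths):
    by_size = defaultdict(list)
    for p in paths:
        size = os.path.getsize(p)
        if size > 0:                      # rdfind removes zero-size files first
            by_size[size].append(p)
    dups = []
    for group in by_size.values():
        if len(group) < 2:
            continue                      # unique size => file is unique
        by_md5 = defaultdict(list)
        for p in group:
            with open(p, 'rb') as f:
                by_md5[hashlib.md5(f.read()).hexdigest()].append(p)
        dups.extend(g for g in by_md5.values() if len(g) > 1)
    return dups
```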

OK, this looks good. Let's move them into the ethiopia directory:

cscott@hydro:~/temp$ mv /home/rsmith/ /home/rsmith/ /home/ethiopia/archive/
cscott@hydro:~/temp$ mv wolonchete_2012-07-01/ /home/ethiopia/wolonchete/
cscott@hydro:~/temp$ mv wonchi_2012-06-28/ /home/ethiopia/wonchi/

Normalize the permissions:

cscott@hydro:~/temp$ cd /home/ethiopia/archive/
cscott@hydro:/home/ethiopia/archive$ chmod 644 *
cscott@hydro:/home/ethiopia/archive$ cd ..
cscott@hydro:/home/ethiopia$ chown -R rsmith:literacy wolonchete/wolonchete_2012-07-01/ wonchi/wonchi_2012-06-28/
cscott@hydro:/home/ethiopia$ chmod -R g+sw wolonchete/wolonchete_2012-07-01/ wonchi/wonchi_2012-06-28/
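After normalizing, it can be worth confirming the chown/chmod actually took effect everywhere. This is a hypothetical follow-up check, not part of the original notes; it takes a numeric gid rather than resolving the literacy group name:

```python
# Hypothetical check: walk a tree and report any entry that is not
# group-writable or not owned by the expected gid, mirroring the
# chown -R / chmod -R g+sw normalization above.
import os
import stat

def not_normalized(root, gid):
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_gid != gid or not st.st_mode & stat.S_IWGRP:
                bad.append(path)
    return bad
```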

Now we're going to run rdfind across all of the data, in order to catch more errors:

cscott@hydro:/home/ethiopia$ rdfind wolonchete/ wonchi/

The output will be in a results.txt file.

Pushing the data into git

Make sure you have a copy of the git repo somewhere handy:

cscott@hydro:/home/ethiopia$ cd
cscott@hydro:~$ git clone ssh:// ethiopia-data
cscott@hydro:~$ git clone ~rsmith/data/.git ethiopia-data
cscott@hydro:~$ cd ethiopia-data/
cscott@hydro:~/ethiopia-data$ git remote add dev ssh://

Add the ethiopia-data/scripts directory to your path:

cscott@hydro:~/ethiopia-data$ export PATH=$PATH:$HOME/ethiopia-data/scripts

You might want to be inside screen or tmux for the following, since it takes a long time:

cscott@hydro:~/ethiopia-data$ cd /home/ethiopia/
cscott@hydro:/home/ethiopia$ wolonchete/wolonchete_2012-07-01
cscott@hydro:/home/ethiopia$ wonchi/wonchi_2012-06-28/
Mon Jul  9 17:46:31 EDT 2012: Finished processing wonchi/wonchi_2012-06-28/

You might see a few error messages which look like:

Unable to parse file: 00000034_0388920540e09217_d4c0820a-2ddc-4c19-8adf-872c835dd6b7_1340339595_mainPipeline.db

These are harmless; they are usually caused by too-small or empty .db files (check this).
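To follow up on the "(check this)" above, a sketch like this could list the .db files small enough to be suspect before processing. The 512-byte threshold is an assumption on my part (SQLite's minimum page size); the original notes only say "too-small/empty":

```python
# List .db files under `root` that are likely to trigger the
# "Unable to parse file" warning. The 512-byte cutoff is an assumed
# threshold, not something the notes specify.
import os

def suspicious_db_files(root, min_size=512):
    out = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.db'):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) < min_size:
                    out.append(path)
    return sorted(out)
```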

Running this command moves the original .db files into encrypted_db/ and creates merged.db and *.csv files:

cscott@hydro:/home/ethiopia$ ls wolonchete/wolonchete_2012-07-01/20/
BatteryProbe.csv       Matching.csv                  ScreenProbe.csv
FileMoverService.csv   NellBalloons5.csv             tinkerbook.csv
HardwareInfoProbe.csv  RecorderService.csv
LauncherApp.csv        RunningApplicationsProbe.csv

Now let's stage this data for transfer to git. (Note that, where the names differ, the first two arguments give the site and date as the directory will appear in git, and the last argument is the directory name under /home/ethiopia.)

cscott@hydro:/home/ethiopia$ wonchi 2012-06-28 wonchi/wonchi_2012-06-28/

This has put the new data in $HOME/forgit:

cscott@hydro:/home/ethiopia$ cd ~/forgit/
cscott@hydro:~/forgit$ ls
cscott@hydro:~/forgit$ ls wonchi/
cscott@hydro:~/forgit$ ls wonchi/2012-06-28/
01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20
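The staging script's name is elided in the transcript above, so this is only an illustration of the argument-to-layout mapping it evidently performs (site and date become forgit/<site>/<date>/, the third argument is the source under /home/ethiopia); the real script presumably does more than copy:

```python
# Illustrative sketch of the staging layout only; not the real (unnamed)
# staging script. Maps (site, date, srcdir) to $HOME/forgit/<site>/<date>/.
import os
import shutil

def stage_for_git(site, date, srcdir, forgit):
    dest = os.path.join(forgit, site, date)
    os.makedirs(os.path.join(forgit, site), exist_ok=True)
    shutil.copytree(srcdir, dest)
    return dest
```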

Now move this to the git dir and check it in:

cscott@hydro:~/forgit$ mv wonchi/2012-06-28 ~/ethiopia-data/wonchi/
cscott@hydro:~/forgit$ cd ~/ethiopia-data/
cscott@hydro:~/ethiopia-data$ git add wonchi/2012-06-28
cscott@hydro:~/ethiopia-data$ git commit wonchi/2012-06-28
cscott@hydro:~/ethiopia-data$ git push origin # or dev

Now repeat this for the other site (separate commits for each):

cscott@hydro:/home/ethiopia$ wolenchite 2012-07-01 wolonchete/wolonchete_2012-07-01/
cscott@hydro:/home/ethiopia$ cd ~/forgit/
cscott@hydro:~/forgit$ mv wolenchite/2012-07-01 ~/ethiopia-data/wolenchite/
cscott@hydro:~/forgit$ cd ~/ethiopia-data/
cscott@hydro:~/ethiopia-data$ git add wolenchite/2012-07-01
cscott@hydro:~/ethiopia-data$ git commit wolenchite/2012-07-01
cscott@hydro:~/ethiopia-data$ git push origin # or dev

Final steps

In theory we should then sync up owl and worldliteracy, then send mail to the literacy list to announce that there is new data. Something like:

rsync -a --progress wolonchete wonchi