WikiBrowse
[[Image:Olpc_logo.jpg|center]]
<center><span style="font-size:200%">Wiki server</span></center>

A wiki webserver activity is being developed as a self-contained, browsable offline wikireader. Other efforts to generate and browse slices of online wikis are being discussed in the [[Wikislice]] project.


== Introduction ==
The Wikipedia on an Iphone (WOAI) project by Patrick Collison makes it possible to have a working, usable mediawiki (read: wikipedia) dump in a very small space (read: the XO's flash drive).

=== How it works ===
The wikipedia-iphone project's goal is to serve wikipedia content out of a compressed copy of the XML dump file after indexing it. The architecture is that there are C functions to pull out articles, and several interfaces to those C functions: the main interface is the iPhone app, but there's also a web server (written in Ruby with Mongrel) that runs locally and serves up pages from the compressed archive.
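
A Python port needs some way to call those C functions. One minimal option is a ctypes binding like the sketch below; the library path, function name, and signature here are hypothetical stand-ins for whatever c/bzipreader.c actually exports.

<pre>
import ctypes

# Hypothetical binding to the project's C code; the real symbol
# names and signatures live in c/bzipreader.c.
_bzipreader = ctypes.CDLL('./libbzipreader.so')
_bzipreader.load_block.restype = ctypes.c_char_p
_bzipreader.load_block.argtypes = [ctypes.c_char_p, ctypes.c_long]

def load_block(dump_path, offset):
    """Decompress and return the block starting at ``offset``."""
    return _bzipreader.load_block(dump_path, offset)
</pre>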


[[User:Wadeb|Wadeb]] has helped port the original project's code to Python. [[user:Cjb|Cjb]] and Wade are working on fixing some of the unfinished aspects of the original, particularly:
* Template rendering
* Redlink/greenlink/bluelink rendering
* Image thumbnail retrieval
* Automated subselection (currently: via a minimum # of inbound links; see the sketch below)
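
The automated subselection boils down to counting inbound links and keeping articles over a threshold. A minimal sketch, assuming links have already been extracted from the dump as (source, target) title pairs (an illustrative format, not the project's actual one):

<pre>
from collections import defaultdict

def select_articles(links, minimum=10):
    """Keep articles with at least ``minimum`` inbound links."""
    inbound = defaultdict(int)
    for source, target in links:
        inbound[target] += 1
    return set(title for title, count in inbound.items()
               if count >= minimum)
</pre>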


A Ruby web server is working on a local machine (temporary: http://pullcord.laptop.org:9090/) serving up the whole Spanish wikipedia text (the compressed .bz2 file that pages are served out of is 400M, and the index is 10M); a subset of 35,000 articles can be stored in 80MB on an XO. "Green" links can indicate articles which exist (on a local server, or on the internet) but not in the local dump, while red and blue links continue to indicate nonexistent and locally existing pages. (Greenlinks have yet to be implemented. --[[User:Sj|Sj]]&nbsp;[[User talk:Sj|<font style="color:#f70; font-size:70%">talk</font>]] 23:01, 7 May 2008 (EDT))
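
Greenlinks have yet to be implemented, but the three-way coloring described above amounts to something like the following sketch (the function and argument names are illustrative, not from the actual code):

<pre>
def link_class(title, local_titles, available_elsewhere):
    """Pick a CSS class for a wiki link.

    local_titles: set of article titles in the local dump.
    available_elsewhere: hypothetical check against a school
    server or the internet.
    """
    if title in local_titles:
        return 'bluelink'   # exists in the local dump
    if available_elsewhere(title):
        return 'greenlink'  # exists, but only off-device
    return 'redlink'        # does not exist anywhere we can see
</pre>
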
The mediawiki data dump is stored as a [http://en.wikipedia.org/wiki/Bz2 .bz2 file], which is made of smaller compressed blocks (each containing multiple articles). The WOAI code, among other things, goes through and makes an index of which block each article is in. That way, when you want to read an article, your computer only uncompresses the tiny block it's in - the rest of the huge mediawiki dump stays compressed. This means that:
* it's really fast, since you're working with tiny compressed bundles, and
* it's really small, since you're only decompressing one tiny bundle at a time.
For example, the compressed text for all of Spanish Wikipedia is 400M, with a block size of 400KB.
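
In other words, an article lookup is one seek plus one small decompression. A minimal sketch of the reading side, assuming the index maps each title to its block's byte offset and length, and that each block is stored as an independently decompressible bz2 stream (the real index format is defined by the WOAI code):

<pre>
import bz2

def read_article_block(dump_path, index, title):
    """Decompress only the small block containing ``title``."""
    offset, length = index[title]   # from the prebuilt index
    f = open(dump_path, 'rb')
    f.seek(offset)                  # skip over everything else
    compressed = f.read(length)     # read just this one block
    f.close()
    # The rest of the dump stays compressed on disk.
    return bz2.decompress(compressed)
</pre>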


=== What we're doing ===


We are working to make this into an XO activity.


The first step was to port the code from Ruby with Mongrel to Python with BaseHTTPServer, so it runs natively on the XO (a minimal sketch of such a server follows the list below) - details below in [[#Help wanted]]. After that, we'd love to have a few more things done, like...


* Wrapping this as a Sugar activity (code)
* Some article selection. Since it serves files out of the .xml.bz2, we can accomplish this by choosing what goes into the .xml.bz2 (perhaps there are already tools for doing this? I don't know much about it) as long as we deal with the link-breaking we do as a result. (content, curation)
* Add a subset of images. (curation)
* Finding some way to handle images - the current code only works with text, and image links are broken. (code)
* Removing the wikitext parser from the server and rewriting it as an independent plugin/middleware/etc architecture so that other wiki syntaxes can be supported. Javascript, slimming down the current Mediawiki php parser, and Python middleware are all options. The current solution is a very simple/incomplete parser within the server code itself. (code)
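
As a sketch of what the ported server can look like with the standard-library BaseHTTPServer (Python 2, as shipped on the XO): lookup_article() below is a hypothetical stand-in for the index-plus-block-decompression machinery described above.

<pre>
import urllib
import BaseHTTPServer

class WikiHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the request path as an article title, e.g. /Peru
        title = urllib.unquote(self.path.lstrip('/'))
        html = lookup_article(title)  # hypothetical: index + bz2 block read
        if html is None:
            self.send_error(404, 'Article not in the local dump')
            return
        self.send_response(200)
        self.send_header('Content-Type', 'text/html; charset=utf-8')
        self.end_headers()
        self.wfile.write(html)

if __name__ == '__main__':
    BaseHTTPServer.HTTPServer(('', 9090), WikiHandler).serve_forever()
</pre>
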
== Help wanted ==


=== People ===
* [[User:cjb|Chris Ball]] is the person to contact if you're unsure of how to get started.
* [[User:Wadeb|Wade]] - working on the Python port and can help people trying to implement any of the desired featureset
* [[User:Sj|Sj]] - working on the featureset & tests
* [[User:mad|mad]] - working on a Spanish portal page


=== Todo list ===

;Porting server code from Ruby to Python : {{done}} (wade)

;Creating a python activity wrapper : {{done}} (wade) -- though this needs testing for journal/sugar integration (a rough sketch of such a wrapper follows this list). Remaining steps:
# Write a README on how to run your ported code and include it in the bundle.
# It would also be super nice to contact Patrick, the original "wikipedia on the iphone" developer, and work with that community to integrate your code into theirs. Chris [http://groups.google.com/group/wikipedia-iphone/browse_thread/thread/58fe1472eea3f117 made a start] on this.
# Contact the testers who have signed up below, and give them instructions on how you'd like them to try out the code you've written, and what kind of feedback you're looking for.

;Creating a spanish-language portal page : mad is working on this
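
Separately from the checklist above, here is a rough sketch of what the activity wrapper can look like under the Sugar API of this era (Python 2); run_server and PORT are hypothetical stand-ins for this project's real server entry point, and a real activity would embed a browser widget rather than a placeholder label.

<pre>
import threading

import gtk
from sugar.activity import activity

class WikiBrowseActivity(activity.Activity):
    def __init__(self, handle):
        activity.Activity.__init__(self, handle)
        # Serve articles from the compressed dump in the background.
        server = threading.Thread(target=run_server, args=(PORT,))
        server.setDaemon(True)
        server.start()
        # A real activity would set an embedded browser widget
        # pointed at http://localhost:PORT/ as the canvas.
        self.set_canvas(gtk.Label('wiki server running'))
        self.show_all()
</pre>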



=== Testers ===
If you're willing to test code once the developers have it working, place your name below (make sure it links to some form of contact information).

* [[User:Mchua|Mel Chua]]
* [[User:RafaelOrtiz|Rafael Ortiz]]
* [[User:Sj|Sj]]