Wikislices
Revision as of 17:47, 26 February 2008
In this context, a Wikislice is a collection of articles pulled from Wikipedia for the WikiReader activity on the XO. The goal is to select well-written, well-structured, and well-cited articles from Wikipedia while excluding the rest. The entire English Wikipedia is very large and would not fit on the XO; nor are 1000+ articles on Pokémon characters important educational materials for the developing world.
Please visit the project about wikislices on the English Wikipedia.
Needs
Snapshot tools
To quickly make and revise slices, one needs
- libraries for producing different styles of wikipedia snapshots (wikitext, html, txt, pdf) from categories (special:export), topics/pages (wikiosity), and index pages (wikislices)
- libraries that can do intelligent things with metadata from history and wikimedia-commons pages
- libraries that support no-image/thumbnail/mid-res image selection
- libraries that recalculate blue v. red links given a wikislice
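The last item, recalculating blue vs. red links, can be sketched in a few lines: given the set of titles actually included in a slice, scan a page's wikitext for [[...]] links and classify each target. A minimal sketch, not an existing library; the function name and the first-letter normalization rule are assumptions:

```python
import re

# [[Target]] or [[Target|display text]]; section-only links (#...) excluded
WIKILINK = re.compile(r"\[\[([^|\]#]+)(?:\|[^\]]*)?\]\]")

def classify_links(wikitext, slice_titles):
    """Split [[...]] targets into blue (inside the slice) and red (outside)."""
    blue, red = set(), set()
    for m in WIKILINK.finditer(wikitext):
        target = m.group(1).strip()
        # MediaWiki titles are case-insensitive only in the first letter
        norm = target[:1].upper() + target[1:]
        (blue if norm in slice_titles else red).add(norm)
    return blue, red
```

A reader could then render red-linked titles as plain text, or use the red set to suggest articles to pull into the next revision of the slice.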
For Wikipedia
- Produce snapshots of Wikipedia by size (10MB, 100MB, 1GB, 10GB, and 100GB)
- Produce snapshots by topic (Wikiosity, categories, explicit wikislice lists)
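Producing size-bounded snapshots is essentially a packing problem: given articles already ranked by importance (the ranking is assumed to come from elsewhere, e.g. Wikiosity scores), keep adding them until the byte budget is exhausted. A hedged, first-fit greedy sketch:

```python
def pack_by_size(articles, budget_bytes):
    """Greedily fill a byte budget.

    articles: list of (title, size_in_bytes), already sorted by priority.
    Returns the chosen titles and the total bytes used.
    """
    chosen, used = [], 0
    for title, size in articles:
        if used + size <= budget_bytes:
            chosen.append(title)
            used += size
    return chosen, used
```

Running this once per target (10MB, 100MB, 1GB, ...) over the same ranked list yields a nested family of snapshots, so the 10MB slice is always a subset of the 100MB one.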
Format conversion =
There are many existing flavors/formats for bundles of wiki-subsets for offline use.
Conversion tools are needed to exchange collections between similar projects: WP WikiReaders, Wikibooks, Wikipedia wikislice projects, Webaroo wikislices, Kiwix snapshots, Schools-Wikipedia snapshots, Ksana snapshots, and WP 1.0 revision-vetting. If literal conversion from one format to another isn't possible, index selections of articles and revisions should at least be shared.
Offline readers
As a recent focal point, Zvi Boshernitzan and Ben Lisbakken have both made offline Wikipedia readers using Google Gears that are fast and offer some nice features: letting you select a set of articles, cache them locally, and browse an index. We talked last week about how to integrate Gears more tightly into a browsing experience, with hopes of pursuing a prototype within a couple of weeks. It would be helpful to inform such a client with lists of good revisions of articles, such as those Martin Walker and Andrew Cates have developed for their own projects, and to plan for it to support offline editing as well as reading, using synchronization tools such as Mako's distributed wiki client.
Todo
- wikipediaondvd: Pascal and Guillame are wondering how they can help
Examples
We are planning on shipping a general collection of material with the XO. Additional packages will be shipped with the XS School Server or available online. Collections could be added to a student's XO based on classroom assignments or simply a child's interest in a subject.
Common examples are topical wikislices from Wikipedia, resulting in books such as the "Solar system" Wikijunior text and various WikiReaders. The tools currently used to make wikislices are regular-expression toolkits.
See User:ZdenekBroz and the library grid for some examples.
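The regular-expression approach mentioned above typically amounts to stripping markup a slice doesn't want to carry. A small illustrative pass over raw wikitext; the two cleanup rules below are assumptions (real slicing scripts apply many more, and nested templates need a proper parser):

```python
import re

def strip_templates(wikitext):
    """Drop non-nested {{...}} templates and inline <ref>...</ref> citations."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", wikitext)
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
    return text
```

This kind of pass shrinks articles considerably (infoboxes and citation templates dominate many pages) at the cost of losing the structured data they carry, which is exactly the trade-off a slice builder tunes.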
Health
- In conjunction with other Health Content, a wikislice of relevant health materials would be very useful.
- A bundle of "Where There is No Doctor" is underway, with a PDF-to-HTML conversion care of Pascal
Science
- An update to the Biology bundle adding fungi and protists is slowly underway, as is an update clarifying the licenses of the images (all CC-BY)
- Something for a bug blitz would be most helpful, drawing on the above and related zipcodezoo and misha h's content.
Current scripts
Scripts that are currently used to make bundles:
PDF and single-document exports
- wiki2pdf
HTML dumps
- wikiwix export (being built): takes in a list of wikiwix entries, outputs ?
- wikiwix interface (being improved): allows selection, via a Firefox plugin, of a set of articles for a collection
Summaries and weight-watchers
- Summarize list: takes in a list of article titles, outputs a directory of one-paragraph HTML files with CSS (by Zdenek, not published yet)
- Compress images: take a set of pages and images, reduce images according to a slider
  - no images (remove altogether) vs. hotlink images (include original thumbnail, alt text when offline)
  - include the first {0-10} images on a page, with metadata
  - thumbnail only vs. full image (but not extra large) vs. all image sizes (full screen and more-than-fullscreen where available)
  - bonus: assume a local resize tool vs. store 3 images for large instances
Older notes
Code libraries
- KsanaForge and their KsanaWiki project have a set of scripts that process raw xml dumps from MediaWiki. They are working on producing read-only flash drives and SD cards for distribution.
- Linterweb, developer of one of the freely available static selections of Wikipedia, has an open-source toolchain for building it; they are also working on wiki search engines (see Kiwix) and have offered to help build the local-filesystem search for the Journal.
- The MoulinWiki project and Renaud Gaudin have a toolchain for processing HTML output from the MediaWiki parser. They are now combining forces with Linterweb.
- PediaPress has an "mwlib" library for parsing MediaWiki text, which is freely available.
- The "Wikipedia 1.0" team and Andrew Cates (user:BozMo on en:wp) are using their own scripts to generate and review static collections from a list of constantly changing wiki articles.