Annotation

We want to support annotation of any document, in a generalized way that can be supported by a unified aggregation and sharing system (where annotations/comments are similar to other objects in the object store). Media that should support annotation include documents and images, and perhaps any webpage or item viewed through a browser. At the extreme, one can imagine adding a note to any moment in time on the laptop, associated as closely as possible with a specific item that has its own identifier, or with a specific activity, or at least with a combination of timestamp, screenshot, and context.

We should provide elegant libraries for displaying aggregated notes, levels of publicity (with, perhaps, ways to change the publicity of a cluster of notes after the fact), and ways to highlight annotations and reviews as they take place.

In October 2008, there was discussion on the sugar@ mailing list about annotation in Browse in particular and its interaction with the Journal.

Types of annotation

An annotation is any kind of data layered onto another page, document, or object. Generally you do not need the author's permission to add such comments or discussion. You may share your annotations with other users, or keep them private.

An annotation may be:

  • A comment that applies to a specific range of text
  • Something directed at a coordinate location in a PDF or image
  • A comment applied to a document generally
  • A comment applied to another annotation (forming a threaded discussion)
  • A rating or recommendation
  • A copyedit intended for the author
  • No comment, but simply the highlighting of a range of text or a pointer to something in a PDF (indicating a vague sense of "this is important or interesting")

As a result, many aspects of an annotation are optional: the comment text, the text range, the tags, the rating, and so on.

Ratings and tags

Del.icio.us is a quick example of tagging applied to whole pages.

Inline comments and notes

Heat maps such as those in co-ment and stet, like svn blame for text, allow a quick overview of thousands of granular comments within the context of a larger work.


Reviews

See content stamping for a specific kind of annotation that supports reviewing.

Other kinds include traditional reviews: long essays on a reasonably long work.


Desired Features

Aggregation

It is useful to aggregate annotations. In the simplest case, we want to retrieve annotations from several sources.

Automatically aggregated annotations can also be useful. An aggregator may pull together annotations from many sources and republish a selection of the annotations. For example, the aggregator may drop what it judges to be spam, or only republish what it judges to be the most interesting annotations.
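A minimal sketch of such an aggregator, assuming annotations are published as Atom feeds and using the feedparser library; the feed URLs and the is_spam() heuristic are placeholders, not part of any agreed protocol.

  # Minimal aggregator sketch: merge several annotation feeds and drop
  # entries judged to be spam. Feed URLs and the spam test are placeholders.
  import time
  import feedparser

  FEED_URLS = [
      "http://example.org/alice/annotations.atom",
      "http://example.org/bob/annotations.atom",
  ]

  def is_spam(entry):
      # Placeholder heuristic: drop entries that have no body at all.
      return not entry.get("summary") and not entry.get("content")

  def aggregate(urls=FEED_URLS):
      entries = []
      for url in urls:
          feed = feedparser.parse(url)
          entries.extend(e for e in feed.entries if not is_spam(e))
      # Newest first; feedparser exposes parsed dates as time.struct_time.
      entries.sort(key=lambda e: e.get("updated_parsed") or time.gmtime(0),
                   reverse=True)
      return entries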

Querying

A standard method of querying annotation feeds is necessary for the interaction of aggregators and clients. We identified the following aspects of annotations where querying would be useful:

  1. Annotation title
  2. Annotation body
  3. Target URL - Clients query with this term to find annotations for a specific URL.
  4. Target Content-Type - Useful for differentiating between annotations on images, videos, text, etc.
  5. In-reply-to - Return annotations replying to an annotation.
  6. Author - Find annotations from a particular author, by e-mail address and name.
  7. Updated/creation date - Show entries updated or created during specific time periods.
  8. Feed - Show entries from a specific origin feed.
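A hedged sketch of how a client might express these criteria as URL parameters; the endpoint and parameter names below are purely illustrative, since no query protocol has been fixed.

  # Illustrative only: the endpoint and parameter names are hypothetical.
  from urllib.parse import urlencode

  def build_query(endpoint, **criteria):
      """Encode the query aspects listed above as URL parameters."""
      params = {k: v for k, v in criteria.items() if v is not None}
      return endpoint + "?" + urlencode(params)

  url = build_query(
      "http://example.org/annotations/search",        # hypothetical aggregator
      target="http://example.org/stories/frog.html",  # 3. target URL
      content_type="text/html",                       # 4. target Content-Type
      author="kim@example.org",                       # 6. author
      updated_after="2008-10-01",                     # 7. updated/creation date
  )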

Specifying the Content Being Annotated

We weren't able to find any existing protocols for specifying target content, so we identified the two main use cases:

  1. Annotating a page as a whole (Digg-like).
  2. Annotating specific sections of a page.

These are of course related.

By specifying the original publishing URL of an annotation entry as the target, one can annotate an annotation.
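One possible encoding, sketched here rather than taken from any existing protocol: the target is named by an atom:link, with a fragment identifier narrowing it to a section. Using the "related" link relation for this purpose, and the fragment convention, are assumptions of this sketch.

  # Sketch only: naming the annotated content with an atom:link.
  import xml.etree.ElementTree as ET

  ATOM = "http://www.w3.org/2005/Atom"

  def annotation_entry(target_url, title, body, fragment=None):
      entry = ET.Element("{%s}entry" % ATOM)
      ET.SubElement(entry, "{%s}title" % ATOM).text = title
      ET.SubElement(entry, "{%s}content" % ATOM, type="text").text = body
      href = target_url if fragment is None else target_url + "#" + fragment
      ET.SubElement(entry, "{%s}link" % ATOM, rel="related", href=href)
      return entry

  # 1. Digg-like annotation of the whole page:
  whole = annotation_entry("http://example.org/frog.html", "Nice story", "Liked it.")
  # 2. Annotation of a specific section, via a fragment identifier:
  part = annotation_entry("http://example.org/frog.html", "Typo", "s/teh/the/",
                          fragment="para-12")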

Threading

RFC 4685 covers Atom threading in detail.
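A minimal sketch of a reply entry using the RFC 4685 extension; the id and URLs are placeholders.

  # Reply entry using the RFC 4685 threading extension (thr:in-reply-to).
  import xml.etree.ElementTree as ET

  ATOM = "http://www.w3.org/2005/Atom"
  THR = "http://purl.org/syndication/thread/1.0"

  reply = ET.Element("{%s}entry" % ATOM)
  ET.SubElement(reply, "{%s}title" % ATOM).text = "Re: Nice story"
  # thr:in-reply-to carries the atom:id of the annotation being answered.
  ET.SubElement(reply, "{%s}in-reply-to" % THR,
                ref="tag:example.org,2008:annotation-42",
                href="http://example.org/annotations/42",
                type="application/atom+xml")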

Rating

A rating is a simple optional value between 0 and 5 indicating the poster's rating of the target.

We settled on adding an <ann:rating>N</ann:rating> element, which gives the user's rating for the target page.

hReview was considered, but it seemed like overkill for simply adding a rating. One idea worth borrowing from hReview: a rating could be attached to a category, as in <category term="history" ann:rating="5" />, to rate the target against some particular criterion (e.g., this is a very good history text).
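A short sketch combining both forms above; the URI chosen for the ann namespace is a placeholder, since this page does not fix one.

  # Both rating forms in one entry; the "ann" namespace URI is a placeholder.
  import xml.etree.ElementTree as ET

  ATOM = "http://www.w3.org/2005/Atom"
  ANN = "http://example.org/ns/annotation"   # placeholder namespace

  entry = ET.Element("{%s}entry" % ATOM)
  # Overall 0-5 rating of the target page:
  ET.SubElement(entry, "{%s}rating" % ANN).text = "4"
  # Rating against one particular criterion, the hReview-inspired variant:
  ET.SubElement(entry, "{%s}category" % ATOM,
                {"term": "history", "{%s}rating" % ANN: "5"})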

Tagging/Categorisation

Tagging/categorisation is not fundamental to annotation, but the advantages it brings to the exploration and discovery of new content are significant and worthwhile.

There are several tagging formats. We couldn't identify any significant advantage of using these formats over the atom:category element. Others have a similar opinion, though obviously there is no consensus.
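As a sketch of the consuming side, feedparser already maps atom:category elements onto each entry's tags list; the feed URL here is a placeholder.

  # Reading atom:category tags back out of an annotation feed with feedparser.
  import feedparser

  feed = feedparser.parse("http://example.org/alice/annotations.atom")
  for entry in feed.entries:
      tags = [t.get("term") for t in entry.get("tags", [])]
      print(entry.get("title"), tags)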

Publishing

Viewing Annotations

When annotations are separate from the underlying work, one can see a constellation of notes from many people. A few views which we want to readily support:

  • no comments
  • my own comments
  • comments from a group (myself/class/teachers)
  • all comments
  • new comments

We also want to limit the types of annotation viewed to an area of interest:

  • Point-and-click annotation associated with a spot on an image or page
  • Selection annotation associated with a string in a document or region in an image
  • Block annotation associated with a paragraph or block in a document or region in an image
  • Document-level annotation such as tags or reviews
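A rough sketch of those two filter axes, assuming each annotation record carries author, kind (point, selection, block, document), and updated fields; all three field names are assumptions of this sketch.

  # Filter sketch: who made the annotation, what kind it is, and how new it is.
  def visible(annotations, me, group=None, kinds=None, since=None):
      for a in annotations:
          who = a.get("author")
          if group is not None and who != me and who not in group:
              continue
          if kinds is not None and a.get("kind") not in kinds:
              continue
          if since is not None and a.get("updated", "") < since:
              continue
          yield a

  # "Comments from my class, selections and blocks only, made this month":
  # visible(notes, me="kim", group=my_class, kinds={"selection", "block"},
  #         since="2008-10-01")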

Annotation in Browse

(See also the discussion on the talk page.)

Browse can use plugins to view PDFs and media files. At the same time, it can track annotations made during that interaction and can store the last point or page viewed or read. This should be stored somehow in the Journal and be available on resuming that interaction with the same file.
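The page leaves the exact storage open ("stored somehow in the Journal"), so the sketch below only shows the shape of the resume state, keyed by the document's URL; in Sugar this would live in the metadata of the Journal entry for that browsing session rather than in a standalone file.

  # Stand-in for Journal metadata: resume state keyed by the document's URL.
  import json

  def save_resume_state(path, url, last_page, annotations):
      try:
          with open(path) as f:
              state = json.load(f)
      except (IOError, ValueError):
          state = {}
      state[url] = {"last_page": last_page, "annotations": annotations}
      with open(path, "w") as f:
          json.dump(state, f)

  def load_resume_state(path, url):
      try:
          with open(path) as f:
              return json.load(f).get(url)
      except (IOError, ValueError):
          return None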

Question: should you be able to annotate a document and store the annotation locally when you don't have the document at hand and only saw it in passing? If so, how?
Question: is there a reasonably reliable way to have at hand a set of related annotations even when looking at a different but similar file, say two editions of the same work, or a later revision of the same image or page? This depends on how flexibly documents are identified (whether identification admits a notion of similarity between documents) and how flexibly annotations are linked to specific parts of documents (whether their validity is clear when the part they refer to changes or disappears).


Implementation ideas

API Proposals

Here are two proposals.

  1. Original Annotation API Proposal by Ian Bicking and Joshua Gay
  2. Comment Anywhere Annotation Protocol Proposal by Alec Thomas and Alan Green

XSS Security

We will be injecting other people's HTML into content, so we must be sure that HTML does not contain anything dangerous, such as JavaScript that makes its own XMLHttpRequests. The HTML must be scrubbed carefully. Doing this in JavaScript on the client, when the comments are loaded, would be the most secure place to do it, but it is difficult there. We could require XHTML embedded in the Atom to make such scrubbing feasible, or we could rely on server-side filtering of the HTML.
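A sketch of the server-side filtering option: a whitelist scrubber built on the standard library's html.parser that keeps a few harmless tags and drops every attribute (so no onclick= handlers and no javascript: URLs survive). The particular whitelist is an assumption.

  # Whitelist scrubber sketch; the set of allowed tags is an assumption.
  import html
  from html.parser import HTMLParser

  ALLOWED = {"p", "br", "em", "strong", "blockquote", "ul", "ol", "li"}

  class Scrubber(HTMLParser):
      def __init__(self):
          super().__init__(convert_charrefs=True)
          self.out = []

      def handle_starttag(self, tag, attrs):
          if tag in ALLOWED:
              self.out.append("<%s>" % tag)   # attributes deliberately dropped

      def handle_endtag(self, tag):
          if tag in ALLOWED:
              self.out.append("</%s>" % tag)

      def handle_data(self, data):
          self.out.append(html.escape(data))

  def scrub(dirty):
      s = Scrubber()
      s.feed(dirty)
      s.close()
      return "".join(s.out)

  # scrub('<p onclick="x()">hi <script>alert(1)</script></p>') -> '<p>hi alert(1)</p>'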


References


Prior Work

There's a lot of prior work in this area which is worth learning from. For example:

Straightforward document commenting interfaces

Annotation systems

Complete transliterature projects and descriptions

  • Project Xanadu / Transliterature / Transquoting
    Good motivation and wild diagrams for a quite comprehensive reworking of the links between texts, metadata, and annotations.

Annotation scripts

Specific metadata-gathering projects

  • Bitzi Bitpedia
    What is most relevant here? Their materials don't indicate much of substance to learn from. They seem to care about matching files to specific fingerprints in an intelligent way, and they have some academic good intentions, but I don't see any interfaces for finding or clustering related works or versions of the same work, and little success in dealing with comments, reviews, and similar annotations. (Their actual implementation is also crippled by ads.)
  • Open Library & Wikicite