Annotation

We want to support annotation of any document, in a generalized way that can be supported by a unified aggregation and sharing system (where annotations and comments are similar to other objects in the object store). Media that should support annotation include documents and images, and perhaps any webpage or item viewed through a browser. In the extreme, one can imagine adding notes to any moment in time using a laptop, associated as closely as possible with a specific item that has its own identifier, or with a specific activity, or at least with a combination of timestamp, screenshot, and context.

We should support elegant libraries for displaying aggregated notes; levels of publicity (and perhaps ways to change these after the fact for clusters of notes); and ways to highlight annotations and reviews as they take place.

In October 2008, there was discussion on sugar@ about annotation in Browse in particular, and its interaction with the Journal.

Types of annotation

An annotation is any kind of data attached to another page, document, or object. Generally you do not need the author's permission to add such comments or discussion. You may share your annotations with other users, or keep them private.

An annotation may be:

  • A comment that applies to a specific range of text
  • Something directed at a coordinate location in a PDF or image
  • A comment applied to a document generally
  • A comment applied to another annotation (forming a threaded discussion)
  • A rating or recommendation
  • A copyedit intended for the author
  • No comment, but simply the highlighting of a range of text or a pointer to something in a PDF (indicating a vague sense of "this is important or interesting")

As a result there are many optional aspects to an annotation -- the comment text is optional, the text range is optional, tags are optional, ratings are optional, etc.
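To make that optionality concrete, here is a minimal sketch of an annotation record in Python; the field names are illustrative assumptions, not part of any agreed format.

  # Minimal sketch of an annotation record. Every field except the target is
  # optional, mirroring the list above; all names here are illustrative only.
  from dataclasses import dataclass, field
  from typing import List, Optional, Tuple

  @dataclass
  class Annotation:
      target: str                                    # URL or id of the annotated work
      author: Optional[str] = None                   # who made the note
      comment: Optional[str] = None                  # free text; absent for a bare highlight
      text_range: Optional[Tuple[int, int]] = None   # offsets of a highlighted span
      point: Optional[Tuple[float, float]] = None    # coordinate in a PDF page or image
      rating: Optional[int] = None                   # 0-5, see Rating below
      tags: List[str] = field(default_factory=list)  # see Tagging/Categorisation below
      in_reply_to: Optional[str] = None              # URL of a parent annotation (threading)
      shared: bool = False                           # private by default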

Ratings and tags

Del.icio.us is a quick example.

Inline comments and notes

Heat maps such as co-ment and stet, like svn blame for text, allow a quick overview of thousands of granular comments within the context of a larger work.


Reviews

See content stamping for a specific kind of annotation that supports reviewing.

Other kinds of review include the traditional review: a long essay on a reasonably long work.


Desired Features

Aggregation

It is useful to aggregate annotations. In the simplest case, we want to retrieve annotations from several sources.

Automatically aggregated annotations can also be useful. An aggregator may pull together annotations from many sources and republish a selection of the annotations. For example, the aggregator may drop what it judges to be spam, or only republish what it judges to be the most interesting annotations.
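A minimal sketch of such an aggregator, under the assumption that fetch_entries, looks_like_spam, and interest_score are supplied by the caller (none of these are existing APIs):

  # Hypothetical aggregator: pull annotations from several feeds, drop what
  # looks like spam, and republish only the most interesting entries.
  def aggregate(feed_urls, fetch_entries, looks_like_spam, interest_score, limit=50):
      entries = []
      for url in feed_urls:
          entries.extend(fetch_entries(url))                 # gather from many sources
      kept = [e for e in entries if not looks_like_spam(e)]  # drop judged spam
      kept.sort(key=interest_score, reverse=True)            # most interesting first
      return kept[:limit]                                    # republish a selection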

Querying

A standard method of querying annotation feeds is necessary for the interaction of aggregators and clients. We identified the following aspects of annotations where querying would be useful (a hypothetical query sketch follows the list):

  1. Annotation title
  2. Annotation body
  3. Target URL - Clients query with this term to find annotations for a specific URL.
  4. Target Content-Type - Useful for differentiating between annotations on images, videos, text, etc.
  5. In-reply-to - Return annotations replying to an annotation.
  6. Author - Find annotations from a given author, matched by e-mail address or name.
  7. Updated/creation date - Show entries updated or created during specific time periods.
  8. Feed - Show entries from a specific origin feed.

Specifying the Content Being Annotated

We weren't able to find any existing protocols for specifying target content, so we identified the two main use cases:

  1. Annotating a page as a whole (Digg-like).
  2. Annotating specific sections of a page.

These are of course related.

By specifying the original publishing URL of the entry as the annotation target, one can annotate an annotation.
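A sketch of how a target might be expressed, assuming a placeholder ann:target element and namespace (this page does not fix a concrete element for it); the commented-out line shows targeting another annotation by its original publishing URL:

  # Hypothetical target markup; the <ann:target> element and its namespace
  # URI are placeholders, not part of any agreed protocol.
  import xml.etree.ElementTree as ET

  ATOM = 'http://www.w3.org/2005/Atom'
  ANN = 'http://example.org/ns/annotation'   # placeholder namespace URI

  entry = ET.Element('{%s}entry' % ATOM)
  # Annotate a page as a whole:
  ET.SubElement(entry, '{%s}target' % ANN).text = 'http://example.org/library/book/ch3'
  # Or annotate an annotation by using its original publishing URL as the target:
  # ET.SubElement(entry, '{%s}target' % ANN).text = 'http://example.org/annotations/42'
  print(ET.tostring(entry, encoding='unicode'))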

Threading

RFC 4685 covers Atom threading in detail.
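For reference, RFC 4685 defines the in-reply-to extension element in the http://purl.org/syndication/thread/1.0 namespace; a minimal sketch of a reply entry (the ids and URLs are made up):

  # Minimal sketch of an Atom reply entry using RFC 4685 (thr:in-reply-to).
  import xml.etree.ElementTree as ET

  ATOM = 'http://www.w3.org/2005/Atom'
  THR = 'http://purl.org/syndication/thread/1.0'
  ET.register_namespace('', ATOM)
  ET.register_namespace('thr', THR)

  reply = ET.Element('{%s}entry' % ATOM)
  ET.SubElement(reply, '{%s}title' % ATOM).text = 'Re: margin note on chapter 3'
  ET.SubElement(reply, '{%s}in-reply-to' % THR, {
      'ref': 'tag:example.org,2008:annotation-42',       # id of the parent annotation
      'href': 'http://example.org/annotations/42.atom',  # where to fetch it
  })
  print(ET.tostring(reply, encoding='unicode'))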

Rating

A simple optional value between 0 and 5 indicating the poster's rating of the target.

We settled on adding an <ann:rating>N</ann:rating> element, which gives a user rating for the target page.

hReview was considered, but it seemed overkill for simply adding a rating. One possible idea borrowed from hReview: a rating could be placed on a category, like <category term="history" ann:rating="5" />, to indicate a rating against a particular criterion (e.g., this is a very good history text).
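A sketch of both variants; the element and attribute shapes follow the text above, but the namespace URI for the ann prefix is a placeholder since none is specified here:

  # Sketch of the two rating variants; the "ann" namespace URI is a placeholder.
  import xml.etree.ElementTree as ET

  ATOM = 'http://www.w3.org/2005/Atom'
  ANN = 'http://example.org/ns/annotation'   # placeholder namespace URI

  entry = ET.Element('{%s}entry' % ATOM)
  # Plain 0-5 rating of the target page:
  ET.SubElement(entry, '{%s}rating' % ANN).text = '4'
  # hReview-inspired variant: rate the target against one criterion by
  # putting the rating on a category element:
  ET.SubElement(entry, '{%s}category' % ATOM,
                {'term': 'history', '{%s}rating' % ANN: '5'})
  print(ET.tostring(entry, encoding='unicode'))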

Tagging/Categorisation

Tagging/categorisation is not fundamental to annotation, but the advantages it brings to the exploration and discovery of new content are significant and worthwhile.

There are several tagging formats. We couldn't identify any significant advantage of using these formats over the atom:category element. Others have a similar opinion, though obviously there is no consensus.
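A sketch of tags expressed as plain atom:category elements, as suggested above (the tag values are examples):

  # Tags as plain atom:category elements.
  import xml.etree.ElementTree as ET

  ATOM = 'http://www.w3.org/2005/Atom'
  entry = ET.Element('{%s}entry' % ATOM)
  for tag in ('history', 'grade-5', 'needs-review'):
      ET.SubElement(entry, '{%s}category' % ATOM, {'term': tag})
  print(ET.tostring(entry, encoding='unicode'))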

Publishing

Viewing Annotations

When annotations are separate from the underlying work, one can see a constellation of notes from many people. A few views which we want to readily support (a filtering sketch follows these lists):

  • no comments
  • my own comments
  • comments from a group (myself/class/teachers)
  • all comments
  • new comments

We also want to limit the types of annotation viewed to an area of interest:

  • Point-and-click annotation associated with a spot on an image or page
  • Selection annotation associated with a string in a document or region in an image
  • Block annotation associated with a paragraph or block in a document or region in an image
  • Document-level annotation such as tags or reviews
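A sketch of filtering along both axes at once; the attribute names (author, kind, unread) are illustrative assumptions about whatever annotation record is used:

  # Hypothetical filter combining a "whose comments" view with a "what kind
  # of annotation" view; attribute names are illustrative only.
  def visible(annotations, me, group=(), view='all', kinds=None):
      """view: 'none' | 'mine' | 'group' | 'all' | 'new'
      kinds: optional set drawn from {'point', 'selection', 'block', 'document'}"""
      def wanted(a):
          if view == 'none':
              return False
          if view == 'mine' and a.author != me:
              return False
          if view == 'group' and a.author not in set(group) | {me}:
              return False
          if view == 'new' and not getattr(a, 'unread', False):
              return False
          if kinds is not None and getattr(a, 'kind', 'document') not in kinds:
              return False
          return True
      return [a for a in annotations if wanted(a)]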

Annotation in Browse

(See also the discussion on the talk page.)

Browse can use plugins to view PDFs and media files. At the same time, it can track annotations made during that interaction, and can store the last point or page viewed or read. This should be stored somehow in the Journal, and made available on resuming that interaction with the same file.
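One way the resume point could be kept, as a sketch assuming the classic sugar.datastore API and an invented 'last-page' metadata key (no key is standardized here):

  # Sketch: remember and restore the last page read, assuming the classic
  # sugar.datastore API; the 'last-page' metadata key is invented.
  from sugar.datastore import datastore

  def save_last_page(object_id, page_number):
      ds_object = datastore.get(object_id)        # the Journal entry for this file
      ds_object.metadata['last-page'] = str(page_number)
      datastore.write(ds_object)

  def resume_page(object_id):
      ds_object = datastore.get(object_id)
      try:
          return int(ds_object.metadata['last-page'])
      except KeyError:
          return 0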

Question: should you be able to annotate a document and store the annotation locally when you don't have the document at hand and only saw it in passing? If so, how?
Question: is there a reasonably reliable way to have a set of related annotations at hand even when looking at a different but similar file, say two editions of the same work, or a later revision of the same image or page? This depends on how flexibly documents are identified (whether there is a metric on identification to allow a notion of similarity between documents) and how flexibly annotations are linked to specific parts of documents (whether their validity is clear when the original subpart they refer to changes or disappears).


Implementation ideas

API Proposals

Here are two proposals.

  1. Original Annotation API Proposal by Ian Bicking and Joshua Gay
  2. Comment Anywhere Annotation Protocol Proposal by Alec Thomas and Alan Green

XSS Security

We will be injecting other people's HTML into content. We must be sure this HTML does not contain dangerous content, such as JavaScript that itself makes XMLHttpRequest calls, so we must scrub the HTML carefully. Doing this in JavaScript on the client, when loading the comments, would be the most secure but is difficult. We could require XHTML embedded in the Atom to make this easier, or we could rely on server-side filtering of the HTML.
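A sketch of the server-side option using lxml.html.clean (also mentioned under References below); the Cleaner settings shown are illustrative, not a vetted policy:

  # Server-side scrubbing sketch with lxml.html.clean; settings are illustrative.
  from lxml.html.clean import Cleaner

  cleaner = Cleaner(scripts=True,      # drop <script> elements
                    javascript=True,   # drop javascript: URLs and on* handlers
                    embedded=True,     # drop <embed>/<object>
                    frames=True,       # drop frames and iframes
                    forms=True,        # drop form elements
                    style=True)        # drop <style> and style attributes

  def scrub(comment_html):
      """Return a sanitized version of untrusted comment HTML."""
      return cleaner.clean_html(comment_html)

  # scrub('<p onclick="x()">hi</p><script>bad()</script>') keeps the paragraph
  # and removes the script and the handler.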


References

  • Server-side HTML filtering in lxml.html.clean (http://codespeak.net/svn/lxml/branch/html/src/lxml/html/clean.py) -- Ian Bicking
  • We're working on an Atom store for tagging (a related kind of annotation) called TaggerStore (https://svn.openplans.org/svn/TaggerStore/trunk); it's still at an early stage -- Ian Bicking


Prior Work

There's a lot of prior work in this area which is worth learning from. For example:

Straightforward document commenting interfaces

  • Stet (used to display comments on the GPL v3 draft: http://gplv3.fsf.org/comments/gplv3-draft-2.html)
  • Django book comment system (http://www.djangobook.com/about/comments/)

Annotation systems

  • Annotea (http://www.w3.org/2001/Annotea/)

Complete transliterature projects and descriptions

  • Project Xanadu (http://xanadu.com) / Transliterature (http://transliterature.com) / Transquoting
    Good motivation and wild diagrams, for a quite comprehensive reworking of the links between texts, metadata, and annotations.

Annotation scripts

  • Annotation (http://www.geof.net/code/annotation), Commentary

Specific metadata-gathering projects

  • Bitzi Bitpedia (http://bitzi.com/bitpedia/)
    What is most relevant here? Their readings (http://bitzi.com/about/metadata) don't indicate much of substance to learn from, and though they seem to care about matching files to specific fingerprints in an intelligent way, and to have some academic good intentions, I don't see any interfaces that allow for finding or clustering related works or versions of the same work, and little success in dealing with comments, reviews, and similar annotations. (Plus their actual implementation is crippled by ads.)
  • Open Library & Wikicite