Talk:Content stamping

It may be unclear whether someone is reviewing an entire website, just one web page, or some set of web pages. The distinction is sometimes (but not always) clear to humans, and even less so to computers. (A rough sketch of one way to record a review's intended scope follows this comment.)

Examples of ambiguities:

* An interesting article that has been split across multiple pages; together the pages form the work.
* A blog with timely information on a subject, e.g. current events; future pages that don't yet exist may effectively fall under the review.
* An entire website that has useful information.
* A web application that can only really be used interactively; the form, not the content, is what is interesting.

-- [[User:Ian Bicking|Ian Bicking]] 11:11, 16 March 2007 (EDT)
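To make the ambiguity above concrete, here is a minimal sketch (not from the comment above; the names and fields are hypothetical) of how a review's intended scope might be recorded explicitly, so that software doesn't have to guess whether a rating covers one page, a fixed set of pages, everything under a path, or a whole site:

<pre>
# Hypothetical sketch: record a review's scope explicitly instead of guessing it.
from dataclasses import dataclass
from enum import Enum
from urllib.parse import urlparse


class Scope(Enum):
    PAGE = "page"          # one specific URL
    PAGE_SET = "page_set"  # an explicit list of URLs (e.g. a split article)
    PREFIX = "prefix"      # everything under a path, including future pages
    SITE = "site"          # a whole domain


@dataclass
class Review:
    scope: Scope
    urls: list             # one URL, or several for PAGE_SET
    comment: str = ""

    def covers(self, url: str) -> bool:
        """Decide whether this review applies to a given URL."""
        if self.scope in (Scope.PAGE, Scope.PAGE_SET):
            return url in self.urls
        if self.scope is Scope.PREFIX:
            return any(url.startswith(prefix) for prefix in self.urls)
        if self.scope is Scope.SITE:
            return any(urlparse(url).netloc == urlparse(site).netloc
                       for site in self.urls)
        return False


# A blog review intended to also cover posts that don't exist yet:
blog_review = Review(Scope.PREFIX, ["http://example.org/blog/"])
print(blog_review.covers("http://example.org/blog/2007/03/new-post"))  # True
</pre>

Even with an explicit scope field, the reviewer still has to pick one, which is exactly the judgment call described above.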

----

Valuation functions and general automation seem complex and unreliable to me. In theory they could be useful, but in practice you need a lot of ratings to get something meaningful -- not because of the quality of the individual reviews (which may all be just fine), but because of the lack of a common rating standard, or even common criteria. For example, group A might find lots of new, interesting content, while group B is looking for a small set of content directly related to one subject area. The ratings of group A could be very helpful to group B, as they identify potentially interesting information. But the actual selection process is something group B wants to do themselves. Mixing the two groups really ''enables'' group B to do all of their own selection, as they can focus on a smaller set of content that has already had some basic vetting. Aggregating and weighting ratings doesn't seem very useful there. -- [[User:Ian Bicking|Ian Bicking]] 11:15, 16 March 2007 (EDT)
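To illustrate the contrast above with a purely hypothetical sketch (nothing here is taken from the Content stamping proposal; the function and variable names are made up), a coarse "has anyone vetted this?" filter is far simpler than a weighted valuation function, and it may be all that group B needs before doing its own selection:

<pre>
# Hypothetical sketch: aggregation into a single score vs. a coarse vetting filter.
# Ratings are (item, score) pairs on a 1-5 scale; all names are illustrative only.
from collections import defaultdict


def aggregate(ratings):
    """Valuation-function style: collapse all ratings into one average score."""
    scores = defaultdict(list)
    for item, score in ratings:
        scores[item].append(score)
    return {item: sum(s) / len(s) for item, s in scores.items()}


def vetted(ratings, minimum=1):
    """Coarse filter: anything rated at least `minimum` times counts as vetted.
    Group B then does its own subject-specific selection within this set."""
    counts = defaultdict(int)
    for item, _score in ratings:
        counts[item] += 1
    return {item for item, n in counts.items() if n >= minimum}


group_a_ratings = [("article-1", 4), ("article-2", 2), ("article-1", 5)]
print(aggregate(group_a_ratings))  # {'article-1': 4.5, 'article-2': 2.0}
print(vetted(group_a_ratings))     # {'article-1', 'article-2'} (order may vary)
</pre>

The second function discards the scores entirely and only records that some vetting happened; whether that loss of information matters is exactly the question raised above.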