How to test an Activity

This page is part of the OLPC Community testing Project.

Introduction

Activity testing is important for volunteer Activity developers, so that they know which bugs to fix, and for end-users (teachers and students in deployments), so that they know which Activities they can rely on. Activity testing also helps OLPC decide which Activities to ship with G1G1 laptops and which ones to recommend to large-scale country deployments.

This page is a guide for community members who want to contribute to testing an Activity.

Pick an Activity

The first step is to pick an Activity that you'd like to test. There are many good reasons for choosing a particular Activity, but the best reason of all is that you like it and want to see it more widely used.

For this tutorial, we will assume that you already have the Activity installed and running on your XO, and that you already know how to use it. We will use Analyze as the example in the sections that follow.
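When you later report results, it helps to record exactly which version of the Activity you tested. An .xo bundle is an ordinary zip archive containing an activity/activity.info file that lists the Activity's name, bundle_id, and activity_version. Here is a minimal Python sketch that prints those fields; the filename Analyze-9.xo is only a placeholder, and the sketch assumes the bundle follows the standard Sugar layout:

 # A minimal sketch: read the name and version out of an .xo bundle.
 # "Analyze-9.xo" is a placeholder filename; use the bundle you downloaded.
 import zipfile

 with zipfile.ZipFile("Analyze-9.xo") as bundle:
     # Every standard bundle ships an activity/activity.info file under
     # its top-level directory, so match on the path suffix.
     info_path = next(n for n in bundle.namelist()
                      if n.endswith("activity/activity.info"))
     for line in bundle.read(info_path).decode("utf-8").splitlines():
         if line.startswith(("name", "activity_version", "bundle_id")):
             print(line.strip())

Including the printed activity_version in your test report makes it unambiguous which release your results apply to.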

Look at existing resources

The next thing to do is to look at what resources already exist for users, developers, and testers of the Activity. If someone wanted to find information about the Activity you are testing, where could they look?

There are usually 4 main sources of information for each Activity:

  1. The Activity's wiki page (Example: Analyze)
  2. The Activity's test page (Example: Tests/Activity/Analyze)
  3. The open bugs filed against the Activity's Trac component (see the sketch after this list)
  4. The maintainer's contact information
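Trac can export the results of a bug query as CSV, which makes it easy to pull an Activity's current open bugs into a script before you start testing. The sketch below assumes Python 3; dev.laptop.org is the OLPC Trac instance, but the component name analyze-activity and the exported column names are assumptions you should check against the real query page for your Activity:

 # A minimal sketch: list the open bugs for an Activity's Trac component.
 # The component name below is an assumed example; substitute the real one.
 import csv
 import io
 import urllib.request

 url = ("https://dev.laptop.org/query"
        "?component=analyze-activity&status=!closed&format=csv")
 with urllib.request.urlopen(url) as resp:
     # Trac's CSV export includes the queried columns as headers;
     # "id" and "summary" are assumed to be among them.
     for row in csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")):
         print(row["id"], row["summary"])

Skimming the open bugs first keeps you from re-reporting known problems and shows you which areas of the Activity already need attention.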

Decide on testing goals

  • What is the "product" that you're testing? The Activity alone? The Activity and its documentation? The Activity plus related hardware (sensors, etc., in the case of something like Measure), and is an XO required?
  • What oracles are you using for those products to act as your standard of "correct" behavior? (feature lists, specs, etc.)
  • What strategies are you using to organize your tests? (see the sketch after this list)
  • How "complete" is "good enough"? (There is no such thing as "100% completed testing.")