TestMtg, 2007-12-02

Attending: Kim, Alex, Scott, Michael, Bernie, Chris, Jim, Adam


Agenda:

  1. Automated test harness(es): getting beyond tinderbox (some discussion on tinderbox is good, too!)
  2. GUI testing
  3. Activity Testing (Greg's project)
  4. Integration, System, Performance, Scaling (everything else)

Automated testing ideas:

  • Overarching goal is to have as much automated testing as possible to be able to provide 'reliable' builds.
  • A second motivation: testing takes a long time (even tinderbox, running on old hardware), so automation saves effort.
  • Also, a single test methodology can't catch everything; we need multiple test strategies:
    1. Tinderbox
    2. Functional tests
    3. GUI test tool (trac #5277)
    4. Headless tests (trac #5276)
      • Link checking the library (a sketch follows this list)
      • Verifying that file systems are appropriate
      • Text blacklist, etc. (trac #5275)
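
A minimal sketch of what the library link-checking test could look like, assuming the bundled library content is a local tree of HTML files; the script name and file layout are illustrative, not an agreed design:

  #!/usr/bin/env python3
  # check_library_links.py (hypothetical name): walk a local HTML tree and
  # report relative links that point at missing files.
  import sys
  from html.parser import HTMLParser
  from pathlib import Path

  class HrefCollector(HTMLParser):
      def __init__(self):
          super().__init__()
          self.hrefs = []

      def handle_starttag(self, tag, attrs):
          if tag == "a":
              for name, value in attrs:
                  if name == "href" and value:
                      self.hrefs.append(value)

  def broken_links(page):
      parser = HrefCollector()
      parser.feed(page.read_text(errors="ignore"))
      for href in parser.hrefs:
          # External and in-page links would need a network pass; skip them.
          if href.startswith(("http:", "https:", "mailto:", "#")):
              continue
          target = page.parent / href.split("#")[0]
          if not target.exists():
              yield href

  if __name__ == "__main__":
      root = Path(sys.argv[1])
      for page in root.rglob("*.html"):
          for href in broken_links(page):
              print("%s: broken link -> %s" % (page, href))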

Build process view (trac #5279):

  1. Collect packages from Cogi; make sure there are SRPMs
  2. Pilgrim puts all the pieces together and generates change logs
  3. Automated testing
  4. Installation: this build only shows up on download.laptop.org
    • Official builds (signed)
    • Test stream, release candidates (signed)
    • Development stream (always unsigned)


Need to get this all on a wiki page for discussion, review and reference for all developers and testers.


One goal of this process is that 'latest' releases (even joyride) should always compile and have passed the automated testing.


Test harnesses:

  • Tinderbox: Checks whether a build installs, boots, connects to the wireless mesh, runs activities, and takes power measurements
    1. Action item: Need Richard to create an MP unit for tinderbox to replace the old laptop there.
    2. Action item: Chris will take the action to get tinderbox reporting again.
  • Headless Python Test Suite: Scott is going to define this one - target 'easy' things that don't require XOs. Big low-hanging fruit.
    1. First: Infrastructure to tie builds and testing together (build or test slaves).
    2. This harness might be a good place to gather Figleaf-style coverage measurements.
    3. Provide info to developers on how to create nose tests (a minimal example follows this list).
  • Qemu harness: runs all the activities in a window on your desktop
    1. Can we pull versions from the build infrastructure?
    2. Can we do activity collaboration testing at this level?
  • Activity collaboration testing can be done without XOs
  • 'Test Activity' - Manual tests that people can run and report into a central database.
  • Hydra: a test cluster of a small number of XOs connected via serial connector. Michael Stone has created this harness. It has been good for development debugging... not so much for automated testing.
  • GUI testing - if we can emulate the proper screen resolution, would that be good enough? The X-automation package provides 'xgrep': take a portion of an image and say whether it is part of another image (a naive sub-image search is sketched after this list). This might be good for some Sugar/Journal stuff.
  • Code coverage: Will be important to get measurements for many of these harnesses. Start with features that crash or are sensitive. We also need memory-leak tools.
    1. Must work per component - so let's specify components to get started and begin collecting some numbers.
    2. Figleaf and Coverage.py are two possible tools (a usage sketch follows this list).
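
The minimal nose example referenced under the Headless Python Test Suite; the module and assertions are made-up placeholders, not real OLPC code:

  # test_example.py (hypothetical): nose collects any module, class, or
  # function whose name matches test_*; plain asserts are enough.
  def test_addition():
      assert 1 + 1 == 2

  def test_bundle_name_is_string():
      bundle_name = "Chat"  # placeholder for a real activity bundle lookup
      assert isinstance(bundle_name, str)

Run it with 'nosetests test_example.py'; nose finds the file by its test_ prefix and reports each failing assert.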
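
For the GUI-testing idea, the core operation is deciding whether a screenshot fragment appears inside a full screenshot. A naive pure-Python illustration using PIL follows; a real harness would use the X-automation tools themselves, and this brute-force scan is far too slow for production use:

  from PIL import Image

  def find_subimage(haystack_path, needle_path):
      """Return (x, y) of the first exact match of needle inside haystack,
      or None. Exact pixel equality only - no tolerance for anti-aliasing."""
      hay = Image.open(haystack_path).convert("RGB")
      needle = Image.open(needle_path).convert("RGB")
      hw, hh = hay.size
      nw, nh = needle.size
      needle_pixels = list(needle.getdata())
      for y in range(hh - nh + 1):
          for x in range(hw - nw + 1):
              region = hay.crop((x, y, x + nw, y + nh))
              if list(region.getdata()) == needle_pixels:
                  return (x, y)
      return None

  # e.g. find_subimage("screen.png", "journal_icon.png") -> (x, y) or None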
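
And a sketch of per-component coverage measurement using the Coverage.py API (the Coverage class is the modern interface; older releases used module-level start()/stop()). The component name 'sugar' and the test entry point are examples only:

  import coverage

  cov = coverage.Coverage(source=["sugar"])  # measure one component at a time
  cov.start()
  import run_all_tests   # placeholder: whatever entry point the harness uses
  run_all_tests.main()
  cov.stop()
  cov.save()
  cov.report()  # prints per-file statement coverage for the component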


BuildBot info:

  • Problem statement - many people have written test suites for their code, and their software relies on our distribution. How can we collect feedback about what is broken in the field?
  • There are lots of programs written in Python; developers want to make sure a new change or feature in Python doesn't break something else important out there.
  • At OLPC, we want to encourage activity developers to run some tests that feed information back to us. Similarly, we want to be sure a change we got from Fedora doesn't break something important for our partners.
  • Buildbot uses a master-slave architecture: the master farms out tasks, and the master and slaves need to agree on which tests run on each new slave. If tinderbox were a buildbot master, for example, then people could submit additions to the testing.

(You can see an example of this in [1].)

  • We could ask the Buildbot guys to dynamically collect the test cases from which to build.
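
A minimal sketch of what a Buildbot master configuration along these lines might look like (0.7-era API; the slave name, password, and test command are placeholders):

  # master.cfg (hypothetical): a master driving one headless test slave.
  from buildbot.buildslave import BuildSlave
  from buildbot.process import factory
  from buildbot.steps.shell import ShellCommand
  from buildbot.scheduler import Scheduler

  c = BuildmasterConfig = {}
  c['slaves'] = [BuildSlave("headless-slave", "password")]
  c['slavePortnum'] = 9989

  # One builder that runs the headless nose suite on the slave.
  f = factory.BuildFactory()
  f.addStep(ShellCommand(command=["nosetests", "tests/"]))

  c['builders'] = [{'name': 'headless-tests',
                    'slavename': 'headless-slave',
                    'builddir': 'headless',
                    'factory': f}]
  c['schedulers'] = [Scheduler(name="on-commit", branch=None,
                               treeStableTimer=60,
                               builderNames=["headless-tests"])]
  c['status'] = []  # add a WebStatus here to publish results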


One Laptop per Teacher (program):

  • Jim described a program they are doing in Peru, where each teacher gets a laptop. They would like to run Sugar on their non-XO laptops, probably using VMware.
  • We should make it possible for them to test, not do it for them.