TestMtg, 2007-12-02
Attending: Kim, Alex, Scott, Michael, Bernie, Chris, Jim, Adam
Agenda:
- Automated test harness(es): getting beyond tinderbox (some discussion on tinderbox is good, too!)
- GUI testing
- Activity Testing (Greg's project)
- Integration, System, Performance, Scaling (everything else)
Automated testing ideas:
- Overarching goal: have as much automated testing as possible so we can provide 'reliable' builds.
- The second part of the goal is to address how long testing takes (even Tinderbox runs long on old hardware).
- Also, a single test methodology can't catch everything; we need multiple test strategies:
- Tinderbox
- Functional tests
- GUI test tool (trac #5277)
- Headless tests (trac #5276)
- Link checking the library (see the link-checker sketch after this list)
- verifying file systems are appropriate
- text blacklist, etc. (trac #5275)
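As a starting point for the library link check, here is a minimal sketch in Python; the HTML-scraping regex, file paths, and report format are illustrative assumptions rather than an existing OLPC tool.

 # Minimal link-checker sketch; scans HTML files for http(s) links and
 # reports any that fail to load. Regex and output format are illustrative.
 import re
 import sys
 import urllib.request

 LINK_RE = re.compile(r'href="(https?://[^"]+)"')

 def check_links(html_path):
     """Return (url, error) pairs for links in one HTML file that fail to load."""
     broken = []
     with open(html_path, encoding="utf-8", errors="replace") as f:
         html = f.read()
     for url in sorted(set(LINK_RE.findall(html))):
         try:
             urllib.request.urlopen(url, timeout=10)
         except Exception as exc:
             broken.append((url, str(exc)))
     return broken

 if __name__ == "__main__":
     for path in sys.argv[1:]:
         for url, error in check_links(path):
             print("%s: BROKEN %s (%s)" % (path, url, error))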
Build process view (trac #5279):
- Collect packages from Koji; make sure there are SRPMs
- Pilgrim puts all the pieces together and generates change logs
- Automated testing
- Installation: this build shows up only on download.laptop.org
- Official builds (signed)
- Test stream, release candidates (signed)
- Development stream (always unsigned)
Need to get this all on a wiki page for discussion, review and reference for all developers and testers.
One goal of this process is that 'latest' releases (even joyride) should always compile and have passed the automated testing.
Test harnesses:
- Tinderbox: checks whether a build installs, boots, connects to the wireless mesh, and runs activities, and takes power measurements
- Action item: need Richard to set up an MP (mass-production) unit for Tinderbox to replace the old laptop there.
- Action item: Chris will get Tinderbox reporting again.
- Headless Python Test Suite: Scott is going to define this one; target 'easy' things that don't require XOs. Big low-hanging fruit.
- First: Infrastructure to tie builds and testing (build or test slaves).
- This harness might be a good one for gathering Figleaf-style coverage measurements.
- Provide info to developers on how to create nose tests (see the nose sketch after this list).
- Qemu harness: runs all the activities in a window on your desktop (see the QEMU launch sketch after this list)
- Can we pull versions from the build infrastructure?
- Can we do activity collaboration testing at this level?
- Activity collaboration testing can be done without XOs
- 'Test Activity' - Manual tests that people can run and report into a central database.
- Hydra: a test cluster of a small number of XOs connected via serial connector. Michael Stone has created this harness. It has been good for development debugging, but not so much for automated testing.
- GUI testing: if we can emulate the proper screen resolution, would that be good enough? The xautomation package provides 'xgrep', which takes a portion of an image and says whether it is part of another image. This might be good for some Sugar/Journal stuff (see the image-matching sketch after this list).
- Code coverage: it will be important to get measurements for many of these harnesses. Focus first on features that crash or are sensitive. We also need memory-leak tools.
- Must work per component, so let's specify components to get started and begin collecting some numbers.
- Figleaf and coverage.py are two possible tools (see the coverage sketch after this list).
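For the headless Python suite, nose discovers any function whose name starts with test_, so developers only need to drop small modules like the following next to their code. This example is hypothetical; parse_resolution is a made-up function standing in for real Sugar code.

 # test_resolution.py -- minimal nose-style test module (hypothetical example;
 # nose discovers any function whose name starts with "test_").
 #
 # Run with:  nosetests test_resolution.py

 def parse_resolution(text):
     """Toy function under test: parse '1200x900' into a (width, height) tuple."""
     width, height = text.lower().split("x")
     return int(width), int(height)

 def test_parse_resolution():
     assert parse_resolution("1200x900") == (1200, 900)

 def test_parse_resolution_is_case_insensitive():
     assert parse_resolution("1200X900") == (1200, 900)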
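A rough sketch of how the QEMU harness could be driven from Python. The qemu binary name, image filename, and memory size are assumptions; adjust them for the local qemu build and the image being tested.

 # Rough sketch: boot a development image under QEMU from Python
 # (qemu binary name, image filename, and memory size are assumptions).
 import subprocess

 def boot_image(image_path, memory_mb=256, qemu="qemu-system-x86_64"):
     """Launch QEMU with the given disk image and wait for it to exit."""
     cmd = [qemu, "-hda", image_path, "-m", str(memory_mb)]
     return subprocess.call(cmd)

 if __name__ == "__main__":
     boot_image("joyride-devel.img")  # hypothetical image filename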
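The "is this small image part of that screenshot?" check can also be prototyped in plain Python with Pillow while we evaluate the xautomation tools. This sketch does an exact pixel match only (no tolerance for rendering differences), and the filenames are placeholders.

 # Sketch of the "is image A contained in image B?" check using Pillow
 # (exact pixel match only; slow, but enough to illustrate the idea).
 from PIL import Image

 def contains(screenshot_path, fragment_path):
     """Return (x, y) where the fragment appears in the screenshot, or None."""
     big = Image.open(screenshot_path).convert("RGB")
     small = Image.open(fragment_path).convert("RGB")
     bw, bh = big.size
     sw, sh = small.size
     small_data = list(small.getdata())
     for y in range(bh - sh + 1):
         for x in range(bw - sw + 1):
             region = big.crop((x, y, x + sw, y + sh))
             if list(region.getdata()) == small_data:
                 return (x, y)
     return None

 if __name__ == "__main__":
     print(contains("screenshot.png", "journal-icon.png"))  # placeholder filenames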
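For per-component numbers, coverage.py can be driven from a small script like the one below; the test module and package names are placeholders, not real Sugar components.

 # Sketch: gather per-component coverage numbers with coverage.py
 # (test module and source package names below are placeholders).
 import coverage
 import unittest

 def run_with_coverage(test_module_names, source_packages):
     cov = coverage.Coverage(source=source_packages)
     cov.start()
     # Import and run the component's tests while coverage is recording.
     suite = unittest.defaultTestLoader.loadTestsFromNames(test_module_names)
     unittest.TextTestRunner(verbosity=1).run(suite)
     cov.stop()
     cov.save()
     cov.report()  # prints per-file statement coverage

 if __name__ == "__main__":
     run_with_coverage(["tests.test_datastore"], ["datastore"])  # placeholders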
BuildBot info:
- Problem statement: many people have written test suites for their code, and their software relies on our distribution. How can we collect feedback about what is broken in the field?
- There are lots of programs written in Python; developers want to make sure a new change or feature to Python doesn't break something else important out there.
- At OLPC, we want to encourage activity developers to run tests that feed information back to us. Similarly, we want to know that a change we pick up from Fedora doesn't break something important for our partners.
- Buildbot uses a master-slave architecture: the master farms out tasks, and the master and slaves need to agree on which tests a new slave should run. If Tinderbox were a Buildbot master, for example, people could submit additions to the testing (see the master.cfg sketch after this list).
(You can see an example of this in [1].)
- We could ask the Buildbot developers about dynamically collecting the test cases to build and run.
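A Buildbot master is configured from a Python file (master.cfg). The sketch below shows roughly what a master that farms one test run out to a single slave might look like, using the current buildbot.plugins API (which now calls slaves "workers"); the worker name, password, repository URL, and test command are all placeholders.

 # master.cfg sketch for a Buildbot master that farms a test run out to one
 # worker ("slave" in 2007 terminology). Names, URLs, and commands are placeholders.
 from buildbot.plugins import schedulers, steps, util, worker

 c = BuildmasterConfig = {}
 c['title'] = "OLPC test farm (sketch)"
 c['buildbotURL'] = "http://localhost:8010/"

 # Machines the master can farm work out to.
 c['workers'] = [worker.Worker("xo-test-worker", "password")]
 c['protocols'] = {'pb': {'port': 9989}}

 # One builder: check out an activity and run its headless test suite.
 factory = util.BuildFactory()
 factory.addStep(steps.Git(repourl="https://example.org/some-activity.git"))
 factory.addStep(steps.ShellCommand(command=["nosetests"]))

 c['builders'] = [
     util.BuilderConfig(name="activity-tests",
                        workernames=["xo-test-worker"],
                        factory=factory),
 ]

 # Let people trigger the builder by hand while the scheduling story is worked out.
 c['schedulers'] = [
     schedulers.ForceScheduler(name="force", builderNames=["activity-tests"]),
 ]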
One Laptop per Teacher (program):
- Jim described a program they are doing in Peru where each teacher gets a laptop. They would like to run Sugar on their non-XO laptops, probably using VMware.
- We should make it possible for them to test, not do it for them.