TestMtg, 2007-12-02
Attending: Kim, Alex, Scott, Michael, Bernie, Chris
Agenda:
1. Automated test harness(es): getting beyond tinderbox (some discussion on tinderbox is good, too!)
2. GUI testing
3. Activity Testing (Greg's project)
4. Integration, System, Performance, Scaling (everything else)
Automated testing ideas:
- Overarching goal is to have as much automated testing as possible to be able to provide 'reliable' builds.
- The second part of the goal is to address the fact that testing takes a long time (even on tinderbox with its old hardware).
- Also, a single test methodology can't catch everything; we need multiple test strategies:
  1. Tinderbox
  2. Functional tests
  3. GUI test tool (trac #5277)
  4. Headless tests (trac #5276)
     - Link checking the library (a possible check is sketched after this list)
     - Verifying that file systems are appropriate
     - Text blacklist, etc. (trac #5275)
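For the "link checking the library" item above, a headless check could be as simple as a script that walks the bundled HTML content and verifies that every local href or src target actually exists on disk. A minimal sketch for illustration; the content-root argument and the use of Python's standard HTMLParser are assumptions, not anything decided in the meeting:

 #!/usr/bin/env python
 # Sketch of a headless link check for bundled library content.
 # Assumption: the library is a tree of HTML files; only local links are checked.
 import os
 import sys
 from html.parser import HTMLParser
 from urllib.parse import urlparse, unquote
 
 class LinkCollector(HTMLParser):
     """Collect href/src attribute values from one HTML document."""
     def __init__(self):
         super().__init__()
         self.links = []
     def handle_starttag(self, tag, attrs):
         for name, value in attrs:
             if name in ('href', 'src') and value:
                 self.links.append(value)
 
 def check_tree(root):
     broken = []
     for dirpath, _dirnames, filenames in os.walk(root):
         for filename in filenames:
             if not filename.endswith(('.html', '.htm')):
                 continue
             page = os.path.join(dirpath, filename)
             parser = LinkCollector()
             with open(page, errors='replace') as f:
                 parser.feed(f.read())
             for link in parser.links:
                 parts = urlparse(link)
                 # Skip external, protocol-relative, and fragment-only links.
                 if parts.scheme or parts.netloc or not parts.path:
                     continue
                 target = os.path.normpath(
                     os.path.join(dirpath, unquote(parts.path)))
                 if not os.path.exists(target):
                     broken.append((page, link))
     return broken
 
 if __name__ == '__main__':
     root = sys.argv[1] if len(sys.argv) > 1 else '.'
     problems = check_tree(root)
     for page, link in problems:
         print('broken link in %s: %s' % (page, link))
     sys.exit(1 if problems else 0)

External (http:) links are skipped here; checking those would need network access, which a headless build-time check probably wants to avoid.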
Build process view (trac #5279):
1. Collect packages from Cogi; make sure there are SRPMs (a possible check is sketched after this list)
2. Pilgrim puts all the pieces together and generates change logs
3. Automated testing
4. Installation: this build only shows up on download.laptop.org
5. Official builds (signed)
6. Test stream, release candidates (signed)
7. Development stream (always unsigned)
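The "make sure there are SRPMs" step could also be automated. A rough sketch of such a check, assuming the collected binary and source packages end up in a single directory and that the rpm tool is available to read each package's SOURCERPM header; the directory layout is an assumption, not something specified in the meeting:

 #!/usr/bin/env python
 # Sketch: verify every collected binary RPM has its source RPM present.
 # Assumptions: all packages sit in one directory and 'rpm' is installed.
 import os
 import subprocess
 import sys
 
 def source_rpm_of(path):
     """Ask rpm which source RPM a binary package was built from."""
     out = subprocess.check_output(
         ['rpm', '-qp', '--qf', '%{SOURCERPM}', path])
     return out.decode().strip()
 
 def missing_srpms(package_dir):
     files = [f for f in os.listdir(package_dir) if f.endswith('.rpm')]
     present_srpms = {f for f in files if f.endswith('.src.rpm')}
     missing = []
     for f in files:
         if f.endswith('.src.rpm'):
             continue
         srpm = source_rpm_of(os.path.join(package_dir, f))
         if srpm not in present_srpms:
             missing.append((f, srpm))
     return missing
 
 if __name__ == '__main__':
     package_dir = sys.argv[1] if len(sys.argv) > 1 else '.'
     problems = missing_srpms(package_dir)
     for binary, srpm in problems:
         print('%s: expected source package %s not found' % (binary, srpm))
     sys.exit(1 if problems else 0)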
Need to get this all on a wiki page for discussion, review and reference for all developers and testers.
One goal of this process is that 'latest' releases (even joyride) should always compile and have passed the automated testing.
Automated test harnesses:
1. Tinderbox: checks whether a build installs, boots, connects to the wireless mesh, and runs activities; it also takes power measurements.
   - Action item: need Richard to create an MP unit for tinderbox to replace the old laptop there.
   - Action item: Chris will take the action to get tinderbox reporting again.
2. Qemu harness: runs all the activities in a window on your desktop.
   - Can we pull versions from the build infrastructure?
   - Can we do activity collaboration testing at this level?
3. Hydra: a test cluster of a small number of XOs connected via a serial connector. Michael Stone has created this harness.
4. Headless Python Test Suite: Scott is going to define this one.
5. Activity collaboration testing can be done without XOs.
6. 'Test Activity': manual tests that people can run and report into a central database.
7. GUI testing: if we can emulate the proper screen resolution, would that be good enough? The X-automation package provides 'xgrep', which can take a portion of an image and tell whether it is part of another image. This might be good for some Sugar/Journal stuff.
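The sub-image idea for GUI testing can be prototyped independently of the X-automation tools: given a screenshot and a small template (say, a Journal icon), report whether the template appears inside the screenshot. The sketch below is a naive exact-pixel search using Pillow; the library choice and the exact-match assumption (no scaling or anti-aliasing differences) are illustrative only.

 #!/usr/bin/env python
 # Naive sketch: decide whether one image appears as an exact sub-image of
 # another, e.g. a Sugar/Journal icon inside a captured screenshot.
 # Assumptions: Pillow is available and the template matches pixel-for-pixel.
 import sys
 from PIL import Image
 
 def contains(screenshot_path, template_path):
     big = Image.open(screenshot_path).convert('RGB')
     small = Image.open(template_path).convert('RGB')
     bw, bh = big.size
     sw, sh = small.size
     big_px = big.load()
     small_px = small.load()
     first_row = [small_px[x, 0] for x in range(sw)]
     for y in range(bh - sh + 1):
         for x in range(bw - sw + 1):
             # Cheap pre-check on the first row before comparing the rest.
             if [big_px[x + dx, y] for dx in range(sw)] != first_row:
                 continue
             if all(big_px[x + dx, y + dy] == small_px[dx, dy]
                    for dy in range(1, sh) for dx in range(sw)):
                 return (x, y)
     return None
 
 if __name__ == '__main__':
     hit = contains(sys.argv[1], sys.argv[2])
     if hit:
         print('template found at %d,%d' % hit)
         sys.exit(0)
     print('template not found')
     sys.exit(1)

For real Sugar/Journal screens, a tolerance-based comparison would likely be needed to cope with anti-aliased or themed UI elements.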