Test automation
Resources to aid automated testing of olpc software.
== olpc-xo-qemu ==
[[olpc-xo-qemu]] makes it easier to create and control QEMU emulated laptops. It can also control real XOs, with the same interface.
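As a hedged illustration of what "creating and controlling an emulated laptop" involves, here is a minimal sketch that drives plain QEMU directly through its monitor socket. The image path, memory size, and sleep times are placeholder assumptions, and this is not the olpc-xo-qemu interface itself.

<pre>
# Sketch only: boot a QEMU-emulated XO and control it via the QEMU monitor.
# "xo-image.img", the memory size, and the sleep times are placeholders;
# olpc-xo-qemu hides this kind of plumbing behind a friendlier interface.
import socket
import subprocess
import time

def start_emulated_xo(image="xo-image.img", monitor_port=4444):
    # -snapshot discards disk writes, so each run starts from a clean image.
    return subprocess.Popen([
        "qemu-system-i386", "-m", "256", "-snapshot", "-hda", image,
        "-monitor", "tcp:127.0.0.1:%d,server,nowait" % monitor_port,
    ])

def monitor_command(cmd, monitor_port=4444):
    # Send one monitor command, e.g. "screendump shot.ppm" or "quit".
    s = socket.create_connection(("127.0.0.1", monitor_port))
    s.sendall((cmd + "\n").encode())
    time.sleep(0.5)
    reply = s.recv(4096)
    s.close()
    return reply

if __name__ == "__main__":
    vm = start_emulated_xo()
    time.sleep(60)                          # rough guess at Sugar boot time
    monitor_command("screendump boot.ppm")  # screenshot via the monitor
    monitor_command("quit")
    vm.wait()
</pre>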
== X event scripting ==
See [[X Window System event scripting]].
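For a taste of what such scripting looks like, here is a small sketch that drives <tt>xte</tt> (from the xautomation package, one of the tools mentioned in the brainstorming notes below) from Python; the display name and coordinates are placeholders.

<pre>
# Sketch: generate synthetic X events with xte (xautomation package).
# DISPLAY ":1" and the coordinates are placeholders for whichever X server
# (Xephyr, the XO's own display, or QEMU's) the test is aimed at.
import os
import subprocess

def xte(*commands, display=":1"):
    # Each argument is one xte command string, run against the chosen display.
    env = dict(os.environ, DISPLAY=display)
    subprocess.check_call(["xte"] + list(commands), env=env)

# Click at (100, 200), then type a word and press Return.
xte("mousemove 100 200", "mouseclick 1")
xte("str hello", "key Return")
</pre>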
== OCR of screenshots ==
GOCR (see [[Optical character recognition]]) works fairly well with screenshots, at least in English. Recognition of text in [[Write]] is quite good. In Browse it is rather noisy. In [[Calculate]], results are noisy if you OCR the entire screen, but quite good if you crop out the number display field first.
So you can indeed use OCR for test automation, but for each use you need to check how well it works, and perhaps tweak it.
I expect non-English Latin alphabets could be handled as well, though I have not tried it. It might require switching to <tt>ocre</tt>. Arabic, Thai, Chinese, etc., are not feasible with open-source OCR. Russian might be, but it wouldn't be turn-key.
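As an illustration of the crop-then-recognize approach, here is a sketch that grabs the screen with ImageMagick's <tt>import</tt>, crops one region with <tt>convert</tt>, and runs <tt>gocr</tt> on it. The crop geometry stands in for wherever the field of interest (e.g. Calculate's number display) actually sits, which you would measure for your build.

<pre>
# Sketch: screenshot -> crop -> GOCR.  Needs ImageMagick (import, convert)
# and gocr installed; the geometry below is a placeholder, not the real
# position of Calculate's number display field.
import subprocess

def ocr_region(geometry="300x60+250+120", display=":1"):
    # Grab the whole screen, crop one region, and return GOCR's reading of it.
    subprocess.check_call(["import", "-display", display,
                           "-window", "root", "shot.png"])
    subprocess.check_call(["convert", "shot.png",
                           "-crop", geometry, "+repage", "region.pnm"])
    result = subprocess.run(["gocr", "region.pnm"],
                            capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(ocr_region())
</pre>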
== Old brainstorming of test harness architecture ==
[[User:MitchellNCharity|MitchellNCharity]] 14:01, 4 December 2007 (EST)
* overview
** Basically X event scripting and screenshot image analysis, plus emulated laptop management, orchestrated by python test control.
* x event generation
** tools
*** xmacro
*** xte
** x server access
*** Xephyr, trivial, :N
*** on xo, trivial, usual
*** on qemu, from outside, need port route (easy), and xo X listening to port (how?) and shell xauth story (what/which?)
** notes
*** may want to postprocess xmacro output into xte, for tuning and/or pythonization
**** at test time, to do relative location (eg, invite person on screen at x,y)
*** xmacro needs a slight pause between motion and click
**** might tweak the code to decrease threshold for creating a delay command, or hardwire a pause (but pause duration seems likely to be an interesting test parameter, so no)
* laptop management
** scenarios
*** xo-qemu managed emulated xo's
*** solitary xo
*** coordinating real xo's
**** master plus ssh'ed slaves?
** issues
*** host needs power and memory to run several qemus
**** my 1GB 2GHz only gets me 3ish, and that's with non-standard small memory.
* image analysis
** tools
*** search - visgrep
*** snapshot - import, et al
*** masking - ???
*** shapes - circle - mask out dynamic center of AP icon - ???
*** equality - md5sum
*** rough equality - ???
*** visual diff - ???
*** user mouse-selected region dump tool - ???
** notes
*** crushing to thresholded gray to deal with xo color variation.
*** gifs with known palettes to provide "give me foo in colors a,b".
* text from screenshot
** ocr doesn't seem an option
** cropping to obtain robustness in face of ui changes
** obtain text through means other than screenshot
*** gnome accessibility
*** watir
* harness architecture
** x event generators have few dependencies aside from X, so aggregates could be spun off as shell scripts
** python management of laptop, test runs, test results
** test selection is important
*** emulated laptops are very expensive to start
*** tests aren't fast either
** nose/unittest seems a plausible path. maybe.
* roles
** regression testing
** repetition for heisenbugs
*** needs additional capabilities - eg, log capture and manipulation
** support activity developers?
* test building blocks
** mouse - all mousable state
** image analysis
*** recognizing state (eg, home with frame up)
*** extracting key state (eg, mouse over text)
*** validating state (eg, compare with known good version)
*** determining dynamic parameters (eg, locating non-fixed things)
** scenarios
*** action/expectation chains (a rough sketch follows this outline)
** test creation tools
** issues
*** tracking build changes
* issues
** screen size in emulation
*** Q: is emulation really flaky in Xephyr, or were problems from wrong depth?
*** getting 1200x900 in qemu is currently low priority (link ticket, bernie)
*** pain to support multiple resolutions
* notes
** find mesh circles by number box
** find AP circles by sliver of bottom arc, plus perhaps verification of a surrounding masked region.
* current state
** was risk exploration. now architecture planning.
** what exists
*** xo-qemu kludge
*** python test code can recognize activity icons from frame, start activities by name, take snapshots.
*** visgrep can find xo's in neighborhood view
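To make the "action/expectation chains", "equality - md5sum", and "crushing to thresholded gray" ideas above concrete, here is a rough unittest-based sketch in the nose/unittest direction the notes mention; the display, coordinates, crop geometry, and reference checksum are all placeholders.

<pre>
# Rough sketch of an action/expectation chain in the unittest style the notes
# suggest.  Needs xte and ImageMagick; every coordinate, geometry, and checksum
# here is a placeholder to be replaced with measured, known-good values.
import hashlib
import os
import subprocess
import unittest

DISPLAY = ":1"   # Xephyr or emulated-XO display (placeholder)

def click(x, y):
    env = dict(os.environ, DISPLAY=DISPLAY)
    subprocess.check_call(["xte", "mousemove %d %d" % (x, y), "mouseclick 1"],
                          env=env)

def region_md5(geometry):
    # Screenshot, crop, crush to thresholded gray (to absorb XO color
    # variation), and hash the result.  PNM output avoids timestamp metadata.
    env = dict(os.environ, DISPLAY=DISPLAY)
    subprocess.check_call(["import", "-window", "root", "shot.png"], env=env)
    subprocess.check_call(["convert", "shot.png", "-crop", geometry, "+repage",
                           "-colorspace", "Gray", "-threshold", "50%",
                           "region.pnm"])
    with open("region.pnm", "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

class FrameTest(unittest.TestCase):
    def test_corner_brings_up_frame(self):
        click(0, 0)                            # hot corner (placeholder action)
        got = region_md5("200x200+0+0")        # placeholder geometry
        # Placeholder checksum of a known-good screenshot region.
        self.assertEqual(got, "0123456789abcdef0123456789abcdef")

if __name__ == "__main__":
    unittest.main()
</pre>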
== Resources ==
* For browser/web tests: FireWatir (http://code.google.com/p/firewatir/), a port of Watir (http://wtr.rubyforge.org/).