Content Management/Final Test Results
==Strategy & Plans==
To show that our system is free of known defects and behaves according to its use cases, we conducted a variety of test types. The tests we used were primarily for verification; that is, they check that the product works as specified, under the assumption that our product goals were correct. Our test strategy involves several components:
*Black Box Testing: Manual testing of the interface and the classes it exposes.
**This testing mode also involves positive and negative testing: common use cases were attempted first, followed by deliberate attempts to break the system.
*Regression Testing: During development, at each update, core functionality relating to the update is checked for bugs and consistency (a sketch of this kind of check follows this list).
*Integration Testing: As each component is combined into the whole, this testing ensures that everything works together.
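The following is a minimal sketch of the kind of automated check backing these regression passes: after each update, verify that the core pages still respond. It assumes Python with the third-party requests library; the base URL and page names are placeholders, not the actual paths of our system.

<pre>
import unittest
import requests

BASE_URL = "http://localhost/cms"  # placeholder for a local test deployment


class CoreSmokeTest(unittest.TestCase):
    """Regression-style checks of core functionality after each update."""

    def test_submit_form_is_reachable(self):
        # The content submission form should load without errors.
        response = requests.get(f"{BASE_URL}/submit.php", timeout=10)
        self.assertEqual(response.status_code, 200)

    def test_browse_page_lists_categories(self):
        # The browse page should load and mention categories (wording assumed).
        response = requests.get(f"{BASE_URL}/browse.php", timeout=10)
        self.assertEqual(response.status_code, 200)
        self.assertIn("categor", response.text.lower())

    def test_search_page_is_reachable(self):
        # The search page should load without errors.
        response = requests.get(f"{BASE_URL}/search.php", timeout=10)
        self.assertEqual(response.status_code, 200)


if __name__ == "__main__":
    unittest.main()
</pre>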
Stress and performance testing was not done. Our software will run on a central server with sufficient hardware for the scale of the project. Users accessing the system will only stress the delivery tools (web server, database) and not our software directly. The output generated by our software (and viewed with a browser) is simple enough for any reasonable browser to read (no AJAX or other advanced browser technology).
===Schedule===
The bulk of our testing occurred during our third (second-to-last) iteration. Unit testing and further manual testing are scheduled for the final iteration.
===Resources===
Our testing resources are the same as our development resources. Our own computers serve as the testing machines, since the application runs online and differing system configurations do not affect it. We ourselves were the testers. Time spent on testing was kept to the minimum needed to assure a quality product.
===Guidelines===
In addition to correctly performing the test types discussed above, we follow a few guidelines for dealing with errors discovered via testing. All bugs are documented in TRAC. At the end of each iteration, open bugs must not prevent the system's critical components from functioning.
==Quality Goals==
At minimum, our goal was to have the critical use cases for our program working: content should be submittable, browsable, and searchable. In the process of performing these actions, the user should not be presented with any unexpected errors. An unexpected error is one caused by a faulty system, rather than a message explaining a user error (for example, "You didn't fill in a required field"). Ideally, every error in every part of the system would be caught, but since a) our code is not 100% complete and b) testing every single possible path is infeasible, we cannot achieve this.
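To make the distinction concrete, here is a minimal sketch of a negative check for this quality goal: a deliberately invalid submission should produce a user-facing explanation rather than an unexpected error. The endpoint, field names, and expected wording are assumptions for illustration, not our actual interface.

<pre>
import requests

BASE_URL = "http://localhost/cms"  # placeholder for a local test deployment

# Deliberately leave the required title field empty.
response = requests.post(f"{BASE_URL}/submit.php", data={"title": "", "tags": "test"})

# A user error should come back as a normal page with an explanatory message,
# never as a server-side failure or a raw error dump.
assert response.status_code < 500, "unexpected server error"
assert "required" in response.text.lower(), "expected a message explaining the user error"
</pre>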
==Test Case Specifications==
===TC1: Submit Content===
'''Preconditions''': User must be logged in.

'''Sequence of Actions''':
*User types title, selects file, types tags, selects language, and selects category.
*User presses submit button.
*User is prompted to confirm submission.
*User presses confirm button.
*User is presented with thank you message.
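An automated, HTTP-level approximation of TC1 is sketched below. It assumes Python with the requests library and uses placeholder endpoints, credentials, and field names; the real forms may differ, so this illustrates the steps rather than our exact test code.

<pre>
import requests

BASE_URL = "http://localhost/cms"  # placeholder for a local test deployment

session = requests.Session()
# Precondition: user must be logged in (credentials are placeholders).
session.post(f"{BASE_URL}/login.php", data={"username": "tester", "password": "secret"})

# Steps 1-2: fill in the submission form and press submit.
form = {
    "title": "Sample Lecture Notes",
    "tags": "sample, notes",
    "language": "English",
    "category": "Mathematics",
}
upload = {"file": ("notes.pdf", b"%PDF-1.4 dummy content", "application/pdf")}
response = session.post(f"{BASE_URL}/submit.php", data=form, files=upload)
assert response.status_code == 200  # the confirmation prompt should appear

# Steps 3-4: confirm the submission (confirmation field name assumed).
response = session.post(f"{BASE_URL}/submit.php", data={**form, "confirm": "yes"})

# Step 5: the user is presented with a thank-you message.
assert "thank you" in response.text.lower()
</pre>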
===TC2: Browse Content===
'''Preconditions''': None

'''Sequence of Actions''':
*User clicks a category (repeat as needed).
*User clicks a content title.
*User is presented with content information page.
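The same steps can be approximated over HTTP as below; the browse URLs and the sample category and title are placeholders for whatever test data is actually loaded.

<pre>
import requests

BASE_URL = "http://localhost/cms"  # placeholder for a local test deployment

# Step 1: user clicks a category (repeat for subcategories as needed).
category_page = requests.get(f"{BASE_URL}/browse.php", params={"category": "Mathematics"})
assert category_page.status_code == 200
assert "Sample Lecture Notes" in category_page.text  # a title known to exist in test data

# Steps 2-3: user clicks a content title and lands on its information page.
content_page = requests.get(f"{BASE_URL}/content.php", params={"id": 1})
assert content_page.status_code == 200
assert "Sample Lecture Notes" in content_page.text
</pre>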
===TC3: Search Content===
'''Preconditions''': None

'''Sequence of Actions''':
*User types search terms.
*User presses search button.
*User is presented with search results.
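TC3 reduces to a single query round-trip; the search endpoint, parameter name, and expected wording below are assumptions for illustration.

<pre>
import requests

BASE_URL = "http://localhost/cms"  # placeholder for a local test deployment

# Steps 1-2: user types search terms and presses the search button.
results = requests.get(f"{BASE_URL}/search.php", params={"q": "lecture notes"})

# Step 3: user is presented with search results, with no unexpected error.
assert results.status_code == 200
assert "result" in results.text.lower()  # assumed wording on the results page
</pre>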
==Results==
<table>
<tr><th>Test Case</th><th>Paul</th><th>John</th><th>Mike</th><th>Jason</th></tr>
<tr><th>OS</th><th></th><th>Windows XP SP2</th><th></th><th>Ubuntu 7.10</th></tr>
<tr><th>Browser</th><th></th><th>Firefox 2.0.0.9<br>& IE 7.0</th><th></th><th>Firefox 2.0.0.9</th></tr>
<tr><th>Condition</th><td>Normal</td><td>Normal</td><td>Normal</td><td>Normal</td></tr>
<tr><th>TC1</th><td>Pass</td><td>Pass</td><td>Pass</td><td>Pass</td></tr>
<tr><th>TC2</th><td>Pass</td><td>Pass</td><td>Pass</td><td>Pass</td></tr>
<tr><th>TC3</th><td>Pass</td><td>Pass</td><td>Pass</td><td>Pass</td></tr>
</table>
= TEST PLAN (here for reference, ignore) =
==Outline==
*Table of contents
*Goals for test
*Basic Testing Strategies
*List of resources needed
**PC
**People
**Time
*Schedule
*Test case specification
**TC1:
***Precondition
***Sequence of events
***Postcondition
*Results
**Example: table
***Test case, tester 1, tester 2, tester 3, tester 4, tester 5
***PASS/FAIL
*Explain reason of failure