Content Management/Final Test Results
Strategy & Plans
To show that our system is free of known bugs and behaves according to its use cases, we conducted a variety of test types. The tests we used were primarily for verification; that is, they check that the product works as specified, under the assumption that our product goals were correct. Our test strategy involves several components:
- Black Box Testing: Manual testing of the interface and the classes it exposes.
- This testing mode also involves positive and negative testing: common use cases were attempted first, followed by deliberate attempts to break the system.
- Regression Testing: During development, at each update, the core functionality related to the update is checked for bugs and consistency.
- Integration Testing: As each component is combined into the whole, this test ensures that everything works together.
Stress and performance testing were not performed. Our software will run on a central server with hardware sufficient for the scale of the project. Users accessing the system stress only the delivery tools (web server, database), not our software directly. The output generated by our software (and viewed with a browser) is simple enough for any reasonable browser to render (no AJAX or other advanced browser technology).
Schedule
The bulk of our testing occurred during our 3rd (second-to-last) iteration. Our final iteration has unit testing and further manual testing scheduled.
Resources
Our testing resources are the same as our development resources. Because the application runs online, differing system configurations do not affect it, so our own computers served as the testing machines and we ourselves were the testers. Time spent testing was the minimum required to assure a quality product.
Guidelines
In addition to correctly performing the test types discussed above, there are a few guidelines for dealing with errors discovered via testing. All bugs are documented via TRAC. At the end of each iteration, outstanding bugs must not prevent the system's critical components from functioning.
Quality Goals
At minimum, our goal was to have the critical use cases for our program working. This means that content should be submittable, browsable, and searchable. In the process of performing these actions, the user should not be presented with any unexpected errors. An unexpected error is one caused by a faulty system, rather than a message explaining a user error (example: "You didn't fill in a required field"). Ideally, all errors in every part of the system would be caught, but since a) our code is not 100% complete and b) testing every single possible path is infeasible, we cannot achieve this.
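The distinction above between a user error (which should produce a friendly message) and an unexpected error (a system fault) can be sketched as follows. This is a minimal illustration only; the field names and functions are hypothetical, not our actual implementation.

```python
# Hypothetical sketch: expected (user) errors produce messages,
# while anything else would surface as an unexpected system error.

REQUIRED_FIELDS = ("title", "file", "category")

def validate_submission(form: dict) -> list[str]:
    """Return user-facing messages for missing fields (expected errors)."""
    return [f"You didn't fill in a required field: {name}"
            for name in REQUIRED_FIELDS if not form.get(name)]

def handle_submission(form: dict) -> str:
    errors = validate_submission(form)
    if errors:
        # Expected error: show the user a message, never a stack trace.
        return "\n".join(errors)
    return "Thank you for your submission."
```

Anything that escapes this validation layer (e.g. an uncaught exception) would count as an unexpected error under the definition above.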
Test Case Specifications
TC1: Submit Content
Preconditions: User must be logged in.
Sequence of Actions:
- User types title, selects file, types tags, selects language, and selects category.
- User presses submit button.
- User is prompted to confirm submission.
- User presses confirm button.
- User is presented with thank you message.
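Although TC1 was executed manually, the step sequence above could be scripted along these lines. The `SubmitPage` class and its methods are hypothetical stand-ins for the real web form, shown only to make the expected flow concrete.

```python
# Hypothetical sketch of TC1 as a scripted check: fill the form,
# submit, confirm, and expect the thank-you message.

class SubmitPage:
    def __init__(self):
        self.fields = {}
        self.state = "editing"

    def fill(self, **fields):
        self.fields.update(fields)

    def submit(self):
        # Pressing submit brings up the confirmation prompt.
        self.state = "confirming"

    def confirm(self):
        # Confirming completes the submission and shows the thank-you page.
        if self.state == "confirming":
            self.state = "thanked"
        return "Thank you" if self.state == "thanked" else "Error"

def run_tc1():
    page = SubmitPage()
    page.fill(title="Intro to Widgets", file="widgets.pdf",
              tags="widgets, intro", language="English", category="Tutorials")
    page.submit()
    return page.confirm()
```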
TC2: Browse Content
Preconditions: None
Sequence of Actions:
- User clicks a category.
TC3: Search Content
Preconditions: None
Sequence of Actions:
- User types search terms.
- User presses search button.
- User is presented with search results.
Results
Tester | Paul | John | Mike | Jason |
---|---|---|---|---|
OS | Windows XP SP2 | Windows XP SP2 | Windows Vista | Ubuntu 7.10 |
Browser | Firefox 2.0.0.11 | Firefox 2.0.0.9 & IE 7.0 | Firefox 2.0.0.11 | Firefox 2.0.0.9 |
Condition | Normal | Normal | Normal | Normal |
TC1 | Pass | Pass | Pass | Pass |
TC2 | Pass | Pass | Pass | Pass |
TC3 | Pass | Pass | Pass | Pass |