User:Bluefoxicy/AggressiveTestEngine

From OLPC
Revision as of 22:34, 10 October 2006 by Bluefoxicy (talk | contribs)

This describes an Aggressive Test Engine which adapts questions on tests in real time, based on the student's responses. The purpose of this engine is to place heavier weight on subtopics which the student does not understand well; it is hoped that this sort of testing will allow gifted students to expose themselves to harsher working conditions and accelerate their education.

Premise

An Aggressive Test Engine (ATE) is defined here to be a test engine which aggressively lowers the student's test score by selecting questions based on the questions given previously. When a student answers a question incorrectly, the test engine adds weight to the subtopic the question pertains to, increasing that subtopic's share of subsequent questions up to a set maximum.

It is hoped that an ATE can be used for the purpose of increasing the difficulty of tests without changing the questions. ATE tests demand that the student have full understanding of an entire topic as presented on the test; faltering on any question will increase the number of questions under that topic, exposing the student to a larger number of questions he is unlikely to truly understand. Scoring high on ATE tests requires a student to truly understand and be proficient with the widest range of tested subtopics possible.

ATE is not intended for standard tests. Standard education should use non-aggressive tests which do not change based on the student's performance.

Adaptivity

The test's Adaptivity is the way the test behaves based on student interaction.

Basis

The basis of the test's Adaptivity is that when a student gets a question wrong, the subtopic of that question is increased in weight. The number of questions on the test does not increase; therefore, the shares of the other subtopics must shrink proportionally.

To properly produce an ATE test, two things are required. First, a broad range of questions is needed, enough to produce large, unique tests and allow expansion of subtopics. Second, the tests must order subtopics randomly rather than in a structured manner, so as to allow questions in all subtopics to be encountered early in the test.

All subtopics start at a weight of 0, no questions yet having been answered incorrectly. When a student answers a question incorrectly, its subtopic has its weight increased by 1, making it more likely that a question from that subtopic will be selected next. Based on the relative weight, a mandatory minimum and maximum number of questions is selected, with a difference between them as small as 1 (i.e. 2 or 3; 3 or 4).
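The weighting rule above can be sketched as follows. This is a minimal illustration, not an implementation from the engine; the subtopic names and the convention that a weight of 0 maps to a base share of 1 (so unweighted subtopics remain selectable) are assumptions.

```python
import random

# All subtopics start at weight 0 (hypothetical example subtopics).
weights = {"fractions": 0, "decimals": 0, "percents": 0}

def record_answer(subtopic, correct):
    """An incorrect answer increases the subtopic's weight by 1."""
    if not correct:
        weights[subtopic] += 1

def pick_subtopic():
    """Select the next question's subtopic in proportion to weight.
    Assumption: weight 0 maps to a base share of 1 so every
    subtopic stays reachable."""
    names = list(weights)
    shares = [weights[n] + 1 for n in names]
    return random.choices(names, weights=shares, k=1)[0]

record_answer("fractions", correct=False)
record_answer("fractions", correct=False)
# "fractions" now carries weight 2, giving it a share of 3
# against 1 and 1 for the other subtopics.
```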

To further improve the quality of question selection, a mandatory distance and total distribution may be set up. Questions from the same subtopic must be at least, for example, ((T * 0.75) / W) positions away from each other and at most ((T * 1.25) / W) away, where (T) is the number of subtopics and (W) is the weight. This causes topics to be spaced out somewhat evenly, but still fairly randomly. This algorithm may also suffice to control the distribution of topics without calculating the number of questions per subtopic.
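The spacing bounds above reduce to a short calculation. A sketch, using the formula exactly as given, with the function name chosen for illustration:

```python
def spacing_bounds(num_subtopics, weight):
    """Minimum and maximum gap between two questions of the same
    subtopic, per the ((T * 0.75) / W) .. ((T * 1.25) / W) rule,
    where T is the number of subtopics and W the subtopic's weight."""
    min_gap = (num_subtopics * 0.75) / weight
    max_gap = (num_subtopics * 1.25) / weight
    return min_gap, max_gap
```

With 10 subtopics and a weight of 1, a subtopic's questions land roughly every 7.5 to 12.5 positions; doubling the weight halves both bounds, packing that subtopic's questions closer together.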

Aggressiveness

The ATE should allow tests to be run with adjustable Aggressiveness. An Aggressiveness value would control the weight added per incorrect answer, or whether weighting is applied at all.

A basic scale of 0 to 100 could be used for Aggressiveness. An Aggressiveness of 0 would not change the test at all in response to student answers, whereas an Aggressiveness of 100 would quickly flood the test with topics the student isn't expected to understand. This would allow tests to be adjusted based on how proficient the students are expected to be with the full topic.

Depending on the length of a test, a low Aggressiveness may have a minimal effect. On a 200-question exam with fewer than 10 subtopics, an Aggressiveness increasing weight by 0.05 per incorrect answer would guarantee an extra question only after 20 questions in a single subtopic were answered incorrectly, and merely suggest one otherwise. On a 50-question exam with 10 subtopics, an Aggressiveness that increases weight by 0.20 per incorrect answer would have the same effect. The Aggressiveness setting may be abstracted such that it scales the applied weight based on the number of questions and subtopics in an exam, rather than applying the raw Aggressiveness value directly.
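One plausible abstraction consistent with the two examples above is to scale the per-answer increment inversely with the number of questions per subtopic. This is an assumed formula, not one specified in the text; at Aggressiveness 100 it reproduces the 0.05 and 0.20 increments from the 200- and 50-question examples.

```python
def weight_increment(aggressiveness, num_questions, num_subtopics):
    """Hypothetical scaling of a 0-100 Aggressiveness setting into a
    per-incorrect-answer weight increment. Assumption: the increment
    is proportional to Aggressiveness and inversely proportional to
    the questions available per subtopic, so shorter exams apply
    larger increments."""
    if aggressiveness == 0:
        return 0.0  # Aggressiveness 0 never alters the test
    questions_per_subtopic = num_questions / num_subtopics
    return (aggressiveness / 100) / questions_per_subtopic
```

Here a 200-question, 10-subtopic exam at Aggressiveness 100 yields an increment of 0.05, and a 50-question, 10-subtopic exam yields 0.20, matching the worked examples.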

The Aggressiveness used is expected to depend on the education level. Standard education can likely use very low Aggressiveness, although using none at all would be most conservative. A Gifted level of education, where students are expected to display a rate of learning much higher than average, would use a higher Aggressiveness.

Side Effects

Because of the way the engine works, a number of side effects can be derived from it. The most interesting side effect is that the test accounts for student proficiency in subtopics. This accounting could be turned into a useful form of student feedback.

As stated earlier, the ATE would be configurable for variable Aggressiveness, where a level of 0 does not alter the test at all. At this level, the test would still be able to measure proficiency in subtopics. This would allow the test engine to approximate the student's strengths and weaknesses and enable students to focus on improving their weak points. Further, teachers could use the overall class performance to target and improve their teaching methods.