# Appendix B

### Further Discussion & Examples re: Adjusting Marks

**Example 1**

- A simple example. After marking, the instructor discovers a test
to have been unusually difficult, such that there are students spread
throughout the marks range but the scores of even the best performers
do not reach beyond 80%. Looking at the test, the instructor sees that
there were difficult questions to test the finer points of the material
but not enough questions to test more basic knowledge. The instructor
**adds X% to all students’ scores**, on the rationale that every student would have been able to achieve roughly this many more points if the questions had given them the opportunity to show more of the basic knowledge they had acquired. **Comment:** This simple method assumes that the poorest students would have achieved the same number of additional points as the best students, since the material not tested well was the most basic. This may not be precisely true of those at the very bottom of the failure range, but the imprecision has no real significance, since those students are merely moved around within the lower failure range.
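The flat adjustment described above can be sketched as follows. The function name, the value of X, and the sample scores are illustrative assumptions, not taken from the text; the cap at 100% simply prevents an adjusted mark from exceeding the maximum possible score.

```python
def add_flat(scores, x):
    """Add x percentage points to every raw score, capping at 100."""
    return [min(score + x, 100) for score in scores]

# Illustrative raw scores from an unusually difficult test.
raw = [35, 48, 62, 71, 79]
print(add_flat(raw, 10))  # every student gains the same 10 points
```

Note that the adjustment is absolute: a student at 35% and a student at 79% each gain exactly 10 points.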

**Example 2**

- Another simple example. A test is discovered after marking to
have been unusually easy, such that there are students spread through
the marks range but the scores of even the poorest performers are
beyond the 50% threshold, when quizzes and assignments have shown no
unusually strong grasp of the material by all. The instructor
**subtracts X% from all students’ scores**, on the rationale that every student would have achieved roughly this many fewer points if the questions had not been so heavily weighted toward truly basic material. **Comment:** This method assumes that the poorest students obtained the same number of points on the easier questions as the best students, since there were more of those simple questions on the test. This may not be precisely true of those at the very bottom of the class – those who didn’t study much at all, for example – but again the imprecision has no real significance.
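The downward counterpart is a mirror image of the upward flat adjustment. This sketch uses assumed scores and an assumed X; flooring at 0 guards against a negative mark for very low scorers.

```python
def subtract_flat(scores, x):
    """Subtract x percentage points from every raw score, flooring at 0."""
    return [max(score - x, 0) for score in scores]

# Illustrative raw scores from an unusually easy test.
raw = [55, 68, 77, 89, 98]
print(subtract_flat(raw, 8))  # every student loses the same 8 points
```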

**Example 3**

- The instructor finds the results look as they do in Example 1,
but upon reviewing the test finds that she has simply made the questions
in all ranges of difficulty more challenging than intended. The
instructor
**adjusts by adding X% of each student’s score to that score** (i.e., multiplies each score by some percentage greater than 100% to calibrate upwards, or by less than 100% to calibrate downwards). By multiplying rather than adding or subtracting a fixed amount, the instructor recalibrates the scale proportionately rather than absolutely. The assumption behind calibrating upwards in this instance is that the more able students would have been able to score better on more of the questions than the weaker students, so the resulting additional points should be proportionally greater rather than absolutely the same. **Comment:** This method works well when scaling scores up, but some instructors are reluctant to use it to scale scores down, since it takes away more from abler students than from weaker ones.
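The proportional recalibration can be sketched as a single multiplication; the factor and sample scores here are illustrative assumptions. A factor above 1 scales up, below 1 scales down, and the cap at 100 again prevents overshoot.

```python
def scale_proportionally(scores, factor):
    """Multiply each score by factor (>1 calibrates up, <1 down), capped at 100."""
    return [min(round(score * factor, 1), 100) for score in scores]

raw = [40, 60, 80]
print(scale_proportionally(raw, 1.10))  # a 10% proportional boost
```

In contrast to the flat adjustment, the student at 80 gains 8 points while the student at 40 gains only 4, which is exactly the differential effect the example describes.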

**Example 4**

- A more complex model. The instructor decides on a “floor” mark, on the basis of relevant factors such as (but not necessarily limited to) the instructor’s post-marking assessment of the difficulty of the assignment for the students. A student whose raw score is below the floor mark is assigned the floor mark for the test/exercise, other things being equal. (Other things might not be equal, as when the student did not answer any of the questions.)
**Comment:** The instructor will need a rationale for setting a floor mark. One possible rationale would be to try to minimize (or at least reduce) the likelihood of there being students in the class who get a mark so low that they become discouraged and drop the course. This rationale might be appropriate in a course that aims to teach certain skills – skills that, in the instructor’s experience, most students develop over time and with repeated practice. On the other hand, the floor mark should not be so high that it gives a student who receives it little or no incentive to do better; other things being equal, it should be one that most students would not be content with.
- The floor-mark method illustrated here benefits only students whose raw scores fall below the floor mark. Does this make the method unfair? A reason to think it doesn’t is that it applies to *any* student in the class whose raw score is below the floor mark; it’s a protection available to *all* the students in the class – even an ‘A’ student can have an off-day. (It’s worth noting in this connection that a curved grading system can have the effect of establishing a floor grade. Such a system might assign a grade of A to the top x% of the class (as determined by raw scores), a grade of B to the next y%, and a grade of C to the remaining percentage of the class. A grade of C would be the floor grade.)
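The floor-mark procedure can be sketched as below. The `attempted` flag is a hypothetical way of modelling the “other things being equal” caveat (e.g., a student who answered no questions keeps the raw score); the floor value and scores are illustrative.

```python
def apply_floor(scores, floor, attempted=None):
    """Raise any raw score below `floor` to the floor mark.

    `attempted` optionally marks students who made a genuine attempt;
    a blank paper (attempted=False) keeps its raw score.
    """
    if attempted is None:
        attempted = [True] * len(scores)
    return [floor if (a and s < floor) else s
            for s, a in zip(scores, attempted)]

raw = [0, 42, 55, 61, 88]
# The first student answered nothing, so the floor does not apply.
print(apply_floor(raw, 60, attempted=[False, True, True, True, True]))
```

Only the two students below 60 (who attempted the test) are raised; everyone at or above the floor is untouched.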

**Example 5**

- The same procedure as in Example 4, plus the following steps. The instructor calculates the average score increase for those students whose raw scores are raised to the floor mark and then raises the raw scores of the other students (i.e., those students whose raw scores are equal to or higher than the floor mark) by a percentage of that average, or by different percentages of that average for different ranges of raw scores. For example, suppose that the floor mark is 60% and that the average score increase for those students whose raw scores are raised to the floor mark is 4%. The instructor might raise raw scores in the 60–69% range by x% of 4%, raw scores in the 70–79% range by y% of 4%, and raw scores in the 80–100% range by z% of 4%, provided that the increase does not raise the student’s mark above 100%.
**Comment:** In contrast to Example 4, the method illustrated in Example 5 benefits (by awarding a score increase) students with raw scores at or above the floor mark, as well as those students with raw scores below the floor mark who are assigned the floor mark.
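The two-stage procedure of Example 5 can be sketched as follows. The tier fractions (the x%, y%, z% of the text) and the sample scores are illustrative assumptions; with a floor of 60 and two floored students gaining 6 and 2 points, the average increase works out to 4, matching the worked figure in the example.

```python
def tiered_raise(scores, floor, tier_pcts):
    """Floor low scores, then raise the rest by a share of the average
    increase awarded to the floored students.

    tier_pcts maps (low, high) raw-score ranges to a fraction of the
    average floored-student increase.
    """
    increases = [floor - s for s in scores if s < floor]
    avg_increase = sum(increases) / len(increases) if increases else 0
    adjusted = []
    for s in scores:
        if s < floor:
            adjusted.append(floor)
        else:
            for (low, high), pct in tier_pcts.items():
                if low <= s <= high:
                    adjusted.append(min(s + pct * avg_increase, 100))
                    break
    return adjusted

raw = [54, 58, 64, 75, 92]
tiers = {(60, 69): 1.0, (70, 79): 0.75, (80, 100): 0.5}
print(tiered_raise(raw, 60, tiers))
```

Here the floored students (54 and 58) are raised to 60, and students at or above the floor receive a diminishing share of the 4-point average increase, so every student in the class benefits, as the comment notes.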