
Everett, L.J., Alexander, R.M. & Wienen, M. (1999). A grading method that promotes competency and values broadly talented students. Journal of Engineering Education, 88(4), 477-483.

The authors describe an assessment regime used in an engineering science course, designed to reward students for the skills valued in engineers. The regime was run three times in consecutive semesters, and the authors feel it worked well, although there were a few challenges. Four types of assessment were used.
Readiness assessment tests: These encourage students to come to class prepared, by setting one or two questions based on a reading assignment before class. They seem to have been carried out twice a week, but I could see them being almost daily. It was tricky to set questions which genuinely rewarded reading and understanding. After the course, a significant correlation was found between preparation for class and success in the course.
Basic understanding tests: These were conceptual in nature, testing understanding of physical phenomena. They involved no mathematics. These were held once per week.
Major evening examinations: These were also conceptual in nature and bore the closest resemblance of all the assessments to traditional exams. The questions were tough and required engineering-type skills such as simplification. The problems were also frequently ill-defined, and there could be a variety of valid solutions. There were three or four of these per semester.
Minimum skills tests: Again, these were held once a week and were made up of simpler versions of the previous week's homework. They were multiple choice, which meant no partial credit.

The authors argue for these assessments as criterion-referenced, and against the use of norm-referenced assessment. I found their insistence on no partial credit interesting. The paper presents an analysis of the results, as well as a comparison with what the grades would have looked like if only the major evening examinations (the closest to traditional exams) had been used. Various challenges are discussed, such as the difficulty of establishing validity and reliability, and the students' struggles with this new type of assessment and the expectations placed on them.

Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understandings and opinions of Torquetum only, and could contain errors, misunderstandings, and subjective views.