CS/ECE 374 A: Meta-Policies

This document summarizes my justifications for some recent course policy changes.


Why I stopped grading on a curve

— Stay tuned —

Why I stopped giving 25% for “I don't know”

After 20 years of awarding 25% partial credit for answering any problem with "I don't know", I was convinced to end this practice in Spring 2020. The 25% partial credit policy was intended to reward self-awareness; I still believe that truthfully admitting ignorance indicates more mastery of the material than submitting a completely incorrect answer.

However, in practice, the IDK partial credit seems to undermine some students' confidence, and it introduces a distracting element of gamesmanship. Students who lack confidence in their abilities often submit IDK rather than risk getting even less than 25% partial credit, even when they could in fact produce a solution worth more than 25%. Other students spend far too much mental energy trying to guess whether their partial solution is worth more or less than IDK; I've heard several complaints that the IDK policy discourages students from "trying" or "guessing". So at least for these students, IDK credit shifts the focus of homework and exam questions toward risk-avoidance or grade optimization, and away from the actual course material.

Over the past few years, I've also observed an increasing tendency among struggling students to use IDKs in a way that is clearly self-destructive, such as answering IDK to a majority of the final exam questions, meeting the minimum homework requirement by submitting 16 IDKs, or ignoring the fact that submitting IDK for any homework problem is actually worse for your overall grade than submitting nothing at all.

In particular, there is some evidence that the IDK policy unfairly impacts female and underrepresented minority students.

So no more partial "I don't know" credit. We want students to make a good-faith effort to solve every problem, especially on exams.


Why I require hand-written cheat sheets

I used to allow printed/photocopied cheat sheets, but around 2015, I started seeing a significant increase in students who were including reduced photocopies of complete homework solutions, lecture notes, and/or old homework/lab/exam solutions, or just photocopying cheat sheets created by other students. The most extreme example I ever saw consisted of 144 pages of notes and solutions — four 6×6 grids of reduced pages, one on each side of each sheet of paper — none of which was actually readable by normal human eyes.

Cheat sheets are far more useful as a study technique than as an in-exam reference. Creating them forces students to filter the relevant course material down to a one- or two-page summary of essential ideas. More poetically, writing the cheat sheet gets the key ideas out of your notes and into your fingers. (This isn't the reason we don't return cheat sheets to students after we grade exams — that's more of a logistical nightmare — but it is why I don't feel too guilty about it.) Cheat sheets also help avoid memorization, but that's only a side benefit; memorization isn't particularly helpful in this class. So, not surprisingly, students who shared or photocopied cheat sheets performed significantly worse than students who wrote their own cheat sheets by hand.


Why I stopped dropping exam scores

I used to drop the lowest problem from each exam, but this policy had the counterintuitive effect of lowering students' performance, for behavioral rather than statistical reasons. Many students would just ignore one problem on each exam, but they often ignored the wrong problem.

In 2010, I switched to a policy of dropping the lowest three exam problems across the entire semester. With this new policy, the strategy of ignoring one problem on each exam virtually disappeared; surprisingly, this led to a significant improvement in overall averages!

However, dropping the lowest exam scores actually lowers more grades than it raises, because of more subtle statistical effects. Dropping scores raises everyone's average (as a percentage of the maximum possible score), which means it also raises the class mean. If you have mostly high scores with a few low outliers, dropping the low outliers raises your average substantially. But if all your scores are about the same, dropping your lowest score barely changes your average while the class mean still rises, so your average falls relative to the rest of the class, which lowers your final grade. (Historically, this class has been graded on a curve.)
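To make the effect concrete, here is a minimal sketch in Python, using made-up scores for two hypothetical students, that compares each student's average to the class mean with and without dropping the lowest score. The two-student "class" and all the numbers are purely illustrative.

    # Hypothetical illustration (made-up numbers) of how dropping the lowest
    # exam score interacts with curve-style grading, where what matters is
    # each student's average relative to the class mean.

    def average(scores, drop_lowest=0):
        """Average after discarding the given number of lowest scores."""
        kept = sorted(scores)[drop_lowest:]
        return sum(kept) / len(kept)

    # Two hypothetical students, each scored out of 10 on six exam problems.
    steady = [7, 7, 7, 7, 7, 7]   # consistent scores, no outliers
    spiky  = [8, 8, 8, 8, 8, 2]   # mostly high scores with one disaster

    for drop in (0, 1):
        s, p = average(steady, drop), average(spiky, drop)
        mean = (s + p) / 2
        print(f"drop {drop}: steady {s:.2f} ({s - mean:+.2f} vs mean), "
              f"spiky {p:.2f} ({p - mean:+.2f} vs mean)")

    # drop 0: steady 7.00 (+0.00 vs mean), spiky 7.00 (+0.00 vs mean)
    # drop 1: steady 7.00 (-0.50 vs mean), spiky 8.00 (+0.50 vs mean)

Without the drop, both students sit exactly at the mean. With the drop, the steady student's raw average does not change, but the class mean rises, so the steady student ends up below it.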

So now I just keep it simple. Every exam grade counts.

I drop homework scores for a different reason — sometimes students get sick or overwhelmed, or they need to travel, but CS 374 is too big and complicated to reliably deal with extensions. Homework scores are typically high enough, and their contribution to the final course grade small enough, to avoid the counterintuitive statistical effects described above. Or so I tell myself. (For example, 75% of the students in Spring 2017 CS 473 had a homework average of 89% or better, and 90% had a homework average of 75% or better.) Hopefully this flexibility helps students concentrate on using the homework for its intended purpose — practice with and feedback on the course material — and not worry so much about their homework grades.