"New CS 473": Grading Policies

If you have any questions or concerns, please ask in lecture, during office hours, on the course website, or by email.



Graded work


Regrade requests


Final course grades

We will determine final course grades as follows. (What do you expect from an algorithms course?)
  1. Compute raw totals from homework and exam scores, excluding extra credit. Course work is weighted as follows:

  2. Compute adjusted totals, which include extra credit points. Extra credit points are not necessarily worth the same as regular points.

  3. Remove outliers at both ends of the curve.

  4. Determine letter-grade cutoffs from the undergraduate raw totals. Outliers and graduate students are excluded from the cutoff computation to avoid unfairly skewing the curve for undergraduates. The mean is the center of the B range, and each standard deviation is worth 3/4 of a letter grade. For example, the B+/A– cutoff is 2/3 of a standard deviation above the mean, and the B–/C+ cutoff is 2/3 of a standard deviation below the mean. (See the sketch after this list.)

  5. Compute final letter grades (for non-outliers) from adjusted totals.

  6. Adjust grades upwards at the instructor's whim.
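
For concreteness, here is a minimal Python sketch of how cutoffs like those in step 4 could be computed. The function letter_cutoffs, the exact roster of plus/minus grades (no A+, with D– as the lowest passing grade), and the 72% ± 12% example numbers (borrowed from the "Typical" row below) are illustrative assumptions, not official policy.

    # Illustrative sketch only: the mean sits at the center of the B range,
    # and one standard deviation is worth 3/4 of a letter grade, so a full
    # letter grade spans 4/3 standard deviations and each plus/minus
    # subgrade spans a third of that (4/9 of a standard deviation).

    def letter_cutoffs(mean, stdev):
        """Return (label, minimum score) pairs, highest grade first.
        Anything below the D- minimum counts as an F."""
        step = (4 / 3) * stdev / 3   # width of one plus/minus subgrade
        labels = ["A", "A-", "B+", "B", "B-",
                  "C+", "C", "C-", "D+", "D", "D-"]
        center = labels.index("B")   # the B subgrade straddles the mean
        return [(label, mean + (center - i - 0.5) * step)
                for i, label in enumerate(labels)]

    # With the "typical" numbers from the table below (72% ± 12%), this
    # puts the B+/A- cutoff at 80% and the B-/C+ cutoff at 64%, i.e.
    # 2/3 of a standard deviation above and below the mean.
    for label, low in letter_cutoffs(72, 12):
        print(f"{label:>2}: {low:4.1f}%")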

Past grade distributions

This course has never been taught before, but for comparison, here are rough statistics from the last five times Jeff taught CS 473. (See below for a possible explanation for the significant jump in 2010.) In those offerings, the mean was at the C+/B– boundary, and each standard deviation was worth a full letter grade.

Semester      Mean ± stdev   Min pass   #As   #Bs   #Cs   #Ds   #Fs
Fall 2006     65% ± 11%      40%         25    26    23    13     5
Spring 2009   66% ± 13%      43%         21    25    26    14     2
Spring 2010   72% ± 12%      47%         24    34    35    16     3
Fall 2012     71% ± 13%      44%         36    51    42    33     2
Fall 2013     73% ± 11%      50%         49    58    55    19     2
Typical       72% ± 12%      47%        24%   30%   28%   15%    3%

(The "Typical" row reports grade counts as percentages of the class.)

Here are rough statistics from the previous offerings of CS 374. Again, the mean was at the C+/B– boundary, and each standard deviation was a full letter grade. The averages were significantly lower than in CS 473, but this reflects changes in grading standards (and the novelty of the course) more than the abilities of the students.

Semester      Mean ± stdev   Min pass   #As   #Bs   #Cs   #Ds   #Fs
Spring 2014   59% ± 11%      38%          8    11     8     8     1
Fall 2014     62% ± 12%      38%         16    22    22    12     0


Why I stopped dropping exam scores

I used to drop the lowest problem from each exam, but in practice, this policy had the counterintuitive effect of lowering students' performance, for behavioral rather than statistical reasons. Many students would just ignore one problem on each exam, but they often ignored the wrong problem.

In 2010, I switched to a policy of dropping the three lowest exam problems across the entire semester. With this new policy, the strategy of ignoring one problem on each exam virtually disappeared; surprisingly, this led to a significant improvement in overall averages! See the statistics above.

However, dropping the lowest exam scores actually lowers more grades than it raises, because of a more subtle statistical effect. Dropping scores raises everyone's average (as a percentage of the maximum possible score), which means it also raises the class mean. If you have mostly high scores with a few low outliers, dropping the low outliers raises your average more than it raises the mean. But if all your scores are about the same, dropping your lowest score raises your average less than it raises the mean, so your grade actually drops relative to the rest of the class.
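
To make the effect concrete, here is a tiny Python sketch with made-up scores: two hypothetical students both average 70% over five exams, one perfectly steady and one with a single low outlier.

    # Made-up scores illustrating why dropping the lowest score is unfair
    # to students with consistent performance.
    steady = [70, 70, 70, 70, 70]   # five exams, all the same score
    spiky  = [80, 80, 80, 80, 30]   # high scores with one low outlier

    def average(scores, drop_lowest=False):
        """Average score as a percentage of the maximum possible."""
        kept = sorted(scores)[1:] if drop_lowest else scores
        return sum(kept) / len(kept)

    for name, scores in [("steady", steady), ("spiky", spiky)]:
        print(f"{name}: {average(scores):.0f}% -> "
              f"{average(scores, drop_lowest=True):.0f}%")

    # Output:
    #   steady: 70% -> 70%
    #   spiky: 70% -> 80%
    # Dropping the lowest score leaves the steady student at 70% but lifts
    # the spiky student, and therefore the class mean, so the steady
    # student's curved grade drops even though none of their scores changed.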

So now we just keep it simple. Every exam grade counts.

I drop homework scores for a different reason — sometimes students get sick or overwhelmed, or they need to travel, but the class is too complex to reliably handle extensions. Homework scores are typically high enough, and their contribution to the final course grade low enough, to avoid any counterintuitive statistical effects. Or so I tell myself.