More than a decade ago, I found myself newly minted as a department chair.  When it came time to do annual evaluations of faculty (all of whom were more senior than I), I sweated bullets over it.  I tried to systematize the process as much as possible in the name of objectivity.  Whether I succeeded is questionable, but at least I survived without a mutiny.  One of the indicators I looked at was grades assigned versus evaluations received.  I made a scatterplot of the two variables and looked for outliers.  This had to be done by hand, laboriously typing in numbers from printed pages.
Flash forward to the present, where I built an electronic system to store evaluations.  Mass comparisons are now only a few queries away.  I finally got around to doing this a couple of weeks ago, and the result is shown below.
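For the curious, the comparison boils down to one aggregate query per instructor.  The sketch below is only illustrative: the database file, table names, and column names are hypothetical stand-ins, not the actual schema of my system.

```python
import sqlite3
import pandas as pd

# Hypothetical schema: one row per student rating in `evaluations`,
# one row per assigned grade in `grades`.
conn = sqlite3.connect("evaluations.db")
df = pd.read_sql_query(
    """
    SELECT r.instructor_id, r.mean_rating, r.n_ratings, g.mean_gpa
    FROM (SELECT instructor_id,
                 AVG(rating) AS mean_rating,   -- 1 = strongly agree ... 5 = strongly disagree
                 COUNT(*)    AS n_ratings
          FROM evaluations
          GROUP BY instructor_id) AS r
    JOIN (SELECT instructor_id,
                 AVG(points) AS mean_gpa       -- grades on a four-point scale
          FROM grades
          GROUP BY instructor_id) AS g
      ON g.instructor_id = r.instructor_id
    WHERE r.n_ratings >= 100                   -- keep instructors with at least 100 ratings
    """,
    conn,
)
```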

The student ratings for "The instructor appears knowledgeable and competent in the field" appear on the left, with 1 = strongly agree and 5 = strongly disagree.  The bottom axis is grade point average on a four-point scale.  Each dot represents one faculty member (with at least 100 ratings).  Although there is a slight tendency for higher grades to go with better ratings, the correlation is weak at -.22.  There is also an obvious ceiling effect at the bottom of the graph: many students give top ratings across the board.
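Computing the correlation and drawing the plot is then just a couple of lines of pandas and matplotlib.  This continues the sketch above, so the column names are the same hypothetical ones.

```python
import matplotlib.pyplot as plt

# Pearson correlation between per-instructor mean GPA and mean rating.
r = df["mean_gpa"].corr(df["mean_rating"])
print(f"r = {r:.2f}")

# One dot per instructor: GPA on the x-axis, rating on the y-axis.
# 1 = strongly agree sits at the bottom, hence the ceiling effect there.
plt.scatter(df["mean_gpa"], df["mean_rating"])
plt.xlabel("Grade point average (4-point scale)")
plt.ylabel("Knowledgeable/competent rating (1 = strongly agree)")
plt.show()
```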
I expected the correlation to be considerably stronger.  The good news is that I was wrong: grades don't seem to be given away in order to get good evaluations.