More than a decade ago, I found myself newly minted as a department chair. When it came time to do annual evaluations of faculty (all of whom were more senior than I), I sweated bullets over it. I tried to systematize the process as much as possible in the name of objectivity. Whether I succeeded is questionable, but at least I survived without a mutiny. One of the indicators I looked at was grades assigned versus evaluations received. I made a scatterplot of the two variables and looked for outliers. This had to be done by hand, laboriously typing in numbers from printed pages.
Flash forward to the present: I have since built an electronic system to store evaluations, so mass comparisons are now only a few queries away. I finally got around to running the comparison a couple of weeks ago, and the result is shown below.
The student ratings for "The instructor appears knowledgeable and competent in the field" appear on the vertical (left) axis, with 1 = strongly agree and 5 = strongly disagree. The bottom axis is grade point average on a four-point scale. Each dot represents one faculty member (with at least 100 ratings). Although there is a slight tendency for higher grades to go with better ratings, the correlation is pretty low at -.22 (negative only because lower rating numbers mean better ratings). There is an obvious ceiling effect at the bottom of the graph: many students give top ratings across the board.
I had expected the correlation to be considerably stronger. The good news is that I was wrong: grades do not appear to be given away in order to get good evaluations.
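For anyone who wants to reproduce this kind of comparison from their own evaluation data, a minimal sketch in Python (pandas and matplotlib) is below. The file name and column names (faculty_id, rating, gpa) are hypothetical placeholders, not the actual schema of my system; the aggregation and correlation steps are the part that matters.

```python
# Sketch: compare mean course GPA to mean rating per faculty member.
# "faculty_ratings.csv" and its columns are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("faculty_ratings.csv")  # columns: faculty_id, rating, gpa

# Aggregate per faculty member and keep only those with at least 100 ratings.
summary = (df.groupby("faculty_id")
             .agg(mean_rating=("rating", "mean"),
                  mean_gpa=("gpa", "mean"),
                  n_ratings=("rating", "size")))
summary = summary[summary["n_ratings"] >= 100]

# Pearson correlation between grades assigned and ratings received.
r = summary["mean_gpa"].corr(summary["mean_rating"])
print(f"correlation = {r:.2f}")

# Scatterplot: GPA on the horizontal axis, rating (1 = strongly agree) on the vertical.
plt.scatter(summary["mean_gpa"], summary["mean_rating"])
plt.xlabel("Grade point average (4-point scale)")
plt.ylabel("Mean rating (1 = strongly agree, 5 = strongly disagree)")
plt.show()
```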