Friday, October 01, 2010

Course Evaluations and Learning Outcomes

I've posted recently about some of our course evaluation statistics, and the effect of going from paper to electronic. A while back I also showed a summary of our Faculty Assessment of Core Skills learning assessment. I'm trying to put the two together by re-engineering the course evaluation to focus squarely on learning. The old version was a standardized one with fifty-five items, only five of which addressed learning at all, and these not very well. Here's my first draft of a new version, with comments afterward. The scale is indicated after each item. The exact wording is still in development.

  1. What was the quality of instruction in this course as it contributed to your learning? (try to set aside your feelings about the course content)
    --(ineffective to very effective)
  2. How much effort did you put into this course?
    --(minimal to maximum)
  3. How much did you know about the course content before taking the course?
    --(nothing to a lot)
  4. How much do you know about the course content now?
    --(nothing to a lot)
  5. How much did your skills in analytical/deductive thinking (knowing facts, following rules and formulas, learning standard methods) increase in this course?
    --(none to a lot)
  6. How much did your skills in creative/inductive thinking (trial-and-error, development of ideas, taking chances) increase in this course?
    --(none to a lot)
  7. How much did your ability to speak effectively increase in this course?
    --(none to a lot)
  8. How much did your ability to write effectively increase in this course?
    --(none to a lot)
  9. How much did this course help you understand yourself?
    --(none to a lot)
  10. How much did this course spark your interest in the content?
    --(none to a lot)
  11. Was the course enjoyable?
    --(not at all to very much)
  12. How much course content (the subject area, like chemistry or psychology) do you think you learned in this course?
    --(none to a lot)
  13. What overall rating would you give this course as a learning experience?
    --(poor to excellent)

Comments.

This is a radical departure from what we do now. The first question is the one we currently use on the evaluation form, and it is the only item used for evaluation. Question 13 is a validity check on it, since the answers to the two should be very much the same.
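
To make the validity check concrete, here is a minimal sketch of how the Q1/Q13 comparison might be computed once responses are tabulated. The file name and column names are hypothetical stand-ins, and the items are assumed to be coded on a 1-5 scale.

    # Minimal sketch of the Q1/Q13 validity check; the file name and
    # column names are hypothetical, with items coded 1-5.
    import pandas as pd
    from scipy.stats import pearsonr

    responses = pd.read_csv("course_eval_responses.csv")
    r, p = pearsonr(responses["q1_instruction"], responses["q13_overall"])
    print(f"Q1 vs Q13: r = {r:.2f} (p = {p:.3g})")

If the two items really do measure the same thing, r should come out high; a weak correlation would signal that one of them isn't working as intended.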

The questions all focus on learning, except numbers 2 and 11. In the old evaluation, almost all of the questions were about the process of teaching, which makes a lot of assumptions about the value of those processes, and doesn’t transfer well to styles like online learning or hybrid courses.

The learning questions are split between the content area and general liberal-arts skills. This gives us a natural complement to the Faculty Assessment of Core Skills (FACS), which we launched very successfully last spring. Taken together, the teacher view and the student view will give us excellent insight into gen ed outcomes across the whole curriculum.

Question 2 is included because it matches the one on the FACS. The noncognitive factor of "effort" is very important to performance. Here's the graph from the spring FACS, with GPA in red and credits earned in blue: more effort means better grades and a better chance of advancing.
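
For anyone who wants to reproduce that kind of graph from raw data, a rough sketch follows. It assumes a FACS export with columns "effort", "gpa", and "credits_earned", all hypothetical names.

    # Rough sketch of the effort-vs-outcomes plot; the file and column
    # names are hypothetical stand-ins for a FACS data export.
    import pandas as pd
    import matplotlib.pyplot as plt

    facs = pd.read_csv("facs_spring.csv")
    by_effort = facs.groupby("effort")[["gpa", "credits_earned"]].mean()

    fig, ax = plt.subplots()
    ax.plot(by_effort.index, by_effort["gpa"], color="red", marker="o")
    ax.set_xlabel("Reported effort")
    ax.set_ylabel("Mean GPA", color="red")
    ax2 = ax.twinx()  # second axis: GPA and credits use different scales
    ax2.plot(by_effort.index, by_effort["credits_earned"], color="blue",
             marker="s")
    ax2.set_ylabel("Mean credits earned", color="blue")
    plt.show()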

Questions 3 and 4 get at how much content was learned by asking in terms of before/after. This is checked for reliability with question 12.
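
A sketch of that reliability check, again with hypothetical column names: the before/after gain (Q4 minus Q3) should track the direct self-report in Q12. Spearman's correlation is a reasonable choice since the scales are ordinal.

    # Sketch of the Q3/Q4 vs Q12 reliability check; column names are
    # hypothetical. Spearman is used because the items are ordinal.
    import pandas as pd
    from scipy.stats import spearmanr

    responses = pd.read_csv("course_eval_responses.csv")
    gain = responses["q4_know_after"] - responses["q3_know_before"]
    rho, p = spearmanr(gain, responses["q12_content_learned"])
    print(f"Gain vs Q12: rho = {rho:.2f} (p = {p:.3g})")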

Questions 5-9 are about general learning outcomes. No course would be expected to get maximum scores on all of these; it's an environmental scan to help us understand what kinds of learning students feel are happening, and where. It complements the NSSE, the QEP, and the FACS, and will be a gold mine of information.

Question 9 is from the temple of Apollo at Delphi: “know thyself.”

Question 11 will raise some hackles, but it's there as a control. We know from research that students who rate courses as enjoyable also rate everything else higher. This allows us to investigate that phenomenon locally. If we get to the point where we can administer the survey electronically, we can do these studies ourselves by comparing responses to course grades. With an anonymous paper survey we'll have less ability to do that, but we can still do intra-response correlations. We could be more direct and just ask "how happy are you right now?" but that would turn off some students.
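
One way those intra-response correlations might look, again with hypothetical column names: correlate the enjoyment item against every other item and see how strongly the halo shows up.

    # Sketch of the enjoyment "halo" check: correlate Q11 with each of
    # the other items within responses. Column names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("course_eval_responses.csv")
    items = [c for c in responses.columns if c.startswith("q")]
    halo = responses[items].corr(method="spearman")["q11_enjoyable"]
    print(halo.drop("q11_enjoyable").sort_values(ascending=False))

The items that correlate most strongly with enjoyment are the ones most susceptible to the halo effect.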

There are two free-response questions we'll carry over from the old survey. This will let students write on topics they care about most.

The survey is short for two reasons. First, we’ll get better reliability because students won’t get survey fatigue. Second, this leaves room for other surveys customized by a program, department, or college, to be administered in parallel. For example, the Lit folks could ask detailed content-related questions if they wanted, or conversely ask all about processes (office hours, syllabus, etc.).

4 comments:

  1. Thank you for sharing this work. I've been through campus revisions of such instruments at two campuses, and yours seems to be a great improvement. As a person with three degrees in Communication, my main suggestion concerns the generic "speaking skills" question. I think it glosses over many nuances, and I'm not sure these are adequately addressed by such generalities. Do your GEs put more emphasis on presentation delivery or interpersonal interaction? I think this is also where the medium of instruction is important -- face-to-face vs. online makes a large difference in communication practices.

  2. Trudy, thanks for the comment. I agree-- "speaking skills" is too vague. Probably what we want is presentation delivery, since that's a part of the senior capstone.

  3. Mary Hennessey, 10:00 AM

    Dave, I really like that you include general education competencies in your survey. We're working on that now at my institution--having the department chairs recognize that they fulfill general education competencies even in "their" courses.

  4. Mary, you might want to check out the FACS system we use for gathering faculty input on the same thing (the Assessing the Elephant link, top left).
