Introduction
- Deductive (Objective): "A learning area is targeted for improvement because assessment data indicate that students are not performing as expected." (p. 60).
- Inductive (Subjective): "[P]otential areas for improvement are identified through the experiences of the faculty (or the students)." (p. 60).
Although it's not in the book, I suggested the objective/subjective associations because objectivity is often seen as superior to subjectivity in measurement and decision-making. See, for example, Peter Ewell's NILOA Occasional Paper "Assessment, Accountability, and Improvement: Revisiting the Tension."
In the early days of the assessment movement, campus assessment practices were consciously separated from what went on in the classroom. This separation helped increase the credibility of the generated evidence because, as “objective” data-gathering approaches, these assessments were free from contamination by the subject they were examining. (p. 19)
The desire to be free from human bias is related to assessment's roots as a management method--the same family tree as Six Sigma, which helped Jack Welch's General Electric build refrigerators more efficiently. It's related to the positivist movement's emphasis on definitions and strict classifications, and further back to the scientific revolution. However, the history of human beings "objectively measuring" one another includes dark and tragic episodes, as recounted in part in the 2020 president's address to the National Council on Measurement in Education (NCME). From the abstract:
Reasons for distrust of educational measurement include hypocritical practices that conflict with our professional standards, a biased and selected presentation of the history of testing, and inattention to social problems associated with educational measurement.
While the methods of science may be touted as objective, the uses of those methods are never unbiased as long as humans are involved.
So putative objectivity can be challenged in at least two ways: first, that human involvement means it's not really objective, and second, that objective methods aren't necessarily better than subjective ones at solving problems beyond a certain level of complexity.
In their book, Fulcher and Prendergast describe another compelling reason to consider an inductive/subjective approach: practicality.
Learning Goals
Examples
Some of the following examples are my interpretations of selected assessment work drawn from the literature, conferences, and practice.
Interview Skills
In a 2018 RPA article, Keston Fulcher (co-author of the book cited above) and colleagues documented a program-level learning improvement in Computer Information Systems.
Lending, D., Fulcher, K. H., Ezell, J. D., May, J. L., & Dillon, T. W. (2018). Example of a program-level learning improvement report. Research & Practice in Assessment, 13, 34-50. [link]
The same story is told less formally in a collection of assessment stories here. It's short and worth reading, but here's the apposite part--a faculty member's realization based on recent direct observation of student work:
We realized that nowhere in our curriculum did we actively teach the skill of interviewing. Nor did we assess our students’ ability to perform an effective [requirements elicitation] interview, apart from two embedded exam questions in their final semester.
So although there was an official learning outcome intended to cover these important interview skills, it was assessed in a way that didn't reveal that students weren't doing it well. This illustrates the flaw in assuming that a few general goals can adequately cover a curriculum in detail. But faculty are immersed in these details, and good teachers will notice and act on such information. So in this case, the deductive machinery was all in place, including an assessment mechanism, but it failed where the inductive method worked.
An important point illustrated here, and mentioned in the book, is that having faculty enthusiasm for a project is conducive to making changes.
Discussion-Based Assessment
Jacksonville University decided to design and implement a discussion-based approach to assessment for the 2020-2021 academic year. This decision—which was made to minimize the bureaucratic feeling of sending out templates and providing deadlines—has led to enhanced faculty and staff understanding of assessment without the stress of prolonged back and forth reviews and evaluations. [...]
[T]he discussion-based approach has led to expressions of cathartic relief. Faculty and staff in this model are provided with immediate feedback, an opportunity to reflect on their accomplishments in this unparalleled time in higher education, and to hear affirmations of value and contribution. Ultimately, we have been able to collect information that is both deeper and wider than in previous assessment cycles while gaining countless insights into the efforts of the campus community to ensure success in the face of adversity.
An example I remember from the talk is that the Dance faculty faced special challenges teaching classes virtually, as one might imagine they would. They noticed that students were getting injured at alarming rates when practicing on their own. So they addressed the problem. This didn't happen because they had pre-declared a learning outcome about dance injury, and then created benchmark measures and so on, but because faculty were paying attention.
Notice the indications in the abstract (which I underlined) that the inductive/subjective approach provides more timely and useful information and is more natural than formal goal-setting.
Teaching Statistics
- Analytical reasoning. [...] explain the assumptions of a model, express a model mathematically, illustrate a model graphically, [...], and appropriately select from among different models.
- Quantitative methods. [...] articulate a testable hypothesis, use appropriate quantitative methods to empirically test a hypothesis, interpret the statistical significance of test statistics and estimates [...] (see the sketch after this list)
- Formal reasoning. (General education) [...] mastery of rigorous techniques of formal reasoning. [...] the mathematical interpretation of ideas and phenomena; [...] the symbolic representation of quantification [...]
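To make the "Quantitative methods" goal concrete, here's a minimal sketch of the kind of task a student meeting it should be able to do: state a testable hypothesis, apply an appropriate test, and interpret the result. This is my own illustration, not material from the program above; the scenario, data, and effect size are simulated.

```python
# Hypothetical illustration of the "quantitative methods" goal: state a
# testable hypothesis, test it with an appropriate method, and interpret
# the statistical significance of the result. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothesis: students who attend a tutoring program score higher on the
# final exam than students who do not (one-sided alternative).
control = rng.normal(loc=70, scale=10, size=40)  # simulated scores, no tutoring
treated = rng.normal(loc=75, scale=10, size=40)  # simulated scores, tutoring

# One appropriate method here: Welch's two-sample t-test (unequal variances).
result = stats.ttest_ind(treated, control, equal_var=False, alternative="greater")

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print(f"estimated effect = {treated.mean() - control.mean():.1f} points")

# Interpretation: if p is below the chosen threshold (say .05), we reject the
# null hypothesis of no difference; the difference in sample means estimates
# the size of the effect, which matters as much as the p-value.
```

The interpretation step--connecting the p-value and the estimated effect back to the original hypothesis--is where the "interpret the statistical significance of test statistics and estimates" language gets exercised.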