I was browsing the assessment blogs this morning and saw Pat William's post at Assess This! on the AAC&U initiative "Bringing Theory to Practice." From the website, the project's mission is to use student engagement to enhance both cognitive and noncog development. If you haven't heard of noncognitive assessment, you can start here. In one well-developed model by William E. Sedlacek, there are eight dimensions that can be used to predict academic success as measured by persistence, grades, and graduation. I'm presently starting that up at my home institution.
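Just to make the predictive idea concrete, here's a minimal sketch in Python. Everything in it is invented for illustration--the number of dimensions, the dimension names, the synthetic data, and the use of scikit-learn's logistic regression--so it shows the shape of a persistence model one might fit, not Sedlacek's actual instrument or scoring.

    # Minimal sketch: predicting persistence from noncognitive scores.
    # Data, dimension count, and coefficients are all made up for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_students = 500

    # Pretend each row holds one student's scores on three noncognitive
    # dimensions (say, self-concept, realistic self-appraisal, leadership),
    # each rated 1 to 5.
    X = rng.uniform(1, 5, size=(n_students, 3))

    # Fabricated persistence outcome that loosely depends on the scores,
    # just so the example runs end to end.
    true_logit = -4 + 0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2]
    persisted = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

    # Fit and inspect; with real data you'd validate against held-out
    # cohorts before trusting the predictions.
    model = LogisticRegression().fit(X, persisted)
    print(model.coef_, model.intercept_)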
But beyond using noncogs for predictive information, shouldn't we also be introducing such skills as leadership and realistic self-appraisal into the curriculum itself? The answer is that we probably already do in orientation-type courses, or perhaps a full-blown freshman seminar. But what about a bolder idea: integrating them throughout the curriculum?
The goals of the AAC&U proposal center on health and civic engagement. These are certainly not traditional cognitive goals like reading, writing, and math. The LEAP initiative also has noncog dimensions in its list of goals, if you look closely.
How would you do it? The distribution requirements/general education/liberal studies components are probably chock-full of criss-crossing goals already. How can you add more without the whole edifice collapsing? Very carefully, methinks. The first step would be to show that there is value to the proposition and start a conversation about it.
We had an exercise in January, asking faculty to describe the kinds of students they wanted. It was amazing to me to see that at least two thirds of the descriptions were about noncognitives. Faculty want students who care, who work hard, work well with others, are interested in academics, and so forth. Yes, they also want talented young minds to work with, but attitudes and behaviors play a large role. So in our case, it would likely be an easy sell--they already care about the outcomes.
To start a conversation, my prescription is to try what has worked for me before--building a dialogical assessment piece. This has to be very, very easy for faculty to use, and the reports have to be understandable when the results come back. This is all quite doable if one doesn't get hung up on rubrics and alignment and reductionism. Once the conversation is underway, we can contemplate changes.
People do things for reasons that make sense to them. I have to remind myself of this all the time. In order to get traction on this project, or any number of others, the ones doing the heavy lifting really need for the project to make sense. And it can't waste their time--because that makes no sense at all. I've started holding some meetings asynchronously (sometimes with Etherpad) to show committee members that I respect their time.
I mused yesterday that our outcomes assessment is more philosophy than science, and can easily fall into We-Say-So-ism (WSSism). For the fuzzy problems of philosophy, there is no axiomatic approach (many attempts to build one notwithstanding) nor any other deductive process for arriving at the right answer. One has to convince. It has to make sense. This is a lot harder than WSSism in the short term, and certainly is more prone to becoming stalled along the way. But if we can't convince our colleagues that a project is worth their time, then we should consider that 1) we haven't developed our position very well, or 2) perhaps it's really not worth their time. You want writing across the curriculum, math across the curriculum, assessment of liberal arts skills, student engagement, and now noncognitives in addition to delivering the traditional curriculum? There has to be a case made for it. It has to make sense. Obviously, I have my work cut out for me.