I'm just back from two full days in Indianapolis at the Assessment Institute. The trip was great. I made contacts and learned some really interesting stuff. Too much to summarize this morning, but here are some bits and pieces to be filled in later.
The opening plenary was cut short by an unfortunate medical incident, but it still hinted at some interesting fault lines. Trudy Banta, the organizer, does a good job of representing different points of view through her choices of panels, guests, and topics.
The first divergence I noticed centered on the idea of "tuning," a concept borrowed from the Bologna Club in the EU, which I blogged about here and here (the update at the bottom). It's not a lunch meeting, as you might think, but a process of comparing and improving programs across institutions. As a point of trivia, we learned from Jeffery Sybert that concert A is 440 Hz in this country but 442 Hz in Europe. I'm quite sure I couldn't tell the difference, which is in any case comparable to the intra-tuning dissonances of an equal-tempered scale. There must be an interesting story there. The topic was presented by Jamie Merisotis, President/CEO of Lumina Foundation for Education. You can read more about the project here. The panel dissonance in this case was fairly minor, turning on the question of how much authority faculty should have in the process. On the one hand, faculty own the curriculum. On the other, they can be protective, self-interested, and unduly academic in their horizons (speaking as one). This is a good case of needing external reviewers to constantly check that goals and progress align. (Such as a stakeholder analysis: see below.)
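Back to the pitch trivia for a moment: the standard unit for comparing pitches is the cent, a hundredth of an equal-tempered semitone, computed as 1200·log2(f2/f1). A quick sketch of the 440-vs-442 arithmetic (my own illustration, not from the session):

```python
import math

# Size of the 440 Hz vs. 442 Hz gap in cents (hundredths of a semitone):
# cents = 1200 * log2(f2 / f1)
cents = 1200 * math.log2(442 / 440)
print(f"{cents:.2f} cents")  # -> 7.85 cents
```

For scale, the equal-tempered fifth sits about 2 cents off its just value and the major third about 14 cents off, which brackets that 7.85-cent gap.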
There was an admonition from the panel that for-profits and their venture capital underwriters are very interested in producing a meaningful educational product, and, by implication, that the usual plodding pace of change at bricks-and-mortarboard institutions won't be fast enough to compete with cyberdon. As a riff on that, George Kuh suggested that "the credit-hour is a dying concept in higher education, measuring things we no longer value." He elaborated by equating credit hours with seat time under the traditional system.
The other twanging note in the plenary came from the term "critical thinking." The consensus position is that this skill is valuable and should be taught and "measured." In opposition to that is the idea that critical thinking may be too fuzzy to be useful as a learning objective. This turned out to be a minor theme of the conference, which I'll elaborate on tomorrow. In a Q&A at one session, I was given an admonishing mini-lecture to the effect that psychology had solved the critical thinking problem and I only needed to look at the literature. In a University of Phoenix session that I missed, I'm told the debate about assessing critical thinking got rude and contentious. Maybe someone who was there can comment.
I met the creator of Waypoint, which I had blogged about here, and watched a demo of the online rubric management and implementation software. More on that later.
At lunch on Monday, Jon and I chatted with an Assistant Director at HERI [edit: fixed title] who works on CIRP. I've used the freshman survey and found it quite useful for spotting attrition trends (see this post), so I was interested to learn that CIRP is getting into the constructs business, using item response theory. I went to the session on that on Tuesday and have some other comments that will have to wait until I have more time.
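For anyone who hasn't run into item response theory: the simplest IRT model is the Rasch (one-parameter logistic) model, where the probability of endorsing an item depends only on the gap between the respondent's trait level and the item's difficulty. A minimal sketch in Python, strictly my illustration and not necessarily the model CIRP is using:

```python
import math

def rasch_prob(theta: float, difficulty: float) -> float:
    """Rasch (1PL) model: probability that a respondent with latent
    trait level theta endorses an item of the given difficulty.
    Both parameters live on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# A respondent sitting exactly at an item's difficulty endorses it
# half the time; one logit above, about 73% of the time.
print(rasch_prob(0.0, 0.0))  # 0.5
print(rasch_prob(1.0, 0.0))  # ~0.731
```

Estimating the trait levels and difficulties from survey responses is the hard part, but the payoff is constructs on a scale that doesn't depend on which particular items a respondent happened to see.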
A session by Tom Zane of Western Governors University gave a fascinating insight into their system for assessing massive amounts of student work. You can read about their innovative system of using assessments to entirely replace grades here. They get between 1,500 and 2,000 new students each month and are still growing rapidly. Student work is assessed by human raters using rubrics--this isn't standardized-test land. Samples are rated more than once to check reliability. Although this is still monological, my first impression is that this is as good as it gets for traditional assessment, and that the assessment side of their business is something other institutions could plug into if WGU chose to offer it--providing a uniform system of credentialing for higher ed. Something like that is conceivably in our future. That might sound scary, but it's infinitely better than standardized tests running our lives.
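Since they double-rate samples, some reliability statistic has to come out the other end. As an illustration of what that check might look like (my sketch, not WGU's actual procedure), here is Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters who scored
    the same submissions on the same categorical rubric scale."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Agreement expected by chance from each rater's score distribution
    expected = sum((counts1[c] / n) * (counts2[c] / n)
                   for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (1-4) for ten double-rated submissions
first_pass  = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
second_pass = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"kappa = {cohens_kappa(first_pass, second_pass):.2f}")  # ~0.71
```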
AAC&U has a project called VALUE, which accumulates rubrics that they have found or created and refined. This seems useful, and complementary to the tuning idea. The next part of the project is to create a repository of student work that has been rated using the rubrics. A problem I noticed more than once is that designers usually don't seem to think much up front about whether their rubric is relative to the curriculum or fixed absolutely. When we built the FACS model, we used an absolute scale, in which raters say things like "student Ecks is working at the freshman/sophomore level." In a relative scale you get stuff like "exceeds expectations." The former is great for tracking longitudinal progress, the latter not so much. A good student will exceed expectations in all classes, showing no progress. By contrast, even a great freshman math major is very unlikely to be doing senior-level work. I'm not sure the VALUE leaders have addressed this.
Jon Shannon and I led a 75-minute session on stakeholder analysis in strategic planning, which had good participation. I blogged about the topic here, and you can find the presentation linked here. This is a great tool for addressing complex planning issues. One of the advantages is that it keeps the conversation on track, focused on goals everyone more or less agrees on.
The backchannel on Twitter was pretty thin, or else I just didn't hit the main vein with my search. We set one up at Today's Meet and advertised it on one of our slides, but didn't get any activity from participants.
More on some of these topics later.