This outline is based on my own idiosyncratic view of assessment as a practitioner. Some of it clashes with what you will read elsewhere. Please read this as ideas to consider, not a comprehensive plan for success. It’s important to get other perspectives and begin to network with others in the business. The ASSESS-L listserv and conferences like the IUPUI Assessment Institute and the Atlantic Assessment Conference are good places to start. The SACS annual meeting is good, but can be frightening because of the panic pheromones in the air. Don’t miss the roundtable discussions at the conferences—get there very early because the assessment tables fill up first. There are also many books on the subject. [Note: edited 9/12/10]
I. Initialization of the Plan
- The president and academic vice president have to be visibly behind the project. Describe to the board what you are planning to do. Just as important is that the executives trust the implementation team to do the job and resist the temptation to micromanage.
- What to expect:
- Successful SACS accreditation
- Better faculty engagement with teaching, leading to improvements
- Cultural change that incorporates language of your stated outcomes (includes administration, faculty, students, prospective students if you want)
- Understandable reports that include what was learned from observations and what actions were taken because of it.
- A long slow evolutionary process of improvement.
- What not to expect:
- Learning measurements that can be used for determining faculty or program performance.
- A one-off solution. Assessment requires sustained attention, like growing an orchid, and it's just as finicky. Celebrate every blossom and nurse the brown bits.
- Unequivocal proof of learning achievement
- A “microwaved” solution to learning outcomes assessment. The garden analogy is a good one: this stage is about picking the right seeds and location.
- Gain the faculty’s trust. Without them all is lost.
- Form a small group of faculty who are receptive to assessment or are already doing it. Include at least one open-minded skeptic to avoid groupthink.
- The leader can be a faculty member with release time or a full-time admin, but this person should teach at least one class a year. It has to be someone the faculty respects, and someone who’s not yet sold on assessment may be a good bet.
- Make sure each program has a program coordinator, per SACS, and consider making each of them responsible for seeing that assessment gets done. Or maybe it's department chairs, but someone has to be responsible for activities at the program level. I'll just assume it's coordinators for simplicity.
- Figure out how you’re going to organize documentation. An institutional archive is a wonderful thing. Librarians can be of great help here.
- Keep reports organized, at a minimum by year and program.
- Accumulate and save as much original student work as you can. A dropbox works wonders. More complex are ePortfolios and learning management systems. There’s nothing like authentic work to analyze when you need it. Note that the student work is much more valuable when associated with the assignment, so figure out how to save these too. This can be an IT/ library project.
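If a concrete starting point helps, here is a minimal sketch (in Python, with a made-up shared-drive path and made-up program names) of one way to file reports, assignments, and student work by year and program so they stay associated. Treat it as an illustration of a filing convention, not a prescription.

```python
# Minimal sketch of an archive filing convention: <root>/<year>/<program>/<kind>/
# The root path, program names, and file names below are hypothetical.
from pathlib import Path
import shutil

ARCHIVE_ROOT = Path("/shared/assessment-archive")

def file_artifact(src, year, program, kind):
    """Copy a report, assignment, or piece of student work into the archive."""
    dest_dir = ARCHIVE_ROOT / str(year) / program / kind
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(src).name
    shutil.copy2(src, dest)
    return dest

# Keep the student work and the assignment that produced it together:
# file_artifact("BIO301_lab_report_sample.pdf", 2010, "biology", "student-work")
# file_artifact("BIO301_lab_assignment.pdf", 2010, "biology", "assignments")
```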
- Coordinate with your SACS liaison and other accreditation processes. Make sure timelines and requirements are covered. Set some milestones.
- Consider re-visiting how faculty teaching evaluation is done. Linking assessment directly to evaluation is a bad idea (because there is an incentive to cheat), but letting faculty themselves use assessment results and improvements in their presentations for merit can be powerful. In other words, bottom-up rather than top-down use will work best (e.g., teaching portfolios).
- Build a small library of assessment books and links, and read up on what other people do. Read critically, though, and don’t believe that things are as easy or simple as authors sometimes make them out to be.
This is mostly an assignment for your assessment working group, with appropriate input from the VP level (e.g. #2 below).
- You probably already have a list of learning objectives for general education, the institutional mission, programs, etc. Do an inventory of these and organize them. There are different types of goals, including:
- Skills like communication or thinking
- Content: facts, methods, ways of doing things in a discipline
- Non-cognitives: self-evaluation, attitudes, etc.
- Exposure: e.g. “every student will study abroad”
- Beliefs: could be religious or secular, depending on your mission
- One of the tenets of TQM, which is what the SACS effectiveness model is based on, is that lower-level goals should feed up to higher-level goals. In practice, trying to make this fit with learning outcomes can be a distraction. It creates a whole bunch of bureaucracy and reporting of dubious value. But you can attempt it if you want; it will just make the rest of the work twice as hard.
- If you can simplify your goals list by eliminating some, that will help focus the effort. In any event, don't expect all goals to be evaluated all the time. You can simply "park" some goals for now if you want. If you don't have many learning outcomes, that's great because you don't have to deal with an existing mess. Figure out which goals are most important to assess right now (a sketch of a simple inventory follows).
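The inventory doesn't need to be fancy. Here is a minimal sketch, with invented goal statements (substitute your institution's own), of goals grouped by the types above and a couple of categories parked for later.

```python
# Minimal sketch of a goals inventory grouped by type; statements are placeholders.
goal_inventory = {
    "skills":        ["writing effectiveness", "effective speaking", "deductive reasoning"],
    "content":       ["methods of inquiry in the major discipline"],
    "non-cognitive": ["self-evaluation", "how hard students work"],
    "exposure":      ["every student completes an internship or study abroad"],
    "beliefs":       ["commitment to service, per the institutional mission"],
}

# "Park" categories (or individual goals) you will not assess this cycle.
parked = {"exposure", "beliefs"}
active = {category: goals
          for category, goals in goal_inventory.items()
          if category not in parked}
print(active)
```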
- Help programs develop or reboot their plans:
- Stay away from “critical thinking” like the plague. Thinking skills are great, but pick ones that faculty can agree on, like (probably) deductive reasoning, inductive reasoning, evaluating information for credibility, and even creativity. If you’re already stuck with critical thinking as a goal, consider defining it as two or three more specific types of thinking that are easier to deal with.
- Provide structure for programs so that they are parallel. Here’s a sample list:
- Content-related outcomes for the discipline
- Non-cognitive outcomes for the discipline (e.g. confidence)
- General education outcomes as they relate to the major
- A technology-related outcome (to help with the SACS standard)
- Applicable QEP outcomes.
- Have meetings with programs or related groups of programs to discover their most important goals in the categories from the sample list above. This can be a lot of fun, and should immediately be seen as a productive exercise. It takes a little practice for the facilitator to learn how to move the conversation along without getting hung up on technicalities. It will take at least three meetings with a good group to accomplish the following (in order):
- A list of broad goals important to the faculty for students to accomplish. It’s best to start from scratch unless they have a good assessment program already up and running. Get them to tell you what they believe in so you don’t have to convince them later.
- A map showing where these things appear in the present curriculum. (This is where ideas for change will already start appearing; document them. A sketch of a simple map follows this list.)
- A description of how we know whether students accomplish the goals. This includes assessment and documentation. Avoid the temptation to extract promises that assessment will happen all the time, because it won't.
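To make the curriculum map concrete, here is a minimal sketch. The course numbers and goals are placeholders, and the introduce/reinforce/mastery (I/R/M) labels are one common convention rather than anything this outline requires; use whatever vocabulary the program's faculty find natural.

```python
# Minimal sketch of a curriculum map: which courses introduce (I), reinforce (R),
# or expect mastery (M) of each program goal. Courses and goals are made up.
curriculum_map = {
    "written communication":  {"ENG 101": "I", "HIS 210": "R", "BIO 490": "M"},
    "quantitative reasoning": {"MAT 110": "I", "BIO 301": "R"},
    "disciplinary methods":   {"BIO 201": "I", "BIO 301": "R", "BIO 490": "M"},
}

# Gaps that turn up (here, no course expects mastery of quantitative reasoning)
# are exactly the observations worth documenting for later curricular change.
for goal, courses in curriculum_map.items():
    if "M" not in courses.values():
        print(f"No course expects mastery of: {goal}")
```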
- Develop assessments that are so integral to courses that they are natural, and perhaps already even happening. Assessments should be learning activities if at all possible.
- Share and cross-pollinate ideas across areas, for example by inviting “outsiders” to sit in on some of these sessions. That way you can develop others to run similar sessions. It’s too much for one person to do it all.
- Develop or find (e.g. from AAC&U) rubrics as it makes sense. Don’t go crazy with rubrics—they can be as harmful as helpful, just like electricity. Rubrics have scales to show accomplishment. It’s essential to get this right. If you can, tie the rubric accomplishment levels to the career of your students. For example, the scale might be “developmental,” “first-year,” “second-year,” “graduate,” for a two-year degree. That would be suitable for writing effectiveness, for example. Faculty find this natural—they have an implicit understanding of what to expect out of students of relative maturity. Wouldn’t you like to know how many of your graduates were still doing pre-college level work when they walked? [Note: you will find lots of people who don’t see it this way, and prefer scales like “underperform” to “overperform.” The problem for me with these is that you often can’t see progress. A good student may always overperform, and yet still progress to a higher level of performance.]
It’s not always suitable to use such a scale, of course. For example, I assess how hard students work (a non-cognitive), and faculty rate on a scale like “minimal” to “very hard.”
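If you want to see what reporting against a career-stage scale might look like, here is a minimal sketch with invented ratings. The point is just to count how many graduating students sit at each level, including how many are still at the pre-college ("developmental") level.

```python
# Minimal sketch: tally graduating seniors' writing ratings on the career-stage
# scale described above. The ratings below are invented for illustration.
from collections import Counter

scale = ["developmental", "first-year", "second-year", "graduate"]
ratings = ["graduate", "second-year", "graduate", "developmental",
           "first-year", "graduate", "second-year", "graduate"]

counts = Counter(ratings)
for level in scale:
    n = counts.get(level, 0)
    print(f"{level:>13}: {n} students ({100 * n / len(ratings):.0f}%)")
# The "developmental" row answers the question of how many graduates were
# still doing pre-college level work when they walked.
```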
- If your faculty wants to use external standardized tests, make sure that the content on the test matches the curriculum. Especially for small programs, these tests are better for making external reviewers happy than for actually being of use. Remember that if it's not a learning experience, it's probably a poor assessment. Generally, try to avoid external tests if you can. They are expensive, very hard to administer, generally not learning experiences, the faculty don't "own" the content, and the results are often not detailed enough to know with any precision what to do to make improvements.
- Make sure everyone knows your plan for keeping documentation: how reports, student work, assignments, etc. are to be archived. Set reasonable expectations, but think about what you will want to have on hand when the next SACS team visits, and (more important) what kinds of information you may want to look at retrospectively in five years. Retrospective analysis is very powerful for finding predictors of success. For example, suppose you use the CLAQWA writing aid one year and then stop. Two years later you might want to regress graduation on that writing treatment (logistic regression) to see if those students' success rates are statistically linked to it. Okay, this is a little far-fetched, but the truth is that you never know what that original, authentic data will be useful for, so save it and organize it if you can.
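For the curious, that retrospective regression might look something like the sketch below. It is only a sketch: the file and column names are made up, it assumes you kept a per-student flag for the writing treatment and a graduated indicator, and a serious analysis would add controls (incoming GPA, for instance) since this ignores confounding.

```python
# Minimal sketch of a retrospective logistic regression of graduation on a
# writing-treatment flag. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_records.csv")  # columns: student_id, claqwa (0/1), graduated (0/1)

model = smf.logit("graduated ~ claqwa", data=df).fit()
print(model.summary())  # inspect the sign and p-value of the claqwa coefficient
```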
- Avoid big standardized tests of gen ed skills like the aforementioned plague. They will only lead to heartbreak. The only exception is if you want to cover your bases for an accreditation report by using one of these. Some reviewers will see this as a meaningful effort, so it might help with SACS section 3.5.1.
- Faculty may want you to talk about validity and reliability or other technicalities, generally as an obstruction. Read about this stuff to get acquainted, but don't let it rule your life. It's a common mistake to say "the XYZ test is proven valid." No test is valid. Only propositions can be valid, like "Tatiana's score of 78 means she can conjugate Spanish verbs with some proficiency." As such, validity is very much a local concern. If assessments stay close to authentic classroom work, the faculty will believe the appropriate statements are valid, and faculty belief is at a premium in this venture. Other objections may come in the form of "has assessment been proven to improve learning?" This is very difficult to answer if you take the scientific approach of trying to prove things. Selling the evolutionary view that assessment is just teaching while paying attention is easier. For example, you can ask what improvements such-and-such department has made in the last year. These are always going on: curricular changes, new labs or experiences, and so on. Then work backwards from the change with a line of questioning: Why did you make the change? What made you notice there was an opportunity for improvement? What is the ultimate goal? This is just the assessment/improvement process in reverse. All we're trying to do is organize and document what already happens naturally, and intentionally use this powerful force for good, which otherwise is more of a random walk. A more serious objection to consider is "where am I going to get the time to do this?" Here, the administration can help in a variety of ways: carving out strategic release time, offering summer stipends, or eliminating other committees or bureaucracy to give the assessment effort priority. These steps also send the message that the administration takes the effort seriously.
- Don’t try to make the assessments too scientific. If you document part of the learning process (the most authentic assessment), it’s likely to be messy even with rubrics and whatnot. Don’t try to reduce everything to numbers. See the section on reporting for more on that. One very effective way of assessing is to get faculty together to look at raw results and discuss their experiences in this context. What worked? What didn’t? What problems are evident? This is very rich with possibilities for action. Somebody just needs to write down what happens during this meeting and archive it.
- Once everything is up and running, the assessment group’s most important function is to review and give feedback on plans and results. Without regular feedback and encouragement, professional development opportunities, and recognition, the process will peter out. Set a calendar of events for assessment and reporting.
- Make opportunities to give visibility to learning goals and results. Senior administrators should talk about them publicly, praising successful efforts, describing the big picture, and committing support. A magical thing happens when you start referring to your learning outcomes directly. Example: "Across the board we have seen efforts to increase student abilities in writing effectiveness and effective speaking." By using the vocabulary, it becomes a natural part of the culture and seeps into the way people think. Get the goals written into syllabi so students see them and hear them talked about.
- Depending on your execution, you may have individual student outcomes reported (e.g., with the Faculty Assessment of Core Skills, below). Consider whether or not you want advisors to have access to this information to use with their advisees. For example, I noticed one time that a senior art student was getting ratings of "remedial" in creativity. It's important that individual instructor ratings not be revealed unless it's part of the learning experience in a current course. This shields the ratings from political pressures ("why did you give me a remedial score?").
- Make non-compliance a matter for administrative enforcement only as a last resort. SACS is a big stick, and you have to use it sometimes, but you don’t want a “because SACS says” culture. The odds are that if you create a process the faculty believe in, the SACS review will be fine. There are never guarantees of anything because of the randomness of peer-review, but this should improve the odds:
- Make sure there are no majors in non-compliance. Even if there are no students in the major, do something that looks like assessment. I know it’s absurd, but go read the SACS-L listserv.
- Assessment is only the appetizer. The test for reviewers is (1) is there a regular process that shows at least a couple of years' comprehensive effort, and (2) are there results: actions taken and changes made because of what was found.
- Don't accept "actions" like "we are continuing to monitor" or simply more plans to do something. Any real process will produce evolutionary change in the form of curricular proposals, classroom experiences, teaching methods, testing, technology, etc. Extracting the reports that document this is always a battle, both for quantity and quality, but it has to be fought. Think like a SACS reviewer when you look at them. Ask your SACS VP to be a reviewer once you have some confidence.
- Don’t expect to be able to scientifically show that learning is improving because of changes. There are innumerable tests and data management products, “value-added” statistics, and other nonsense that will only frustrate you if you buy into it. It’s more important that faculty believe that learning is improving, that the changes are (probably) for the better. This is no excuse, of course, not to look at education literature for best practices and studies that do highlight some processes as better than others. Take them with a grain of salt, but try them out.
- For any goals that are institution-wide, consider using a “Faculty Assessment of Core Skills” approach. You can read more about that at http://www.coker.edu/assessment/elephant.pdf.
- You can have the IR office help with report statistics if you want, but make sure they know the game plan first, or faculty may get two different messages. Generally with reporting, keep it simple and to the point. Here's a sample list of headings for the sections (a rendering sketch follows the list):
- Learning Outcome (a statement of it)
- Assessment Method (a statement of how it's done, with attached forms if any)
- Results and Analysis (what did faculty observe or glean through analysis or focused discussion?)
- Actions and Improvements (what did they do?)
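One way to keep the reports parallel is to render every outcome with the same four headings. Here is a minimal sketch; the example content is invented for illustration.

```python
# Minimal sketch: render each outcome report with the same four headings.
# The example content is invented.
report_entries = [{
    "Learning Outcome": "Students write effectively in the discipline.",
    "Assessment Method": "Capstone papers rated with the program writing rubric (attached).",
    "Results and Analysis": "14 of 20 seniors rated at the 'graduate' level; citation practice remains weak.",
    "Actions and Improvements": "A citation workshop was added to the junior seminar for next fall.",
}]

HEADINGS = ("Learning Outcome", "Assessment Method",
            "Results and Analysis", "Actions and Improvements")

for entry in report_entries:
    for heading in HEADINGS:
        print(f"{heading}: {entry[heading]}")
    print()
```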
- Avoid condensing data down more than you have to. Averages are often too abstract. It’s like taking a ripe strawberry and boiling it before you eat it. Presenting averages to an external body like SACS is fine—they expect it. All the better if the graphs go up. But internally, where you want to know meaning, don’t average unless it’s the only logical thing to do.
- Instead of averages, report out frequencies when you can. For example, rather than saying the average test score was 3.4 this year as opposed to 3.3 last year, say the percentage of students reaching the “acceptable” level went from 60% to 65% or whatever. This is a great technique that will instantly improve the usefulness of reports.
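Here is a minimal sketch of that frequency-style reporting, using made-up rubric scores on a 1-4 scale with 3 as the "acceptable" level.

```python
# Minimal sketch: report the share of students at or above "acceptable" (>= 3)
# instead of an average. Scores are invented for illustration.
scores_last_year = [2, 3, 4, 3, 2, 3, 4, 2, 3, 3]
scores_this_year = [3, 3, 4, 3, 2, 4, 4, 3, 3, 2]

def pct_acceptable(scores, cutoff=3):
    return 100 * sum(s >= cutoff for s in scores) / len(scores)

print(f"Last year: {pct_acceptable(scores_last_year):.0f}% at or above acceptable")
print(f"This year: {pct_acceptable(scores_this_year):.0f}% at or above acceptable")
```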
- Remember that assessment reports can't be used to punish programs or individuals administratively. The reason is simple: the instant you start doing that, every report thereafter will be suspect. The whole system relies on trust to work, and that goes in both directions. Instead, use results administratively as a reason for a conversation, especially around budget time.
- If you standardize goals by category as suggested earlier, it's easy to present the whole mass of reports to SACS in an organized and well-formatted form. Without some structure, you'll have a pile of disjointed formats and styles that will immediately turn off whoever has to look at them.