Sunday, September 19, 2010

Paperwork Kudzu

Bureaucracy is like kudzu: without some inhibitor, it will smother any terrain and reduce interesting features to a uniform mush of vines. You can see that happening on the left. If you live in the South, you probably drive by this on your way to work.
Last year I listed Zza's College Rankings based on public data from IPEDS, to show the average net cost of attendance versus instructional value. In the context of bureaucratic load, it's interesting to see how many staffers it takes to support an instructor. Note that these reported statistics come from very different schools and need to be taken with a pinch of salt. I chose my alma mater SIU-C, my former employer Coker College, and for comparison threw in MIT and Williams College.

[Table: administrative staffing ratios at SIU-C, Coker College, MIT, and Williams College]

The second row shows how many directors and executives (management positions) there are per instructor. The third row includes all administrative staff, managers included, again per instructor. The last row divides the number of students by that staff figure.

Of course, not all administrators are devoting their time to bureaucracy. There are plenty of services provided too: sports teams to coach, tutoring labs, IT shops, and so on. So it's not surprising to see that Williams has a lot of staff per instructor. It is interesting, however, that SIU rivals that number; given the size difference, one might expect economies of scale. It's also interesting that there are half as many management positions per instructor at SIU. This is probably evidence of a fatter administration pyramid, where a manager pushes policy down to a lot of staffers, who administer it to the 20K students. I didn't put the ratio of staff to managers in the table, but it's 13 for SIU compared to around 5 for Coker and Williams, with MIT in the middle at 8.5.

In the big picture, it seems excessive that every one of these schools has at least one administrator of some sort per instructor, and two of them have twice that. Certainly a large portion of what these people are needed for is maintaining the bureaucracy.

Cutting Kudzu

My current institution is no different, although I didn't run the numbers. We have a lot of forms and processes. In my IT capacity (one of my hats), I have a long wish list of processes to automate. This week I finally had time to work on it directly. Until then I'd been exploring off-the-shelf solutions called workflow systems. There are plenty of them out there, some of them open source. I installed a couple of these, tried out commercial ones, and even looked at using our SharePoint service to build a solution. But in the end none of these were exactly what I was looking for.

There is a real virtue to simplicity: doing one thing well. This is part of Google's corporate philosophy. Their first three statements are:
  1. Focus on the user and all else will follow.
  2. It's best to do one thing really, really well.
  3. Fast is better than slow.
These are the same principles that made my dropbox solution at Coker work so successfully, and they describe the general education assessment approach as well. Make it easy, make it quick, do one thing well.

Some have argued that understanding exponential growth is the most important knowledge a person can learn. At the heart of any process is some number of ways in which elements can be combined, and in practice that means a rapidly growing list of ways things can go wrong. In mixing ten colors together, we have 1,024 basic ways to do it (2 to the 10th). If order matters in a process, it's even worse than exponential: the number of ways to list ten unique names is 10! = 3,628,800. It's essential to limit possibilities to only the ones you really need if you want to avoid errors from unexpected conditions.
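Just to make the arithmetic concrete, here is a quick check in Python (nothing project-specific, only the standard library):

    from math import factorial

    # Each of ten colors is either in the mix or not: 2**10 possible combinations.
    print(2 ** 10)        # 1024

    # If order matters, the count grows factorially rather than exponentially.
    print(factorial(10))  # 3628800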

My eform solution is very simple. It uses standard HTML forms, because there are already many resources for creating them. I standardize the forms with a few rules to reduce complexity. I use the OpenIGOR installation we already have to save them, and just added a new document type. So half of the problem is solved without lifting a finger. Here are some of the remaining design elements:

  • A form itself is a template, like a Platonic ideal. It belongs to a group responsible for its maintenance, like Human Resources.
  • Form elements (inputs) will have standardized names to enable global searches. So a person's first name will always be an input called "First".
  • Form data is created when someone fills out a form, either anonymously or as a logged-in user.
  • Form data is stored in simple text files. Updates are appended with a time stamp, so the whole history of the form data will always exist (a minimal sketch of this appears just after the list).
  • The owner of the form data is the only one who can update data or take actions like deleting it.
  • There is view-only access that can be sent as a hyperlink, and exists automatically for anonymous submissions to a group.
  • Anonymous form data comes to the sponsor group to be claimed. 
  • All of the form data files viewable to a user are displayed on a simple interface that already existed to list paper forms. A sample is shown below.
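To make the storage idea concrete, here is a minimal sketch of the append-with-a-timestamp scheme described in the list. The file layout, function name, and one-JSON-record-per-line format are illustrative assumptions, not the actual OpenIGOR integration:

    import json
    import time
    from pathlib import Path

    DATA_DIR = Path("form_data")  # hypothetical location for the plain text files

    def append_form_update(form_id, fields, user="anonymous"):
        """Append one time-stamped update to a form's history file.
        Nothing is ever overwritten, so the whole history is preserved."""
        DATA_DIR.mkdir(exist_ok=True)
        record = {
            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
            "user": user,
            "fields": fields,  # standardized input names, e.g. "First"
        }
        with open(DATA_DIR / (form_id + ".txt"), "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # An anonymous submission, then an update by the owner after claiming it.
    append_form_update("hr-test-form-001", {"First": "Test", "Last": "Data"})
    append_form_update("hr-test-form-001", {"First": "Test", "Last": "Data", "Phone": "555-0100"}, user="owner")

The appeal is the simplicity: appends are cheap, nothing is ever lost, and any text tool can read the history.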


A sample of the form is shown below, with data filled in from the stored file and the transaction history displayed. This is all test data.




Refactoring 
One of the nice things about this project is that it has led me to ask "why?" with respect to current processes. Rather than blindly reproducing some paper chase with an electronic version, it's good to step back and see if there's not a better way. For example, at the end of a term, adjuncts have to chase around to about eight departments to get signatures to show they've turned in grades, evaluations, etc., before they can pick up a check. This is a pain for everybody. Why not turn that process 90 degrees and simply create a global approval table that can be updated by each of those eight administrators? This would cut out more than half the work, and allow the process to pick out only those adjuncts who have some problem or another in compliance.
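Here is a minimal sketch of what such a global approval table could look like, just to fix the idea. The department names and data are invented for illustration:

    # One row per adjunct, one cell per required sign-off (department names made up).
    DEPARTMENTS = ["Registrar", "Library", "IT", "Bookstore",
                   "Business Office", "Dean's Office", "Department Chair", "Assessment"]

    approvals = {
        "Adjunct A": {d: True for d in DEPARTMENTS},
        "Adjunct B": {d: (d != "Library") for d in DEPARTMENTS},  # one item outstanding
    }

    def needs_attention(table):
        """Return only the adjuncts with at least one missing sign-off."""
        return {name: [d for d, ok in row.items() if not ok]
                for name, row in table.items() if not all(row.values())}

    print(needs_attention(approvals))  # {'Adjunct B': ['Library']}

Each of the eight administrators updates one column, and the process only has to chase the exceptions.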

Of course, this is my first week with machete in hand, looking at the field of wild vines. We'll see how sore I am after the week is over, and the users have tried out the prototype. Stay tuned.

Sunday, September 12, 2010

Assessment Philosophy Prezi

I put together a Prezi for an upcoming talk, rather than digging out some tired powerpoint slides. If you don't know Prezi, check it out. There's a very reasonable educational license for the full product, but you get a lot for free.

The idea is to describe the yin and yang character of assessment: the scientific ambition of measuring and demonstrating improvement in learning versus a more modest aim to create a culture that values learning and pays attention to what observation is telling us. In the presentation I term these "Demonstrated Improvement" versus "Intentional Evolution." There are techniques and language that belong to both, and in implementation, some that fall in between. This last category would include things like student portfolios, which can be used for rubricked (if that's a word), sliced and diced number crunching, or messier assessment through observation and discussion.

I intend to do a voice-over for this thing when I get the chance. That's new to me, and our web guy recommended a product from TechSmith called Camtasia that I'll try out. I use the free version of their Jing all the time.

Here's the Prezi:


Saturday, September 11, 2010

Suggestions for an Outcomes Assessment Strategy

Introduction
This outline is based on my own idiosyncratic view of assessment as a practitioner. Some of it clashes with what you will read elsewhere. Please read this as ideas to consider, not a comprehensive plan for success. It’s important to get other perspectives and begin to network with others in the business. The ASSESS-L listserv and conferences like the IUPUI Assessment Institute and the Atlantic Assessment Conference are good places to start. The SACS annual meeting is good, but can be frightening because of the panic pheromones in the air. Don’t miss the roundtable discussions at the conferences—get there very early because the assessment tables fill up first. There are also many books on the subject. [Note: edited 9/12/10]

I. Initialization of the Plan
  • The president and academic vice president have to visibly be behind the project. Describe to the board what you are planning to do. Just as important is that the executives trust the implementation team to do the job, and not be tempted to micromanage.
    • What to expect:
      • Successful SACS accreditation
      • Better faculty engagement with teaching, leading to improvements
      • Cultural change that incorporates language of your stated outcomes (includes administration, faculty, students, prospective students if you want)
      • Understandable reports that include what was learned from observations and what actions were taken because of it.
      • A long slow evolutionary process of improvement.
    • What not to expect:
      • Learning measurements that can be used for determining faculty or program performance.
      • One-off solution. It requires sustained attention, like growing an orchid. It’s just as finicky. Celebrate every blossom and nurse the brown bits.
      • Unequivocal proof of learning achievement
      • A “microwaved” solution to learning outcomes assessment. The garden analogy is a good one: this stage is about picking the right seeds and location.

  • Gain the faculty’s trust. Without them all is lost.
    • Form a small group of some faculty who are receptive or already doing assessment. Include at least one open-minded skeptic to avoid group-think.
    • The leader can be a faculty member with release time or a full-time admin, but this person should teach at least one class a year. It has to be someone the faculty respects, and someone who’s not yet sold on assessment may be a good bet.
    • Make sure each program has a program coordinator, per SACS, and consider making each of these responsible for making sure assessment gets done. Or maybe it’s department chairs, but someone has to be responsible for activities at the program level. I’ll just assume it’s coordinators for simplicity.
  • Figure out how you’re going to organize documentation. An institutional archive is a wonderful thing. Librarians can be of great help here.
    • Keep reports organized, at a minimum by year and program.
    • Accumulate and save as much original student work as you can. A dropbox works wonders. More complex are ePortfolios and learning management systems. There’s nothing like authentic work to analyze when you need it. Note that the student work is much more valuable when associated with the assignment, so figure out how to save these too. This can be an IT/ library project.

  • Coordinate with your SACS liaison and other accreditation processes. Make sure timelines and requirements are covered. Set some milestones.

  • Consider re-visiting how faculty teaching evaluation is done. Linking assessment directly to evaluation is a bad idea (because there is incentive to cheat), but letting the faculty themselves use assessment results and improvements in their presentations for merit can be powerful. In other words, bottom-up rather than top-down use will work best (e.g. teaching portfolios).

  • Build a small library of assessment books and links, and read up on what other people do. Read critically, though, and don’t believe that things are as easy or simple as authors sometimes make them out to be.
II. Implementation of the Plan
This is mostly an assignment for your assessment working group, with appropriate input from the VP level (e.g. #2 below).
  • You probably already have a list of learning objectives for general education, the institutional mission, programs, etc. Do an inventory of these and organize them. There are different types of goals, including:
    • Skills like communication or thinking
    • Content: facts, methods, ways of doing things in a discipline
    • Non-cognitives: self-evaluation, attitudes, etc.
    • Exposure: e.g. “every student will study abroad”
    • Beliefs: could be religious or secular, depending on your mission

  • One of the tenets of TQM, which is what the SACS effectiveness model is based on, is that lower-level goals should feed up to higher goals. In practice, trying to make this fit with learning outcomes can be a distraction. It creates a whole bunch of bureaucracy and reporting of dubious value. But you can attempt it if you want; it will just make the rest of the work twice as hard.

  • If you can simplify your goals list by eliminating some, that will help focus the effort. In any event, don’t expect all goals to be evaluated all the time. You can simply “park” some goals for now if you want. If you don’t have many learning outcomes, this is great because you don’t have to deal with an existing mess. Figure out which goals are the most important to assess at the current time.

  • Help programs develop or reboot their plans:

    • Stay away from “critical thinking” like the plague. Thinking skills are great, but pick ones that faculty can agree on, like (probably) deductive reasoning, inductive reasoning, evaluating information for credibility, and even creativity. If you’re already stuck with critical thinking as a goal, consider defining it as two or three more specific types of thinking that are easier to deal with.

    • Provide structure for programs so that they are parallel. Here’s a sample list:
      • Content-related outcomes for the discipline
      • Non-cognitive outcomes for the discipline (e.g. confidence)
      • General education outcomes as they relate to the major
      • A technology-related outcome (to help with the SACS standard)
      • Applicable QEP outcomes.

    • Have meetings with programs or related groups of programs to discover their most important goals in the categories listed above. This can be a lot of fun, and should immediately be seen as a productive exercise. It takes a little practice for the facilitator to learn how to move the conversation along without getting hung up on technicalities. It will take at least three meetings with a good group to accomplish the following (in order):
      • A list of broad goals important to the faculty for students to accomplish. It’s best to start from scratch unless they have a good assessment program already up and running. Get them to tell you what they believe in so you don’t have to convince them later.
      • A map showing where these things appear in the present curriculum. (This is where ideas for change will already start appearing—document it.)
      • A description of how we know if the students accomplish the goals. This includes assessment and documentation. Avoid the temptation to extract promises that assessment will happen all the time, because it won’t.
      • Develop assessments that are so integral to courses that they are natural, and perhaps already even happening. Assessments should be learning activities if at all possible.
      • Share and cross-pollinate ideas across areas, for example by inviting “outsiders” to sit in on some of these sessions. That way you can develop others to run similar sessions. It’s too much for one person to do it all.

    • Develop or find (e.g. from AAC&U) rubrics as it makes sense. Don’t go crazy with rubrics—they can be as harmful as helpful, just like electricity. Rubrics have scales to show accomplishment. It’s essential to get this right. If you can, tie the rubric accomplishment levels to the career of your students. For example, the scale might be “developmental,” “first-year,” “second-year,” “graduate,” for a two-year degree. That would be suitable for writing effectiveness, for example. Faculty find this natural—they have an implicit understanding of what to expect out of students of relative maturity. Wouldn’t you like to know how many of your graduates were still doing pre-college level work when they walked? [Note: you will find lots of people who don’t see it this way, and prefer scales like “underperform” to “overperform.” The problem for me with these is that you often can’t see progress. A good student may always overperform, and yet still progress to a higher level of performance.]

      It’s not always suitable to use such a scale, of course. For example, I assess how hard students work (a non-cognitive), and faculty rate on a scale like “minimal” to “very hard.”

    • If your faculty wants to use external standardized tests, make sure that the content on the test matches the curriculum. Especially for small programs, these tests are better for making external reviewers happy than actually being of use. Remember that if it’s not a learning experience, it’s probably a poor assessment. Generally, try to avoid external tests if you can. They are expensive, very hard to administer, generally not learning experiences, the faculty don’t “own” the content, and the results are often not detailed enough to know with any precision what to do to make improvements.

    • Make sure everyone knows your plan for keeping documentation: how archiving of reports, student work, assignments, etc. is to be done. Set reasonable expectations, but think about what you will want to have on hand when the next SACS team visits, and (more important) what kinds of information you may want to look at retrospectively in five years. Retrospective analysis is very powerful for finding predictors of success. For example, suppose you use the CLAQWA writing aid one year and then stop. Two years later you might want to regress graduation on that treatment (logistic regression) to see if those students’ success rates are statistically linked to the writing aid (a rough sketch of this appears at the end of this section). Okay, this is a little far-fetched, but the truth is that you never know what that original, authentic data will be useful for, so save it and organize it if you can.

  • Avoid big standardized tests of gen ed skills like the aforementioned plague. They will only lead to heartbreak. The only exception to this is if you want to cover your bases for an accreditation report by using one of these. Some reviewers will see this as a meaningful effort, so it might help with SACS section 3.5.1.

  • Faculty may want you to talk about validity and reliability or other technicalities, generally as an obstruction. Read about this stuff to get acquainted, but don’t let it rule your life. It’s a common mistake to say “the XYZ test is proven valid.” No test is valid. Only propositions can be valid, like “Tatiana’s score of 78 means she can conjugate Spanish verbs with some proficiency.” As such, validity is very much a local concern. If assessments stay close to authentic classroom work, the faculty will believe appropriate statements valid, and faculty belief is at a premium in this venture. Other objections may come in the form of “has assessment been proven to improve learning?” This is very difficult to answer if you take the scientific approach of trying to prove things. Selling an evolutionary “assessment is teaching while paying attention” approach is easier. For example, you can ask what improvements such-and-such department has made in the last year. These are always going on: curricular change, new labs or experiences, etc. Then work backwards from the change with a line of questioning: why did you make the change? What made you notice there was an opportunity for improvement? What is the ultimate goal? This is just the assessment/improvement process in reverse. All we’re trying to do is organize and document what already happens naturally, and intentionally use this powerful force for good, which otherwise is more of a random walk. A more serious objection to consider is “where am I going to get the time to do this?” Here, the administration can help in a variety of ways by carving out strategic release time, summer stipends, or elimination of other committees or bureaucracy to give the assessment effort priority. These efforts will help send the message that the administration takes the effort seriously.

  • Don’t try to make the assessments too scientific. If you document part of the learning process (the most authentic assessment), it’s likely to be messy even with rubrics and whatnot. Don’t try to reduce everything to numbers. See the section on reporting for more on that. One very effective way of assessing is to get faculty together to look at raw results and discuss their experiences in this context. What worked? What didn’t? What problems are evident? This is very rich with possibilities for action. Somebody just needs to write down what happens during this meeting and archive it.
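As a footnote to the retrospective-analysis idea a few bullets back, here is a rough sketch of that kind of logistic regression. The data are entirely synthetic, and the example assumes the statsmodels package is available; it is only meant to show the shape of the analysis, not a recommended model:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic illustration only: 1 = student used the writing aid, 0 = did not,
    # plus a made-up graduation outcome loosely related to it.
    treated = rng.integers(0, 2, size=200)
    graduated = (rng.random(200) < 0.45 + 0.15 * treated).astype(int)

    X = sm.add_constant(treated)          # intercept plus treatment indicator
    result = sm.Logit(graduated, X).fit(disp=False)
    print(result.summary())               # check the treatment coefficient and its p-value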
III. Development of the Plan
  • Once everything is up and running, the assessment group’s most important function is to review and give feedback on plans and results. Without regular feedback and encouragement, professional development opportunities, and recognition, the process will peter out. Set a calendar of events for assessment and reporting.

  • Make opportunities to give visibility to learning goals and results. Senior administrators should talk about it publicly, praising successful efforts, describing the big picture, committing support. A magical thing happens when you start referring to your learning outcomes directly. Example: “Across the board we have seen efforts to increase student abilities in writing effectiveness and effective speaking.” By using the vocabulary, it becomes a natural part of the culture and seeps into the way people think. Get the goals written into syllabi so students see it and hear it talked about.

  • Depending on your execution, you may have individual student outcomes reported (e.g. with the Faculty Assessment of Core Skills, below). Consider whether or not you want advisors to have access to this information to use with their advisees. For example, I noticed one time that a senior art student was getting ratings of “remedial” in creativity. It’s important that individual instructor ratings not be revealed unless it’s part of the learning experience in a current course. This shields the ratings from political pressures (“why did you give me a remedial score?”).

  • Make non-compliance a matter for administrative enforcement only as a last resort. SACS is a big stick, and you have to use it sometimes, but you don’t want a “because SACS says” culture. The odds are that if you create a process the faculty believe in, the SACS review will be fine. There are never guarantees of anything because of the randomness of peer-review, but this should improve the odds:

    • Make sure there are no majors in non-compliance. Even if there are no students in the major, do something that looks like assessment. I know it’s absurd, but go read the SACS-L listserv.

    • Assessment is only the appetizer. The test for reviewers is 1. is there a regular process that shows at least a couple years’ comprehensive effort, and 2. are there results—actions taken, changes made because of what the results were.

  • Don’t accept “actions” like “we are continuing to monitor” or simply more plans to do something. Any real process will produce evolutionary change in the form of curricular proposals, classroom experiences, teaching methods, testing, technology, etc. Extracting the reports that document this is always a battle, both for quantity and quality, but it has to be fought. Think like a SACS reviewer when you look at them. Ask your SACS VP to be a reviewer once you have some confidence.

  • Don’t expect to be able to scientifically show that learning is improving because of changes. There are innumerable tests and data management products, “value-added” statistics, and other nonsense that will only frustrate you if you buy into it. It’s more important that faculty believe that learning is improving, that the changes are (probably) for the better. This is no excuse, of course, not to look at education literature for best practices and studies that do highlight some processes as better than others. Take them with a grain of salt, but try them out.

  • For any goals that are institution-wide, consider using a “Faculty Assessment of Core Skills” approach. You can read more about that at http://www.coker.edu/assessment/elephant.pdf.
IV. Reporting Outcomes
  • You can have the IR office help with report statistics if you want, but make sure they know the game plan first, or faculty may get two different messages. Generally with reporting, keep it simple and to the point. Here’s a sample list of headings for the sections:
    • Learning Outcome: (a statement of it)
    • Assessment Method: (a statement of how it’s done, with attached forms if any)
    • Results and Analysis: (what did faculty observe or glean through analysis or focused discussion?)
    • Actions and Improvements: (what did they do?)

  • Avoid condensing data down more than you have to. Averages are often too abstract. It’s like taking a ripe strawberry and boiling it before you eat it. Presenting averages to an external body like SACS is fine—they expect it. All the better if the graphs go up. But internally, where you want to know meaning, don’t average unless it’s the only logical thing to do.

  • Instead of averages, report out frequencies when you can. For example, rather than saying the average test score was 3.4 this year as opposed to 3.3 last year, say the percentage of students reaching the “acceptable” level went from 60% to 65% or whatever. This is a great technique that will instantly improve the usefulness of reports (a tiny example of the arithmetic appears at the end of this section).

  • Remember that assessment reports can’t be used to punish programs or individuals administratively. The reason is simple: the instant you start doing that, every report thereafter will be suspect. The whole system relies on trust to work, and that goes in both directions. Instead, use results administratively as a reason for a conversation, especially around budget time.

  • If you standardized goals by category as suggested earlier, it makes it easy to present the whole mass of reports to SACS in an organized and well-formatted form. Without some structure, you’ll have a pile of disjointed formats and styles that will immediately turn off whoever has to look at them.
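To close out the reporting advice, the frequency calculation mentioned above is a one-liner. The ratings and the “acceptable” cutoff here are made up for illustration:

    # Hypothetical rubric ratings on a 1-5 scale; 3 or better counts as "acceptable".
    last_year = [2, 3, 4, 3, 2, 5, 3, 1, 2, 3]
    this_year = [3, 4, 4, 3, 2, 5, 3, 1, 2, 3]

    def pct_acceptable(ratings, cutoff=3):
        return 100.0 * sum(r >= cutoff for r in ratings) / len(ratings)

    print(pct_acceptable(last_year), "->", pct_acceptable(this_year))  # 60.0 -> 70.0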

Thursday, September 09, 2010

Raising the Standard

My cup hath run over this fall, with only enough pauses in the storm of activity to give me false hope that things are finally getting back to normal. Au contraire... in addition to the normal "whack a mole" quotidian routine, I have an ambitious web site overhaul to manage, a form solution to build into our implementation of OpenIGOR, two grants to write, one paper to get to a publisher, another to finish (it keeps growing and growing), and several other projects that will have to wait. Oh, and I'm teaching a Calc II section. This last is the highlight of my week. Despite a one-hour pre-class review and a 75-minute class, I always come out of it with more energy than I went in with (starting at 3pm, mind you!).

But the idea that struck me last evening has nothing to do with these. Rather, it focuses on the problem of how to more efficiently manage a university's enrollment while it tries to raise standards. We are in the middle of such a transformation right now. The usual route is to raise admissions standards. This seems like the obvious thing to do. But HS GPA and SAT/ACT are blunt instruments that explain less than half the variance in first year grades. For less selective institutions this is particularly acute because the high non-completion rate represents a huge waste of time and money for those students who don't finish. At the same time, it represents lost opportunity for the university, because those seats potentially could have been filled by others who tested lower but would have performed better. Finding better predictors is one approach, and we are developing non-cognitive instruments to do that.

However, there is another idea I'd like to share. What if an institution were to accept that predicting success is not all that effective, and adopt the "try and see" approach? This is how it might work:

  1. Adopt lower admissions standards, but still try to predict as best as possible
  2. Admit double the number of freshmen you would actually want to continue
  3. Charge half price for the first year
  4. Advertise and implement high expectations in the classroom, and foster high quality teaching and support programs: give students every chance, but accept no compromises or excuses for non-performance
  5. Accept a 50% attrition rate from the freshman to sophomore year. For those who are leaving, try to identify them early and have a Plan B outlined (community college, perhaps)
  6. Maintain high standards throughout the rest of the curriculum.
  7. Advertise all of the above with a banner like: we give you the chance, make it affordable, and expect great things.
This would be a radical change, and would require a faculty that endorses the plan, upholds high standards, and constantly works to improve teaching and learning. The advantage of this approach is that you don't have to whittle down enrollment to increase standards--just the opposite.
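For what it's worth, the first-year tuition arithmetic under this scheme comes out roughly even. Here's a quick sketch with invented figures (substitute your own tuition and cohort size, and note it ignores the extra cost of serving twice as many first-year students):

    # Invented figures for illustration only.
    tuition = 20000          # full annual price
    target_cohort = 500      # the class you actually want after year one

    # Status quo: admit the target number at full price.
    status_quo_revenue = target_cohort * tuition

    # "Try and see": admit twice as many at half price, expect ~50% attrition.
    admits = 2 * target_cohort
    try_and_see_revenue = admits * (tuition / 2)
    survivors = admits // 2                  # back to the target cohort for year two

    print(status_quo_revenue, try_and_see_revenue, survivors)  # 10000000 10000000.0 500

First-year revenue is roughly a wash; the difference is in how many students get the chance.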