Tuesday, October 12, 2010

Assessing Writing

Over the last week I've had the pleasure of revisiting the assessment plans we put in place at my prior institution, as it prepares to submit its SACS fifth-year report. I pitched in by doing some number crunching. The topic of the Quality Enhancement Plan (a SACS requirement for a program to improve teaching and learning) is writing effectiveness. This is a popular topic for QEPs, and I tried to make a list of such institutions a while back. A common problem is how to assess success.

In this case, the program spanned three initiatives with a range of assessment activities, including the NSSE, internal surveys, and qualitative assessments. Several instruments touch on writing, but I'm just going to focus on the "big picture" one here: the Faculty Assessment of Core Skills (FACS). I've written about the general method on this blog many times, and you can find an overview in the manuscript Assessing the Elephant, although the most recent results aren't in there yet.

The FACS surveys faculty opinions about individual students' writing abilities, provided they have had the opportunity to observe that writing (they don't have to teach it or even count it toward a grade). The reporting scale is tied to the idealized college career (pre-college work, fresh/soph-level work, jr/sr-level work, and work at the level we expect of our graduates), and is represented here on a 0-3 point scale. Getting the data is trivially easy and basically free. We started in fall 2003, and by now there are over 25,000 individual observations recorded on over 3,000 students (about a fourth of these on writing).
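
For the curious, here is roughly how that raw data might be tabulated. This is only a sketch: the file and column names (facs_writing.csv, student_id, semester, rating) are made up for illustration, not the actual FACS export.

```python
import pandas as pd

# Hypothetical flat file: one row per faculty observation of a student in a
# semester, with a writing rating on the 0-3 scale described above.
facs = pd.read_csv("facs_writing.csv")  # columns: student_id, semester, rating

# Average the ratings per student per semester, since several faculty may
# rate the same student in the same term.
student_sem = (facs.groupby(["student_id", "semester"])["rating"]
                   .mean()
                   .reset_index())
print(student_sem.head())
```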


The graph above shows three cohorts, controlled for survivorship, each over four years. The error bars are two standard errors. One trend is that the first two years show plateaus, after which growth looks roughly linear. To get a sense of the quality of the data, I also graphed the average minimum and maximum ratings, combining the three cohorts.
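
Here's a sketch of how those cohort curves might be computed, assuming that "controlled for survivorship" means restricting each cohort to students rated in all eight semesters. The table and column names continue the made-up ones from the sketch above, with an added cohort column and semesters coded 1-8.

```python
import numpy as np
import pandas as pd

# Hypothetical student-by-semester table: cohort, student_id, semester (1-8), rating.
student_sem = pd.read_csv("student_semester_ratings.csv")

# Restrict to "survivors": students with a rating in every one of the eight semesters.
counts = student_sem.groupby("student_id")["semester"].nunique()
survivors = counts[counts == 8].index
surv = student_sem[student_sem["student_id"].isin(survivors)]

# Mean rating and two-standard-error bars per cohort per semester.
g = surv.groupby(["cohort", "semester"])["rating"]
means = g.mean()
two_se = 2 * g.std(ddof=1) / np.sqrt(g.count())
print(pd.DataFrame({"mean": means, "2se": two_se}))
```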

This shows a consistent half-point average difference across eight semesters of attendance. That's not bad, and reliability statistics show that raters match exactly about half the time, far more often than chance alone would produce. At my current institution, I've been getting even better numbers for some reason.
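
That exact-match figure can be checked with a few lines of code: pair up every two ratings of the same student in the same semester, count the matches, and compare with the agreement you would expect by chance from the overall rating distribution (the same baseline used in Cohen's kappa). Again, the table and column names here are hypothetical.

```python
from itertools import combinations
import pandas as pd

facs = pd.read_csv("facs_writing.csv")  # hypothetical: student_id, semester, rating

matches, pairs = 0, 0
for _, grp in facs.groupby(["student_id", "semester"]):
    for a, b in combinations(grp["rating"], 2):
        pairs += 1
        matches += int(a == b)
observed = matches / pairs

# Chance agreement if two raters independently drew from the overall rating
# distribution (about 0.25 if ratings were spread evenly over the four points).
p = facs["rating"].value_counts(normalize=True)
expected = (p ** 2).sum()
print(f"observed exact agreement: {observed:.2f}, chance baseline: {expected:.2f}")
```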

Although these graphs are nice, they don't actually show the effect of the QEP. That is, how do we know this growth wasn't happening anyway? This is the problem that will bedevil most QEP assessment efforts. In this case, one of the initiatives was to increase the use and quality of the college's writing center.


This graph isn't mine; I took it with permission from the draft report. It shows the dramatic growth in writing center use. (The student body is around 1,100, for comparison.) Writing center use also gives us a kind of comparison group for studying growth in writing skill. It's not perfect, because conventional wisdom is that the students who use the writing center tend to be those who are told they need to, meaning their skills are perceived to be lower than those of their peers in general. We can compare users with non-users using FACS ratings:
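
A sketch of that comparison follows, assuming a separate (hypothetical) table of writing center visits keyed by student ID. Any student who appears there at all counts as a "user" here, which may be cruder than the actual analysis.

```python
import matplotlib.pyplot as plt
import pandas as pd

student_sem = pd.read_csv("student_semester_ratings.csv")  # hypothetical, as above
visits = pd.read_csv("writing_center_visits.csv")          # hypothetical: student_id, visit_date

users = set(visits["student_id"])
student_sem["wc_user"] = student_sem["student_id"].isin(users)

# Mean FACS writing rating by semester for users vs. non-users.
comparison = (student_sem.groupby(["semester", "wc_user"])["rating"]
                         .mean()
                         .unstack("wc_user"))
print(comparison)
comparison.plot(marker="o")  # roughly reproduces the kind of graph shown here
plt.show()
```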


This shows that writing center users did indeed start with equal or slightly lower assessed skill, but caught up with and then exceeded their peers over four years. It gets even more interesting if we disaggregate by entering (high school) GPA.

These "B"-and-better students are the majority, and the graph shows that, in fact, use of the writing center corresponds to their being rated as better writers within a year, and that the advantage persists. For the less-prepared students (per the HSGPA predictor), on the other hand, the story is different.


Here, according to FACS scores, the conventional wisdom is true: these students really do start off with a lower perceived skill level, and it takes a year to reach near-parity with their peers. But by the fourth year, they have surpassed them. Note the numbers on the scale: even with the jump at the end, these students are rated far below their HSGPA>3 peers, writing center or not.

The slopes of the lines show something we've noticed before: a so-called Matthew Effect, whereby the most able students learn the fastest. Compare the blue lines (non-writing-center users) in the two graphs above. The higher-HSGPA students increased by .81, whereas the lower-HSGPA group increased by only .32. Use of the writing center more than doubled this increase for the latter group, to .84.
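
The gains themselves are just the last-semester mean minus the first-semester mean within each group. Here's how that arithmetic might look on the same hypothetical table, with an added hsgpa_band column ("high" for HSGPA above 3, "low" otherwise); the grouping columns are my assumptions, not the report's.

```python
import pandas as pd

# Hypothetical table: student_id, semester (1-8), rating, wc_user, hsgpa_band.
student_sem = pd.read_csv("student_semester_ratings.csv")

# Mean rating per (HSGPA band, writing-center use, semester); semesters sort 1-8
# within each group, so first/last give the first- and last-semester means.
trajectories = (student_sem.groupby(["hsgpa_band", "wc_user", "semester"])["rating"]
                           .mean())

first = trajectories.groupby(level=["hsgpa_band", "wc_user"]).first()
last = trajectories.groupby(level=["hsgpa_band", "wc_user"]).last()
print(last - first)  # compare with the .81, .32, and .84 gains quoted above
```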

I'm generally skeptical of assigning causes and effects without a lot more information, but these results are very suggestive, and certainly do nothing to contradict a conclusion that writing center use is pushing the better students to higher performance, while enabling the less-prepared students to steadily and dramatically increase their skill.
