Saturday, January 28, 2006

Standardization of Higher Education

I came across a “Memo from the Chairman” by Charles Miller, who seems to be fond of standardized testing. He’s now the Chairman of the U.S. Secretary of Education’s Commission on the Future of Higher Education, created by Education Secretary Margaret Spellings.


Miller is obviously of the opinion that there’s not enough oversight of higher education:


[W]e need to improve, and even fix, current accountability processes, such as accreditation, to ensure that our colleges and universities are providing the highest quality education to their students.


His solution?


Very recently, new testing instruments have been developed which measure an important set of skills to be acquired in college: critical thinking, analytic reasoning, problem solving, and written communications.


And:


An evaluation of these new testing regimes provides evidence of a significant advancement in measuring student learning — especially in measuring the attainment of skills most needed in the future.


I’m not sure how he knows what’s going to be needed in the future. It’s not footnoted. But he lists his favorite standardized tests, including:

A multi-year trial by the Rand Corporation, which included 122 higher education institutions, led to the development of a test measuring critical thinking, analytic reasoning and other skills. As a result of these efforts, a new entity called Collegiate Learning Assessment has been formed by researchers involved and the tests will now be further developed and marketed widely.

That sounded familiar. Sure enough, he’s talking about the same standardized test I found a few days ago and wrote a post about. The article was by Richard H. Hersh, until recently the president of Trinity College (here’s an article about his resignation).


Hersh is now a senior fellow at the Council for Aid to Education (CAE). In the November 2005 Atlantic Monthly article I wrote about, Hersh subtitles his piece with "It's time to put an end to 'faith-based' acceptance of higher education's quality." He concludes that what we need is a standardized test of which he is co-director: the Collegiate Learning Assessment (CLA). The CLA was cited in a 2001 report called Measuring Up, which compares higher education achievement across states and laments the lack of a national measure of learning.


The president of CAE is Roger Benjamin. Here’s an article he wrote in the same vein as the Atlantic one, describing the CLA. He notes that:


Student responses can be graded by a trained reader or by a computer.


I have to comment on that. We tried out ETS’s Criterion computerized essay grader last year. It was interesting, but overly simplistic, and we decided it didn’t suit our needs. One problem with timed essays is that we try to teach students that they shouldn’t be in a hurry when they write: revise, revise, revise is the mantra. So how does that square with a timed assessment? With difficulty, I imagine.


Chairman Miller and the CLA’s Benjamin both talked at a Columbia University press briefing on the future of higher education. Miller is listed as being associated with Meridian National, Inc.


It seems to me that public policy is being steered toward national or state-wide adoption of standardized testing for higher education.


Other links:


Testimony by Miller, citing Benjamin and Hersh


Article on ‘value added assessment’ by Benjamin and Hersh in AACU’s Peer Review


Article by Benjamin from CAE about new assessment techniques.


CAE website


Hersh slams higher education on PBS.

Sunday, January 22, 2006

Visualizing Retention

Here's a great way to take a look at your institution's retention history using pivot tables. First, you'll need data, of course. I use grade records because they seem to be the 'gold standard' for accuracy. It's difficult to keep financial aid records cleaned up, and financial records can be very complex. Of course, any student who leaves without receiving a grade doesn't show up on this particular radar. But you can use any method you like. Flag each student in a database or spreadsheet with exactly one of: graduated, attritted, still attending. One of those three things must be true. In my case, I use Perl scripts to process the raw data, which is filtered to a flat file and then loaded into a database.
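To make that concrete, here's a bare-bones sketch of the flagging step in Perl. The file names and column layout (a comma-separated file of student_id,year,grade records plus a separate list of graduated student IDs) are made up for illustration; my real scripts are tied to our own data formats and are considerably messier.

    #!/usr/bin/perl
    # Flag each student as Grad, Attrit, or Attend based on grade records.
    # Input files and columns are illustrative, not my actual layout.
    use strict;
    use warnings;

    # IDs of students who have graduated, one per line.
    my %graduated;
    open my $g, '<', 'graduates.txt' or die "graduates.txt: $!";
    while (<$g>) { chomp; $graduated{$_} = 1 }
    close $g;

    # Grade records: student_id,year,grade
    my (%first_year, %last_year);
    open my $f, '<', 'grades.csv' or die "grades.csv: $!";
    while (<$f>) {
        chomp;
        my ($id, $year, $grade) = split /,/;
        next unless defined $year && $year =~ /^\d{4}$/;   # skip header/bad rows
        $first_year{$id} = $year if !exists $first_year{$id} || $year < $first_year{$id};
        $last_year{$id}  = $year if !exists $last_year{$id}  || $year > $last_year{$id};
    }
    close $f;

    # Anyone graded in the current year counts as still attending.
    my $current_year = 2006;
    for my $id (sort keys %first_year) {
        my $status = $graduated{$id}                  ? 'Grad'
                   : $last_year{$id} >= $current_year ? 'Attend'
                   :                                    'Attrit';
        print join(',', $id, $first_year{$id}, $status), "\n";
    }

The output (student ID, start year, status) is what gets loaded into the database and eventually pulled down into Excel.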

Here's a small part of a spreadsheet downloaded from the database, to illustrate.

The year is the first (calendar) year that the student got a grade. All of the students shown left before graduation.


There are lots of other data columns, including financial aid, GPA, and the like. You can obviously add what's important to you. Once the data set is ready in Excel, click Data->PivotTable. If you haven't run one of these before, go learn how. Create a chart that uses Grad, Attrit, and Attend as data items and set the field properties to Average, using the Percent display option. Drag the start year over to the left, and you should create something like this:
You can sort the data columns however you like. The one above shows graduates in blue, non-returners in yellow, and students still in attendance in magenta. This latter band widens out on the right because those are recent enrollees. The year of the 'class' appears at the bottom.
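If you want to double-check the chart's numbers outside of Excel, the same percentages fall out of a few more lines of Perl run against the flagged file. Again, the file name and column order here are just the ones from the sketch above, not a prescription.

    #!/usr/bin/perl
    # Tally percent Grad / Attrit / Attend by start year -- a sanity check
    # on the pivot chart. Expects lines of student_id,start_year,status.
    use strict;
    use warnings;

    my (%count, %total);
    open my $f, '<', 'flagged.csv' or die "flagged.csv: $!";
    while (<$f>) {
        chomp;
        my ($id, $year, $status) = split /,/;
        next unless defined $status;
        $count{$year}{$status}++;
        $total{$year}++;
    }
    close $f;

    printf "%-6s %8s %8s %8s\n", 'Year', 'Grad', 'Attrit', 'Attend';
    for my $year (sort keys %total) {
        printf "%-6s %7.1f%% %7.1f%% %7.1f%%\n", $year,
            map { 100 * ($count{$year}{$_} || 0) / $total{$year} }
            qw(Grad Attrit Attend);
    }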

Once the chart is up and running, you can use page fields or other options to narrow the scope of samples to a particular major or student demographic. You can also see side-by-side comparisons for males and females, for example.

I create two of these charts. One shows percentages, like the one above. The other shows absolute numbers of students so you can see total enrollment for the group selected.

Saturday, January 14, 2006

CopyMyth

Part of being a library manager is dealing with copyright issues. It's complicated, and here's a page to sort out some of the truth from myth.

Saturday, January 07, 2006

What Does College Teach?

In a November 2005 Atlantic Monthly article, Richard Hersh subtitles his piece with "It's time to put an end to 'faith-based' acceptance of higher education's quality." The thesis is that colleges and universities don't actually know, because they can't measure, whether or not students are learning. After raising the question in rather breathless terms, the author summarizes the situation in assessment, and then gets predictably bogged down in trying to propose a solution.

Hersh starts with the question "What makes your college worth $35,000 a year?" I suppose this was supposed to be provocative, but he could hardly have chosen an easier question to answer. He says this is a hard question for college presidents to answer. Why?

About 5 seconds' search on Google comes up with a suitable response:

Any way you measure it, a college degree is the best investment of your life. In today's dollars, a bachelor's degree is worth more than $2.1 million over 40 years. "Having that post-secondary diploma can make such a difference in lifetime earnings," said Washington, D.C.-based Employment Policy Foundation President Ed Potter.
Never mind the fact that few students pay full sticker price. This is an ROI that any CFO would fantasize about. But read a little further and you discover that Hersh really means something else. He wants a measure of learning, not worth.

[W]e have little direct evidence of how any given school contributes to students' learning.
Apparently what he's really after is a way to rate colleges after taking student quality into account. Again, you could produce such ratings by looking at salary histories of graduates, normalized by the socio-economic status of their parents or guardians. But despite his original question, he's not really interested in economic worth but in improvement in learning.

We might well ask what the source of college graduates' extra income is. Is the marketplace so easily duped that a college degree represents nothing about the skills and abilities of graduates, and that the mere slip of sheepskin is sufficient to accrue the megabucks that accompany a bachelor's degree? That seems exceedingly unlikely. If graduates were no better than non-graduates, US companies would very quickly stop paying them a premium. So even though we haven't yet discussed direct measures of learning, there certainly is good circumstantial evidence of it.

Hersh cites four current categories of measures of college quality: actuarial data (including graduation rates, admissions selectivity and the like), expert ratings by college administrators and professors, student/alumni surveys (including NSSE), and direct assessment (grades). Hersh is openly contemptuous of grades:
For our purposes [grades and grade point averages] are nearly useless as indicators of overall educational quality--and not only because grade inflation has rendered GPAs so suspect that some corporate recruiters ask interviewees for their SAT scores instead. Grades are a matter of individual judgment, which varies wildly from class to class and school to school; they tend to reduce learning to what can be scored in short-answer form; and an A on a final exam or a term paper tells us nothing about how well a student will retain the knowledge and tools gained in coursework or apply them in novel situations.

You see what I mean about 'breathless'. Where to begin? First, there are no references to support any of this, and my own experience (about 20 years in the classroom) doesn't jibe with it. Generally speaking, 'A' students are going to be better than 'D' students. I'm trying to imagine what 'short answer' means for a class on assembly language or mathematical analysis. If we accept this paragraph as an emotional outburst rather than a rational argument, I think we can boil it down to this: the author doesn't find it acceptable that student GPAs aren't useful for comparing one school to another.

Of course they're not! The whole endeavor of trying to rank one college against another is daft--anybody who's ever been to college knows that some programs are better than others. What good would an average program quality be to anybody, even if you contrived to compute it? Moreover, such rankings are by nature one-dimensional. Imagine if we had to rank people--your friends and co-workers--on such a scale. Bob is a 4.5, but Mary is 7.2. Ridiculous.

Again, the article's list of assessments makes no mention of post-college performance in the workplace, though he alludes to it disparagingly: "What is worth learning cannot be measured, some say, or becomes evident only long after the undergraduate years are over."

In the following paragraphs, Hersh plays a switcheroo. After arguing that there's no good metric for comparing colleges against each other, he proceeds as though he's shown that no good assessment is going on even at the classroom or program level: "[C]umulative learning is rarely measured."

Huh? Anyone who's been to a conference on education lately (SACS, for example) knows that half of the sessions are devoted to this very topic. Not only are people interested in it; it's required by most accrediting bodies (all the ones I know of). After saying that cumulative learning is rarely measured, the author revises this to "[M]easuring cumulative learning hasn't been tried and found wanting: it has been found difficult and untried."

I don't know about your institution, but every academic program at mine has a capstone course sequence. For example, in mathematics, students have three semesters of independent research with a faculty advisor, during which we rate them on various skills and abilities on a scale that ranges from 'remedial' to 'graduate'. Math is like most disciplines: without cumulative learning, you simply can't progress. How could you hope to pass differential equations if you haven't got a clue about basic calculus?

Now that the author's point has wandered off into the landscape of student and program assessment (rather than college rankings), he finds some promising approaches, including portfolios. These have been around for a long time, of course, and there are some very good electronic portfolio vendors out there (just google 'electronic portfolio'). I don't know what the statistics are on usage, but a lot of schools are using them to track student work. We built our own in 2005.

We finally reach the punch line of the article: a pitch for a product that the author co-directs, the Collegiate Learning Assessment Project. The last two sevenths of the paper are devoted to this product. As a matter of fact, my copy of the article came bundled in an envelope with more advertising for the CLA surveys.

So the answer to the college ratings 'problem' is a standardized test? The accompanying literature assures us that it has been proven valid and reliable. As convenient as that is, validity is not something that can be externally determined because it depends on the question you're trying to answer by giving the test. That's a subject for another post...