Saturday, October 30, 2010

One plus Blue

I've written before about mixing up dimensions. It seems to happen all the time. Here's an example from a nicely done learning outcomes site I came across. This sort of thing looks good to the casual reviewer, but I imagine it isn't much use for actually finding opportunities for improvement.


There's actually a note at the bottom saying that the scales have no relation to one another. And yet they all get added up?

It would help if we used units, even if they were more imaginary than real. For example, is the math score the percentage of correct responses on some kind of math test (obviously not in this case)? If it's a total score, what is the maximum? Is it a rate of learning or an absolute measure? And so on.

Adding them up is clearly absurd, but we do it all the time: that's what grade averages are, too.
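
To see why, here's a toy example (hypothetical numbers, in Python): whether student A or student B "wins" the summed score depends entirely on the arbitrary units of each scale.

    # Two students scored on two unrelated scales (hypothetical numbers).
    a = {"math": 60, "engagement": 3.5}
    b = {"math": 80, "engagement": 1.0}

    def total(s):
        return s["math"] + s["engagement"]

    print(total(a), total(b))      # 63.5 vs 81.0 -- B "wins"

    # Rescale engagement to a 0-100 scale (same information, times 20):
    for s in (a, b):
        s["engagement"] *= 20

    print(total(a), total(b))      # 130.0 vs 100.0 -- now A "wins"

Same data, opposite ranking. Without units, the sum means whatever the scale-maker accidentally made it mean.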

The graphs were made with Tableau, by the way. It's great for generating reports.

Thursday, October 28, 2010

Creating Word Clouds

I just came across Wordle, which creates word clouds out of a text source. It's very neat, flexible, and free. As an example, I took a chapter from a book I wrote on authentic assessment and pasted it into the Wordle input. Here's the result.


This could be used with program plans or reports, curriculum maps, or other input sources to create attention-getters that also have some content--the larger words are the ones most frequent in the source.

You can change fonts, colors, and formatting, and even eliminate words (by clicking on them) to adjust the output.
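
Under the hood, a word cloud is just a frequency count. Wordle has its own tokenizer and stop-word list, but a rough Python equivalent (with a made-up file name and a token stop-word list) is only a few lines:

    from collections import Counter
    import re

    def word_weights(text, top=50):
        """Count word frequencies the way a cloud sizes words."""
        words = re.findall(r"[a-z']+", text.lower())
        stop = {"the", "and", "a", "an", "of", "to", "in", "is", "it", "that"}
        return Counter(w for w in words if w not in stop).most_common(top)

    print(word_weights(open("chapter.txt").read())[:10])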

I created another one on the blog for my novel that you can see here. For that I took an entire novella and pasted it into the input.

Wednesday, October 27, 2010

The Fat Middle

There are those ideas, like Darwinian evolution or the basics of information theory or game theory, that cannot be unthought. They are powerful enough to change one's Weltanschauung, and having done so mark the mind like a growth ring. I suppose that should be the aim of a liberal arts education. We could probably do a better job of selling it that way.

Lately I've been orbiting the gravitational well of an idea that, while not on the order of calculus or market theory, makes a lot of things click into place for me personally. I call it The Fat Middle.

Most things in life are ruled by combinatorics. We experience and try to make sense of many dimensions of sensory data, which, if our brains didn't automatically compress it for us, would overwhelm us with complexity. The very molecules we're made of are combinations of more permanent bits--those atoms hypothesized by Democritus some 2,400 years ago. Our civilization comprises a vast economy with many, many moving parts. It's easy to get lost in the permutations.

Every day we figure out how to get from where we are to where we want to be, literally or metaphorically. It's that part in the middle where the opportunity lies. Not for you and me usually, but for someone who spends all day thinking about the middle. The middlemen.

Having a way to get from trying to sell a house to having sold it, for example, is valuable. So is finding someone who knows the applicable laws and contracts and is hooked into a network designed for moving real estate. The creation of such a Middle is like pushing a volatile molecule up the energy scale until it cracks open and spills out far more energy than you put in.

This Middle provides a public service, whether it's selling houses, buying a good book, or finding a job. Cities sprang up along trading routes (often on rivers) because of the value of the Middle. Paved roads were investments in the Middle where the rivers couldn't go.

Phase One: Creating the Middle

Basically, if you can find a way to be of use between a supply and a demand, you create the Middle, and this is nice because it naturally wants to be standardized. Roads are narrow, standardizing routes. Realtors may compete with one another, but they all cooperate when it comes to making a sale.

Here are some Middles:
  • Google: between you and everything on the Internet
  • Facebook: between you and your friends
  • eBay: between you and the world's flea market
  • Music companies: between you and music (pre-Internet)
  • Newspapers: between customers and businesses (pre-Internet)
  • Higher Education: between high school grads and good jobs
  • Medical Establishment: between you and medicines or medical services
The last two of these are not informational, but they still play the role of the Middle by providing access, for a price, to exclusive services and benefits. There used to be no Middle for either education or medicine: you'd simply go to a teacher and pay tuition, or take a chicken to the witch doctor, respectively. But standardization and massive organization have created a Middle to accommodate a huge demand.

Phase Two: Getting Fat

Once the Middle is standardized, it becomes a gatekeeper as well as an access path. Tolls get higher on the road, until the price is as high as the market will bear.

As an example, consider an entrepreneur who builds a bridge over a river, meeting a demand for transportation. He makes enough money on tolls to maintain the bridge and have enough left over for a nice income. But as the bridge becomes the standardized Middle, and travelers depend on it more and more, he realizes that he controls a natural monopoly. So he increases the price until total revenue peaks, fattening the Middle.
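
A toy calculation (hypothetical numbers, in Python) shows what that peak looks like: with a simple linear demand curve, revenue rises with the toll up to a point and then falls as travelers give up on the bridge.

    def crossings(toll, base=1000, sensitivity=100):
        """Daily crossings, assuming demand falls linearly with price."""
        return max(base - sensitivity * toll, 0)

    for toll in range(1, 9):
        print(toll, toll * crossings(toll))
    # Revenue climbs to a peak of 2500 at a toll of 5, then falls off:
    # past that point, higher prices drive away more traffic than they add.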

It's interesting to note what is allowed to get fat and what isn't. Power companies in the US are regulated by the government, presumably as a nod to the fact that providing electricity is a public good. Roads are mostly free to drive on, paid for indirectly through taxes. Access to water is provided at very low cost to most of us. On the other hand, access to professional medical care is not a concern of the government in the same way. Education falls in between, given the range of opportunities between public and private schools.

Uncontrolled, the Middle will make itself fat and continue to secure its position. Media outlets are particularly well suited for the latter task, since they are between news and consumers (or were until lately).

We used to pay more than $50/month for a wired telephone line. Then along came the Internet and Skype, and hey, I get that basically for free now. I can only conclude that most of that $50 was Fat Middle for the phone company. Don't cry for the phone company, though; they can still get away with charging a dime for every text message you send--all Fat Middle for them.

Those companies in the Middle can use their profits to tilt legislation their way: think of the music industry and copyright laws, the recent assault on net neutrality, or the cozy relationship between banks and regulators that helped create the overindulgence in consumer debt. I think net neutrality cannot hang on much longer because of this pressure.

Another technique to fatten the Middle is to over-promise its usefulness. Think of it as a "Gold Rush" effect. This is more suitable to some services than others. The creation and marketing of "maintenance" drugs is one example, or Monsanto's attempt to corner the seed market with one-use varieties, so you can't grow your own. Soon, Facebook will be plugged into so many facets of your personal life that you won't know how to live without it, right? A bachelor's degree is absolutely essential to getting a good job. Nobody questions that. Why?

Phase Three: Revolution

Once the Fat Middle has squeezed out every bit of extra economic value, it no longer performs a public service. This is an extreme position that few Middles may ever achieve due to the particulars of the betweenness. In the bridge story, if the owner charges so much to pass over the bridge that it would be economically more viable to take the ferry, but there are new laws outlawing such "unsafe" transportation, the fattening has ripened.

Technology has a history of dining on the Fat Middle. Toll roads are mostly gone in the US. Newspapers can't make fat revenue by cornering the advertisement market, and the music producers can't isolate musicians from their audience and charge a rich premium for access. AT&T can't charge me 30 cents a minute to talk to my parents in Illinois anymore (when that bill came, I immediately switched to another phone company. I got a letter from AT&T a couple of months later saying "we're sorry! we want you back!" Fat chance.).

The technology solution is bound to rearrange the Middle, but it will again solidify. Costs and laws around the Internet will slowly evolve under the pressure of the rich rewards dangling there, until you pay per click to Facebook and Youtube, and don't even think about starting a web business without going to the bank first. I hope I'm wrong.

The Fat Middle of Higher Education

In the July 20th Chronicle article "Learning From Socrates and Adam Smith on Financing Universities," Richard Vedder writes:
If one were to allocate faculty salaries for instruction to account for the non-instructional dimension of university service, faculty compensation for instructional services often is well under 20 percent of revenues raised by institutions, and almost never as much as 50 percent. 
 So what? The point being made here is that the ostensible principal raison d’etre of most universities—the education of our youth—is really a small part of university activities. Put differently, if the faculty salary for instruction to institutional revenue ratio were to rise to, say, 50 percent, by reducing the non-instructional dimension of university spending, the total cost of educating students would fall dramatically—to roughly the levels found in many other industrialized nations in the world. 
This is an argument that universities (public and private) are too fat.

On October 20th, Vedder wrote another article "Why Did 17 Million Students Go to College?", citing two apposite bits of information. Both indicate a "Gold Rush" effect for higher education: that the Middle between high school graduates and good jobs has been oversold by the supplier:
Over 317,000 waiters and waitresses have college degrees (over 8,000 of them have doctoral or professional degrees), along with over 80,000 bartenders, and over 18,000 parking lot attendants. All told, some 17,000,000 Americans with college degrees are doing jobs that the BLS says require less than the skill levels associated with a bachelor’s degree.
He goes on to cite a paper about the marginal return on higher education investment:
This week an extraordinarily interesting new study was posted on the Web site of America’s most prestigious economic-research organization, the National Bureau of Economic Research. Three highly regarded economists (one of whom has won the Nobel Prize in Economic Science) have produced “Estimating Marginal Returns to Education,” Working Paper 16474 of the NBER. After very sophisticated and elaborate analysis, the authors conclude “In general, marginal and average returns to college are not the same.” (p. 28)
Unfortunately, the paper itself is behind a paywall (the irony...). A low marginal return means that more investment is hard to justify: we're at the point of "diminishing returns."


In a nutshell, the argument is that higher education has become a Fat Middle, and is ripe for revolution. The form of that revolution isn't hard to fathom: low-cost, high-quality online programs. The way Vedder puts it is:
Higher education is on the brink of big change, like it or not.
Expect to see more analyses like the one recently from The Wall Street Journal, where "Putting a Price on Professors" by Stephanie Simon and Stephanie Banchero refers to:
A 265-page spreadsheet, released last month by the chancellor of the Texas A&M University system, amounted to a profit-and-loss statement for each faculty member, weighing annual salary against students taught, tuition generated, and research grants obtained.
This creates a certain kind of business-speak dialog:
"Every conversation we have with these institutions now revolves around productivity," says Jason Bearce, associate commissioner for higher education in Indiana. He tells administrators it's not enough to find efficiencies in their operations; they must seek "academic efficiency" as well, graduating more students more quickly and with more demonstrable skills. The National Governors Association echoes that mantra; it just formed a commission focused on improving productivity in higher education.
A part of this is just the current demonizing of education that seems to be in vogue, and the narrow viewpoint that colleges are like factories with a uniform input that should be able to "six-sigma" the end product. But there are valid points to be made about the cost versus return of post-secondary degrees too, particularly focusing on those costs that have little to do with the instructional mission.

It's ironic that the business viewpoint would be used to make this particular criticism of higher education, which (if they are correct) is only doing the same thing that, say, the drug companies do. But for some reason, education is seen as a public good and pharmaceuticals or access to medical care are not.

Opportunity

It's not all gloom and doom. You can find a Middle of your own and enlarge your fortune. Here's one idea. One of the most frustrating problems in dealing with corporations is getting problems solved through their customer service departments. Some are good (like our local Time-Warner office), and some aren't. Standardization and transparency would work wonders in this area. So all you have to do is create a "complaint engine" that becomes the standard interface between companies and their customers for resolving disputes, with public ratings showing response time, resolution rate, and individual comments. At first, companies will hate it. Then the more progressive ones will see the advantages and start asking for plugins so they can feed directly into their PeopleSoft (or whatever) systems. Then you can sit back and watch the Middle fatten.
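
For what it's worth, the software side of that idea is almost trivially small; the hard part is becoming the standard. Here's a back-of-the-envelope Python sketch of the data model and the public scorecard (every name in it is made up):

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Complaint:
        company: str
        text: str
        opened: datetime = field(default_factory=datetime.now)
        resolved: Optional[datetime] = None
        rating: Optional[int] = None   # customer's 1-5 rating of the resolution

    def company_scorecard(complaints):
        """The public stats: resolution rate and average days to resolve."""
        done = [c for c in complaints if c.resolved]
        rate = len(done) / len(complaints)
        days = sum((c.resolved - c.opened).days for c in done) / max(len(done), 1)
        return {"resolution_rate": rate, "avg_days_to_resolve": days}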

The Internet (while it lasts) creates enormous potential for Middle solutions. Even within a university, there are opportunities, like creating an institutional document repository.

Saturday, October 23, 2010

Do Students Add Value?

You hear "value-added" a lot these days; just google it. One article from RAND Corp tries to sum it up ("Evaluating Value-Added Models for Teacher Accountability"), but in reading the paper I was drawn to one of the references from 2002, with the lengthy title "What Large-Scale, Survey Research Tells Us About Teacher Effects On Student Achievement: Insights from the Prospects Study of Elementary Schools" by Brian Rowan, Richard Correnti, and Robert J. Miller at the Consortium for Policy Research in Education at the University of Pennsylvania's Graduate School of Education.

I don't propose to do a review of either of these papers here, but one quote struck me from the latter. It should first be noted that the authors strike a cautious note about the nature of such research in the introduction on page 2:
[O]ur position is that future efforts by survey researchers should: (a) clarify the basis for claims about “effect sizes”; (b) develop better measures of teachers’ knowledge, skill, and classroom activities; and (c) take care in making causal inferences from nonexperimental data. 
Here is that quote, from page six:

Two important findings have emerged from these analyses. One is that only a small percentage of variance in rates of achievement growth lies among students. In cross-classified random effects models that include all of the control variables listed in endnote 4, for example, about 27-28% of the reliable variance in reading growth lies among students (depending on the cohort), with about 13-19% of the reliable variance in mathematics growth lying among students. An important implication of these findings is that the “true score” differences among students in academic growth are quite small [...]
Let's think about that for a moment. The variation in "learning" (as numerically squashed into an average of a standardized test result, I think) is mostly not due to the variation among students. This struck me as absurd at first, but then I realized it's just a statement about how variable students are: to what degree the phenotypes sitting in our classrooms differ in their respective abilities to learn integral calculus (in my case).
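
For the curious, here's roughly how that kind of decomposition gets computed. The paper uses cross-classified random effects models; the sketch below uses a plain random-intercept model instead, with hypothetical file and column names (requires pandas and statsmodels):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("growth_scores.csv")   # one achievement-growth score per record
    result = smf.mixedlm("growth ~ 1", df, groups=df["student"]).fit()

    between = result.cov_re.iloc[0, 0]      # variance among students
    within = result.scale                   # residual variance
    print("share of variance among students:", between / (between + within))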

As a reality-check, I pulled up my FACS database and classified students by their first semester college GPA and looked at their average trajectory in writing, as assessed by the faculty. Here it is.




The top line is the 3.0+ students, then come the 2.0-2.99 students, and so on, over eight semesters, showing average writing scores. The improvement semester by semester does seem pretty constant, regardless of how "talented" the students are. Note that this particular graph isn't controlled for survivorship.

The numbers hide most of the real information, however. Is the improvement of the lowest group really comparable to that of the uppermost? There are different skills involved in teaching fast learners versus slow ones (a difference considerably related to how hard the students work, a non-cognitive). If one substitutes standardized test scores for actual learning, this problem can only get worse.

No, I still don't like averages. Here's the non-parametric chart for the 3.0+ group over eight semesters.


The blue portion of the bar is the proportion of these students who receive the highest rating, and so forth. The red ones are "remedial" ratings.

Update: the effect size of differences between students in my FACS scores is small, but it seems to be real, especially if you look at particular treatments like that of the writing lab in an earlier article. As a first approximation, perhaps student abilities don't matter as to how much they learn, but I strongly suspect that conclusion is vulnerable on a number of fronts. See my more recent article on Edupunk and the Matthew Effect.

Monday, October 18, 2010

General Education, by Wired

This is just a pointer to an interesting article on Wired: "Seven Essential Skills You Didn't Learn in College."


Those interested in a fresh look at general studies might want to take a look at the curriculum proposed there.

Saturday, October 16, 2010

Meeting Salad Lives

Over a year ago, I wrote a post called "Meeting Salad" about how to keep meeting notes organized. I was inspired to buy the domain meetingsalad.com, and intended to do something with it when I had time.

Well, I'll never have time, but as it turns out our IT Director bought a couple of iPads to try out, and presented me with one. Laptops have never been satisfactory as note-taking machines for me in meetings--they're too big and clicky, have to be plugged in, and so forth. But an iPad...that's another matter.

So I copied my current version of openIGOR and cut it down to just do the notes thing. I had already built a forms solution, which I wrote about recently, so it only took a couple of hours to get everything working. It isn't pretty:

From the menu, you can add or replace HTML forms, but mostly what you want to do is view them:

Here you can see there are forms for two different groups, both called Meeting Notes (the same form). The bottom one has five instances of form data. Here's what the form looks like with data in it (which you get to by clicking on the appropriate one).


I can see I need to add a title, to keep track of them all, but that's no big deal. There's a transaction history at the top. You can open and edit a form as often as you like, and it keeps track of all the changes, so you could roll it back or play them like a movie (although that's not built in at this point).
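
The roll-back trick is nothing more than an append-only log of edits that gets replayed up to whatever point you want. This isn't literally how openIGOR stores things, but a minimal sketch of the idea looks like this:

    import time

    class FormHistory:
        """Append-only record of form edits; supports rollback and replay."""
        def __init__(self):
            self.log = []                  # (timestamp, field, value) triples

        def edit(self, field, value):
            self.log.append((time.time(), field, value))

        def state(self, upto=None):
            """Replay the first `upto` edits into a snapshot of the form."""
            snapshot = {}
            for _, field, value in self.log[:upto]:
                snapshot[field] = value
            return snapshot

    notes = FormHistory()
    notes.edit("notes", "Discussed budget")
    notes.edit("notes", "Discussed budget; action item: revise forecast")
    print(notes.state(upto=1))   # rolled back to the first edit
    print(notes.state())         # current version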

This version's on the public internet at meetingsalad.com. If you want to play with it, email me. The actual version I use will be on our internal IGOR installation. It works pretty well on the iPad. The green's already getting old, though.

Thursday, October 14, 2010

Assessing Creativity Creatively

Assessment of learning outcomes and the development of expertise go hand in hand. A while back I saw a nice conceptualization of these working together. It's a Learning Outcome Network Interview Tool from David Dirlam at Hebrew Union College, which he shared with the ASSESS listserv. I've reformatted a description from the linked document:
The interviews will be seeking to discover several dimensions of four types of commitments that learners make on their way to becoming experts. The four commitments are:
  1. Beginning: to try
  2. Easy: to learn a little
  3. Practical: to become proficient enough to earn a living in the field
  4. Inspiring: to make a contribution or unique discovery within a field
Each commitment is realized within a different time frame. It takes no time to begin, a few months to comfortably use an easy strategy, a few years to get good at practical strategies and a decade to make regular contributions to a field.
David and co-author Scott Singeisen wrote an article "Collaboratively Crafting a Unique Architecture Education through MODEL Assessment" [1] that has background and much more detail. They have done some very interesting and creative work in framing a learner's trajectory from beginner to expert. The work is based on a large corpus of empirical data that they have analyzed to identify what they call a Succession Model. This is depicted graphically in the handout posted to the listserv, and I've reproduced it below for your convenience.


The ratings of proficiency are tied to a rubric, but it's not the kind of rubric one typically sees, because of the scope. The text on the graph above is fuzzy, but the highest level of achievement has a bubble that says that the actual frequency is "near zero." What's really attractive about this idea from a pedagogical perspective is that it puts the whole landscape of the educational endeavor into one frame. In the paper, a rubric for architecture is given, ranging from stereotypical misconceptions of beginners through transformative integration of components that is the mark of a brilliant architect.

In the conclusion of the paper, there is a powerful statement that can be used in conversations with faculty about assessment. It gives as outcomes of the research a comprehensive and original theory of development of the discipline, and makes the case that the participation of a group of faculty "dilutes the biases of individuals" to give a good collective result. My guess is that after all this, the faculty would feel pride of ownership in the end result.

Thinking back to my teaching and assessment in math, I think I have approached the idea of assessing creativity tangentially, but never as directly as David and Scott do. For example, to illustrate the role of creativity in mathematics, I typically show students the NOVA film "The Proof," which has an excerpt on YouTube:

But while such an illustration gives math students a glimpse into the life of a brilliant professional, that's as far as it goes. And yet, wouldn't it be a boon to students, even at the undergraduate level, to map out how deeply one can go into a chosen subject, with descriptions of what real expertise entails? I think so. In our current QEP (a SACS project for improving learning), two of the goals are a nice combination of noncognitives: 1. self-assessment and 2. planning for the future. This is a prescription for exactly such a long-range view of a discipline.

I also think the model Dirlam and Singeisen have created complements nicely the rubric strategy I've used before that ties outcomes to the degree program by assessing students relative to an ideal Fresh/Soph, Jr/Sr, or ready-to-graduate student. I won't rehash that, as I've written about it before.

Assessing Creativity is what I set out to write about in this post. The example above is very interesting, and I'm looking forward to learning more about it. Obviously a discipline like architecture requires both analytical and creative skills, but the "long view" of expertise works for both.

You probably know about the Flynn Effect, whereby IQ scores have steadily risen over the years. This has led to unintended consequences, but what I didn't know is that there is (or was) a similar effect in creativity. Check your skepticism at the door for a moment, and play along. The article is in Newsweek, and so as with any popular media story, you can expect an eschatological twist. To wit, the article is entitled "The Creativity Crisis," and dates from July 2010. It describes results of a standardized assessment of creativity (CQ):
Like intelligence tests, Torrance’s test—a 90-minute series of discrete tasks, administered by a psychologist—has been taken by millions worldwide in 50 languages. Yet there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.
This is a serious study with a lot of data behind it:
Kyung Hee Kim at the College of William & Mary discovered this in May, after analyzing almost 300,000 Torrance scores of children and adults. Kim found creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward. “It’s very clear, and the decrease is very significant,” Kim says. It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”
Here's the obligatory "end of the world is nigh" quote from Newsweek:
The potential consequences are sweeping. The necessity of human ingenuity is undisputed. A recent IBM poll of 1,500 CEOs identified creativity as the No. 1 “leadership competency” of the future.
The article goes on to argue that the neurobiology of creativity is at least partially understood, and that creativity can be learned. I think the first thing to do is start pointing creativity out where it occurs. I try to do this in math classes, because students have the most trouble with problems that require creative (as opposed to deterministic) solutions.

The article is interesting, provocative even, but I think it misses one thing. To be really productive as a creative person means being productive in a group. The dynamics of sitting alone and composing a guitar piece are very different from knowing how to present a creative idea to a group of colleagues, or to recognize and support creative solutions from others. It seems to me that productive group creativity is tightly linked to emotional intelligence. In fact, it's very odd that we educate students in silos: homework and testing are almost always expected to be done on one's own. Then students get turned loose in a lab or corporate office and have to work as a team. Here's a crazy idea: why not encourage students to assess and monitor their own intellectual and social abilities, and help them teach themselves how best to "plug in" to a working group? Stereotyping, there are the good group leaders (organized, respectful but firm, goal-oriented), the idea people (smart, creative, random, delicate), the analytical whizzes (love of technical detail, logical, great deductive thinkers, visual, proud), and so on.

At the very least, it would be interesting to survey perceptions about these skills and attitudes, as well as the perceived overall effectiveness of the group, to see how creative (and analytical, etc.) people affect the whole. My guess is that most individuals don't get to perform at their best because the group dynamics aren't conducive.

Closing note: My second Calculus II exam is take-home, and I encourage the students to work together on the problems. They just have to tell me who they worked with. This technique has worked well for me before with small upper-level classes. It encourages all kinds of good behavior, which (for me) trumps the minor drawback of not knowing who knows what, exactly. I'll find that out on the final exam. Anyway, students tend to pair themselves off by ability level, so there's not nearly as much copy/paste as you might think.


[1] Dirlam, D. K. and Singeisen, S. R. (2009). Collaboratively Crafting a Unique Architecture Education through MODEL Assessment. In P. Crisman and M. Gillem (Eds.), The Value of Design (pp. 445-455). Washington, DC: ACSA Publishing.

Tuesday, October 12, 2010

Assessing Writing

Over the last week I've had the pleasure of revisiting the assessment plans we put in place at my prior institution, as it prepares to submit its SACS fifth-year report. I pitched in by doing some number crunching. The topic of the Quality Enhancement Plan (a SACS requirement for a program to improve teaching and learning) is writing effectiveness. This is a popular topic for QEPs, and I tried to make a list of such institutions a while back. A common problem is how to assess success.

In this case, the program spanned three initiatives with a range of assessment activities, including the NSSE, internal surveys, and qualitative assessments. For assessing writing, there are multiple types of assessments, but I'm just going to focus on the "big picture" assessment here: the Faculty Assessment of Core Skills (FACS) piece. I've written about the general method on this blog many times, and you can find an overview in the manuscript Assessing the Elephant, although the most recent results aren't in there yet.

The FACS surveys faculty opinions about individual students' writing abilities, provided they have the opportunity to observe them (not necessarily teach writing or even count it toward a grade). The scale for reporting is tied to the idealized college career (pre-college work, fresh/soph level work, jr/sr level work, work at the level we expect of our grads), represented here on a 0-3 point scale. Getting the data is trivially easy and basically free. We started in fall 2003, and by now there are over 25,000 individual observations recorded on over 3,000 students (about a fourth of these on writing).


The graph above shows three cohorts, controlled for survivorship, each over four years. The error bars are two standard errors. One trend is that the first two years have plateaus, after which growth looks linear. In order to look at the quality of the data, I also graphed the average minimum and maximum ratings, combining the three cohorts.

This shows a consistent half-point average difference across eight semesters of attendance. That's not bad, and reliability statistics show that raters match exactly about half the time, far more often than chance would predict. At my current institution, I've been getting even better numbers, for some reason.
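
That "more than chance" claim is easy to check: compare the observed exact-match rate to the match rate you'd expect from the raters' marginal distributions alone (the chance term from Cohen's kappa). A sketch, with hypothetical file and column names:

    import pandas as pd

    df = pd.read_csv("facs_pairs.csv")   # one row per doubly-rated student
    observed = (df["rater1"] == df["rater2"]).mean()

    p1 = df["rater1"].value_counts(normalize=True)
    p2 = df["rater2"].value_counts(normalize=True)
    chance = sum(p1.get(r, 0) * p2.get(r, 0) for r in range(4))   # 0-3 scale

    print(f"observed agreement {observed:.2f} vs chance {chance:.2f}")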

Although these graphs are nice, they don't actually show the effect of the QEP. That is, how do we know this growth wasn't happening anyway? This is the problem that will bedevil most QEP assessment efforts. In this case, one of the programs was to increase use and quality of the college's writing center.


This graph isn't mine; I took it with permission from the draft report. It shows the dramatic growth of writing center use. (Student body size is around 1,100, for comparison.) The use of the writing center also gives us a kind of control group for studying the increase in writing skill. It's not perfect, because conventional wisdom is that the students who use the writing center tend to be those who are told they need to, meaning their skills are perceived to be lower than their peers' in general. We can compare the users versus non-users using FACS:


This shows that writing center users did indeed start with about equal or slightly lower assessed skill, but overtook and exceeded their peers over four years. It gets even more interesting if we disaggregate by entering (high school) GPA.

This is the majority of students, and it shows that, in fact, for these "B" and better students, use of the writing center corresponds to their being seen as better writers within a year, and that this effect persists. On the other hand, for the less-prepared students (per the HSGPA predictor), the story is different.


Here, according to FACS scores, the conventional wisdom is true: these students really do start off with a lower perceived skill level, and it takes a year to reach near-parity with their peers. But by the fourth year, they have surpassed them. Note the numbers on the scale: even with the jump at the end, these students are rated far below their HSGPA>3 peers, writing center or not.

The slopes of the lines show something we've noticed before: a so-called Matthew Effect, whereby the most able students learn the fastest. Compare the blue lines (non-writing center users) in the two graphs above. The higher HSGPA students increased by .81, whereas the lower HSGPA group increased by only .32. Use of the writing center more than doubled the latter group's increase, to .84.
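
The arithmetic behind those slopes is just the difference between first- and last-semester group means. Assuming a tidy table of FACS writing ratings (all file and column names hypothetical), the whole comparison is a few lines of pandas:

    import pandas as pd

    df = pd.read_csv("facs_writing.csv")   # one rating per student per semester
    df["band"] = df["hsgpa"].ge(3.0).map({True: "HSGPA 3+", False: "HSGPA < 3"})

    first = df[df["semester"] == 1].groupby(["band", "writing_center"])["rating"].mean()
    last = df[df["semester"] == 8].groupby(["band", "writing_center"])["rating"].mean()
    print(last - first)                     # average gain for each group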

I'm generally skeptical of assigning causes and effects without a lot more information, but these results are very suggestive, and certainly do nothing to contradict a conclusion that the writing center use is pushing the better students to higher performance, while enabling the less-prepared students to steadily and dramatically increase their skill.

Friday, October 01, 2010

Course Evaluations and Learning Outcomes

I've posted recently about some of our course evaluation statistics, and the effect of going from paper to electronic. A while back I also showed a summary of our Faculty Assessment of Core Skills learning assessment. I'm trying to put the two together by re-engineering the course evaluation to focus squarely on learning. The old version was a standardized one with fifty-five items, only five of which addressed learning at all, and these not very well. Here's my first draft of a new version, with comments afterward. The scale is indicated after each item. The exact wording is still in development.

  1. What was the quality of instruction in this course as it contributed to your learning? (try to set aside your feelings about the course content)
    --(ineffective to very effective)
  2. How much effort did you put into this course?
    --(minimal to maximum)
  3. How much did you know about the course content before taking the course?
    --(nothing to a lot)
  4. How much do you know about the course content now?
    --(nothing to a lot)
  5. How much did your skills in analytical/deductive thinking (knowing facts, following rules and formulas, learning standard methods) increase in this course?
    --(none to a lot)
  6. How much did your skills in creative/inductive thinking (trial-and-error, development of ideas, taking chances) increase in this course?
    --(none to a lot)
  7. How much did your ability to speak effectively increase in this course?
    --(none to a lot)
  8. How much did your ability to write effectively increase in this course?
    --(none to a lot)
  9. How much did this course help you understand yourself?
    --(none to a lot)
  10. How much did this course spark your interest in the content?
    --(none to a lot)
  11. Was the course enjoyable?
    --(not at all to very much)
  12. How much course content (the subject area, like chemistry or psychology) do you think you learned in this course?
    --(none to a lot)
  13. What overall rating would you give this course as a learning experience?
    --(poor to excellent)

Comments.

This is a radical departure from what we do now. The first question is what we use now on the evaluation form, and is the only one used for evaluation. Question 13 is a validity check on it because the answers should be very much the same.

The questions all focus on learning, except numbers 2 and 11. In the old evaluation, almost all of the questions were about the process of teaching, which makes a lot of assumptions about the value of those processes, and doesn’t transfer well to styles like online learning or hybrid courses.

The learning questions are split between the content area and general liberal-arts skills. This gives us a natural complement to the Faculty Assessment of Core Skills (FACS), which we launched very successfully last spring. Taken together, the teacher view and the student view will give us excellent insight into gen ed outcomes across the whole curriculum.

Question 2 is included because it matches the one on the FACS. The noncognitive “effort” is very important to performance. Here’s the graph from the spring FACS, with GPA in red and credits earned in blue. More effort means better grades and a better chance of advancing.

Questions 3 and 4 get at how much content was learned by asking in terms of before/after. This is checked for reliability with question 12.

Questions 5-9 are about general learning outcomes. No course would be expected to get max scores on all of these--it’s an environmental scan to help us understand what kind of learning students feel is happening where. It complements the NSSE, the QEP, and the FACS, and will be a gold mine of information.

Question 9 is from the temple of Apollo at Delphi: “know thyself.”

Question 11 will raise some hackles, but it’s there as a control. We know from research that students who rate courses as enjoyable also rate everything else higher. This allows us to investigate that phenomenon locally. If we get to the point where we can administer the survey electronically, we can do these studies ourselves by comparing to course grade. With an anonymous paper survey, we’ll have less ability to do that, but we can still do intra-response correlations. We could be more direct and just ask “how happy are you right now?” but that would turn off some students.
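
Those intra-response checks are straightforward once responses are in a table. Assuming columns q1 through q13 (hypothetical names), something like this would cover the validity, reliability, and enjoyment-halo checks mentioned above:

    import pandas as pd

    responses = pd.read_csv("course_evals.csv")
    corr = responses[[f"q{i}" for i in range(1, 14)]].corr()

    print(corr.loc["q1", "q13"])           # validity: should be high
    print(corr.loc["q12", ["q3", "q4"]])   # reliability of the before/after items
    print(corr["q11"].sort_values())       # how much "enjoyable" colors the rest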

There are two free-response questions we'll carry over from the old survey. This will let students write on topics they care about most.

The survey is short for two reasons. First, we’ll get better reliability because students won’t get survey fatigue. Second, this leaves room for other surveys customized by a program, department, or college, to be administered in parallel. For example, the Lit folks could ask detailed content-related questions if they wanted, or conversely ask all about processes (office hours, syllabus, etc.).