Friday, April 29, 2011

Wordle Again

I mentioned Wordle.net a while back. It's a way to create 'word clouds' from text. It occurred to me this afternoon that it would be neat to put in the catalog course descriptions for each major. I tried a couple of them, shown below.


Small College Initiative

I attended the SACS Commission on Colleges Small College Initiative this week. You can find the slides on the SACS web site here. Below are some of my notes and observations.

Mike Johnson talked about CR 2.5 (Institutional Effectiveness) and pointed out the difference between assessment and evaluation. In my interpretation of his remarks, the former is gathering data and the latter is using it to draw conclusions for action. In my experience, this gap is where many IE cycles break down. Signs of this are "Actions for Improvement" that:
  • Are missing altogether
  • Are too general or vague to be put into practice
  • Report that everything is fine, and no improvements are necessary
  • Suggest only improvements to the assessments
I have a lot more to say about this (there's a surprise), and am preparing a talk for the Assessment Institute and a paper for NILOA on related topics.

Another important point is that IE cannot be outsourced or even in-sourced to a director. The whole point is that it is a collaborative exercise in striving to achieve goals. I think results are proportional to participation. In a similar vein, Mike noted that computer software can help organize reporting, but doesn't magically solve the problem of generating quality IE loops. Garbage in = garbage out.

A wonderful suggestion was to use the creation of "board books" as a way to encapsulate IE reports in a natural way that's already being done. Mike's larger point here is that we already have many real IE processes--all institutions that manage to survive use data one way or another--and there's no need to create an artificial one for reporting. I saw this during a review, where the institution had wonderful processes in place, but didn't include that documentation in the compliance certification, and instead reduced all that rich information into a four-column grid that "looks like it's supposed to." Of course, one problem here (in my opinion) is that there doesn't seem to be a standardized way to look at IE processes. If we were serious about it, we'd do inter-rater reliability studies and create tight rubrics with lots of examples in a library, showing what's acceptable and what's not. I think this would go a long way toward reducing the number of out-of-compliance findings. Way back when--over a decade ago--I heard a SACS VP complaining that even back then, IE had been around a long time and colleges should know what to do by now. That's true as far as it goes, but it should be acknowledged that: 1) it's very hard to satisfy committees, and 2) it's not entirely clear what is acceptable and what's not. Part of the problem is that while the theory of IE loops is easy to understand, practice is far more difficult. Sort of like Socialism.

There was a clarification that it is acceptable for institutions to sample programs for reporting 3.3.1.1 in lieu of reporting outcomes for every single one. There is supposed to be a policy statement about this on the web site, but I couldn't find it after several minutes of searching the list for 3.3.1, effectiveness, outcomes, sampling, etc. If someone finds it, please let me know. The main thing is that the sample should be representative and not look like it was cherry-picked (e.g. reporting only programs that have discipline-based accreditation).

It was noted that CS 2.10 implicitly has learning outcomes reporting requirements, making it a pseudo-IE standard. I included this in my recommendations for 'fixes' to the Principles in my letter to SACS, posted here for comment. Not many institutions seem to be flunking it, though, unlike 3.3.1.1 (see below).

The fifth year report was highlighted in a break-out session. You can find additional slides on this topic on the website here. Out of 39 institutions, 28 were cited on 3.3.1.1, and alarmingly, the citation rate on the QEP Impact Report is 33%. Although this says that the review process is no piece of cake (which is good--it should be meaningful), it points to a problem. In fact, the rationale for the Small College Initiative is to help address this problem, which is particularly acute for small schools. As a side note, over lunch I talked to an IR director who speculated that there is a bias against citing large schools, particularly ones with high rankings. It would be really interesting, in conjunction with the inter-rater reliability study I fantasized about above, to have blind reviews of 3.3.1.1. Given the growing emphasis on student learning outcomes (including the new credit-hour rules), a whole separate system for learning outcomes may need to be developed. One of the challenges on the horizon, in my view, is the contradictory status of grades. On the one hand, they are the basic unit of merit for courses, with a vast bureaucracy behind them; on the other, grades are not seen as 'real' assessments. This needs to be fixed. I don't know if Western Governors University's model is the answer, but what we have now makes no sense, and it is impossible to explain to the public.

Reasons given for flunking a QEP included:
  • Bad planning, which leads to a bad report. One kind of bad plan is one that's too broad. 
  • Failure to execute it, e.g. if a new administration comes in and lacks enthusiasm for the old project
  • Not talking about goals and outcomes in the report. Hard to believe.
  • Not describing the implementation (just narrating the creation, perhaps)
  • Not collecting or using data
  • Bad writing. Ironic, since so many QEPs are about writing.
 Tips for writing QEP impact reports:
  • Follow the directions given in the SACS policy
  • Address all the elements
  • Keep narrative to 10 pages. (You can apparently link out to other documents, which I hadn't heard before. I thought everything had to be in 10 pages.) [Edit: see the update below]
  • Use data, but include analysis--don't just put in graphs with no explanation.
Networking over lunch, I gleaned a couple of nifty ideas. At one institution, faculty contracts include a 'gotcha' clause, which stipulates that if assessment reports are not done by date X, then the prof has to stick around until date Y to finish them. This provides an incentive to get them done. Also, the reports are broken down into phases across the academic year, so that not everything is done at once. Smart.

Update: Mike Johnson posted a note to the list server saying that the links in the 10 page (max) Impact Report can only be internal to the report itself, which does not allow 'extra room'. In his words:
Links within a disk or flash drive are okay as long as the documents that are part of the link are included in the ten page maximum length. So please do not use hyperlinks to documents as a means to lengthen the report.

Thursday, April 28, 2011

Motivation and Intelligence

Angela Duckworth et al have a new article in the Proceedings of the National Academy of Sciences (PNAS) entitled "Role of test motivation in intelligence testing." The abstract reads, in part:
Intelligence tests are widely assumed to measure maximal intellectual performance, and predictive associations between intelligence quotient (IQ) scores and later-life outcomes are typically interpreted as unbiased estimates of the effect of intellectual ability on academic, professional, and social life outcomes. The current investigation critically examines these assumptions and finds evidence against both. [...] After adjusting for the influence of test motivation, however, the predictive validity of intelligence for life outcomes was significantly diminished, particularly for nonacademic outcomes.
The press on the paper includes Science Daily's "Motivation Plays a Critical Role in Determining IQ Test Scores," and Discover's blog post "IQ scores reflect motivation as well as 'intelligence'."

We include 'Effort' in our faculty-assessed end-of-semester FACS survey, and have found a link between grades and this effort rating. Of course, it could just be that professors who think students work hard also tend to give them higher grades, so over the summer we will look at multi-year correlations to eliminate that confounding factor.
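One way that might work, sketched below, is to correlate the effort ratings a student receives in one term with the grades assigned in a later term, so the same instructor isn't supplying both numbers. This is only a sketch of a possible approach, not necessarily the analysis we'll end up doing, and the file and column names are hypothetical.

```python
# A sketch of a cross-term correlation to sidestep the same-rater confound.
# The CSV layout (student_id, term, effort, gpa) is hypothetical.
import pandas as pd

ratings = pd.read_csv("facs_ratings.csv")

fall_effort = ratings[ratings.term == "2009FA"].groupby("student_id").effort.mean()
spring_gpa = ratings[ratings.term == "2010SP"].groupby("student_id").gpa.mean()

# Keep only students who appear in both terms, then correlate.
paired = pd.concat([fall_effort, spring_gpa], axis=1, join="inner")
print(paired.corr())   # effort rated in fall vs. GPA earned the following spring
```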

The graphs below (courtesy of Google charts) show GPA in red, and hours completed in blue, above the distribution bars for rated student effort across all classes. The heights of the bars give the percent of the distribution that received that rating. The left one is our first survey, Spring 2009. The one on the right is from Fall 2010. The sample size has increased as we've gotten better participation.


The drop in credits earned is due to more first year students being included in the sample. The year-by-year story is similar, except that the overall averages have an interesting shape as ecological samples from first year to fourth:


The first year students in the graph are the first class to fall under the new (much higher) admissions standards. The number is the average effort rating on a scale of zero (minimum effort) to three (great effort). This is for N=1403, Fall 2010. Note that there is a survivorship bias, so that we'd expect the averages to grow as the time-in-school increases. I don't yet have true longitudinal data.

Inter-rater reliability was measured by finding the frequency of exact matches for two instructors rating the same student. There were 385 instances of this, with a match rate of 50.7%. It's not hard to find the rate of pure-chance matches (dot-product the distribution with itself), but I haven't done that. In the past, the chance of matching randomly has been around 35%. See this source for more on that.
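For the record, here is a minimal sketch of that calculation. If two raters draw independently from the same rating distribution, the probability of an exact match is just the dot product of the distribution with itself. The proportions below are placeholders, not our actual FACS distribution; Cohen's kappa is included as one standard way to compare observed agreement to chance agreement.

```python
# Chance-agreement sketch: dot product of the rating distribution with itself.
# The distribution values are hypothetical placeholders.
dist = [0.10, 0.20, 0.40, 0.30]           # share of ratings 0..3 (made up)
observed_match = 0.507                    # reported exact-match rate

chance_match = sum(p * p for p in dist)   # probability two independent raters match by chance
kappa = (observed_match - chance_match) / (1 - chance_match)  # Cohen's kappa

print(f"chance agreement: {chance_match:.3f}, kappa: {kappa:.3f}")
```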

Wednesday, April 13, 2011

Getting to Expression

 Barbara Fister's "Why the 'Research Paper' Isn't Working" has some interesting observations about the teaching and assessment of composition, and I made the connection to the deductive/inductive divide I've been going on about lately. Let me reframe the latter as the "language/expression" divide as a preface:
  • A language is a body of knowledge that usually comprises vocabulary, methods, reference points of common knowledge, and a web of connections between concepts. Understanding a language is always a prerequisite to being able to produce it intelligently.
  • Expression is the illumination of new ideas and new connections, the creation of new parts of the language to contribute to the existing corpus. It is realized with different styles in varying degrees of fluency, and allows the display of insights or brilliance.
This is the analytical/deductive vs. creative/inductive divide that I've blogged about before, for example in the previous post. We see this division everywhere. Learning how to use paint versus expressing yourself in the medium. 

A "life-long" learner must either become used to learning new languages all the time or else not plan to live long. My 'stack' of languages to learn currently includes R programming, German, and photography. I think of it in over-generalized terms as a progression from confusion to understanding to expression. I'll come back to that idea.

The "photography language" is one I dabbled in when it meant smelly chemicals and a long time between when you shot a photo and when you got to look at it. Nowadays digital photography obviates many of the skills that one needed, and it does something else very important.  It's not any easier to do digital photography--you have to master software instead of stop bath--but it's so much quicker to get from the snap to the view that you can learn from trial and error in real time. This is a huge advantage. Conservatively, the gap between taking a shot on film and holding a print in your hands is at minimum several hours (leaving aside Poloroids or other quickie formats). With digital it's a matter of seconds. So learning the language through sheer trial and error has been accelerated by a factor of, say 2 hrs/2 seconds = 3600. 

Photo: David Eubanks, some rights reserved
Different learners approach learning a language in different ways. Some people like to read all the manuals first, and others start pushing buttons. My wife (laughing at the end of a long work day, above) learned Italian by working all the exercises in two textbooks and then spending a month in Italy. I struggle along with German because I don't have the patience to memorize vocabulary. I try to bridge the confusion/understanding divide by reading novels translated into German (it makes the language much simpler), looking up words that come up frequently enough. Her way is much more efficient than mine.

So, in this epistemological vivisection of learning, the challenge for faculty is to teach and assess the crossing of two metaphorical bridges:

Land o' confusion -> Understanding -> Expression

In "Complexity as Pedagogy" I showed how it's possible to take a very narrow road straight to Expression. That is, one can encapsulate a small part of the language and use it to get right to the fun part. Because, let's face it, creating is fun! And if anything distinguishes humans from the rest of the biological kingdom, it's our blabbing--we like to talk.
An art professor once told me how to learn to draw. He said, just draw your hand over and over again in different positions. After about 500 times, you should be pretty good at it. I don't know if he was joking or not, but this is an example of simplifying the language to the point where you can quickly become expressive.
The practice of assessment should be very different across this divide. Testing language fluency can take many forms, but it's always about correctness, speed, conformity to convention, and so on. One is not supposed to be creative on a spelling test. Otherwise I would have gotten better grades in grade school. Similarly, we're not supposed to invent better names for state capitals for that test, or help the Germans organize the genders of their nouns better.

Assessing language seems easy because of this necessary emphasis on mastering form. Vocabulary tests, concept inventories, and the like are easily administered, and even testing understanding of subtle connections through the use of the language itself is straightforward. 
Example: In teaching logic, it's simple to write down a logical argument and ask students to justify each step with an axiom or theorem, or even let them find errors with the proof. The only way students can be successful is if they have a good understanding of the language.
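To make that concrete, here is a tiny instance of the kind of exercise I mean (my own made-up example, not taken from any particular text): a short derivation with the justifications left for the student to supply.

```latex
\begin{align*}
&1.\ P \rightarrow Q && \text{premise}\\
&2.\ Q \rightarrow R && \text{premise}\\
&3.\ P               && \text{premise}\\
&4.\ Q               && \text{justification?}\quad(\text{modus ponens on 1 and 3})\\
&5.\ R               && \text{justification?}\quad(\text{modus ponens on 2 and 4})
\end{align*}
```

A student who can fill in the justifications, or spot a bogus one, has to actually understand the language of the logic.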
This ease of assessment is a bane, and a great peril to learning. Let me finally get to the points I liked about the article I cited way back at the beginning of this piece. Starting with the idea of forcing students to master arcane rules of correct citations, the author notes more broadly that 
I have long agreed with Richard Larson who wrote way back in 1982 that the research paper as taught in college is an artificial genre, one that works at cross-purposes to actually developing respect for evidence-based reasoning, a measured appreciation for negotiating ideas that are in conflict, or original thought.
An artificial genre that is at cross-purposes with original thought. That's pretty damning. But it's these very mechanics of any language that are easily defined, easy to get agreement on, and easy to assess. It's a quick slide down the slope to standardization of a form that becomes inimical to the actual intent of the enterprise! This happens all over the place. Whole subjects taught in school exist only because of such inertia, like Geometry in high school--there's no reason kids should be learning plane geometry with rulers and protractors in this day and age, but it's been so deeply standardized that it's become part of the culture. But I digress.

Barbara goes on to illustrate the point with a fascinating example of how students react to the low-complexity standard we've set institutionally:
I hate it when students who have hit on a novel and interesting way of looking at an issue tell me they have to change their topic because they can’t find sources that say exactly what they plan to say. I try to persuade them otherwise, but they believe that original ideas are not allowed in “research.” How messed up is that? The other and, sadly, more frequent reference desk wince-making moment involves a student needing help finding sources for a paper he’s already written. Most commonly, students pull together a bunch of sources, many of which they barely understand on a topic they know little about, and do their best to mash the contents up into the required number of pages.
Does this sound like a road map from Confusion to Expression? It doesn't to me--it sounds like a Skinner Box: mash the button to get the food pellet.

It shouldn't be hard to fix this problem. That's the good news. The recipe is simple:
  1. Focus the language to a small useful subset. In terms of composition, it would mean picking a topic that's narrow enough to actually learn something about quickly.
  2. Demonstrate and assess--with feedback!--fluency in this new language. Have conversations about confusion levels: what you know you know, and what you know you don't know. There are lots of creative ways to organize this with mind maps and such, and it can also be fodder for oral presentations or other engagement activities. Develop fluency in real time.
  3. Emphasize expression and creativity over form as far as it can be pushed. This isn't always possible, e.g. in logic--you have to be 100% correct--which is why the focus is so important. If you have to absolutely master some topic in order to be creative, make it a small one. 
Note that I am not advocating "free form creativity" devoid of any content or ultimate value. This might be fun for the students, but I don't see how it accomplishes any useful learning objectives. But there's a lot of road between many of our current practices and goofing off in the name of creativity. 

Barbara's ending paragraph is apposite:
But if you want first year college students to understand what sources are for and why they matter, if you want them to develop curiosity and respect for evidence, your best bet is to start by tossing that generic research paper. As for those who will complain that students should have learned how to paraphrase and cite sources in their first semester – we’ve tried to do that for decades, and it hasn’t worked yet. Isn’t it time to try something else?
Yes. 

Thursday, April 07, 2011

Creativity as the Pinnacle of Learning

In a recent post on the topic of memory, I noted that this skill was at the base of the revised Bloom's Taxonomy (or taxidermy?). This morning I woke up thinking about the other end: creativity. I've mused about the role of creativity before, and how to teach it (here and here).

It's easy to make the unwarranted leap from creativity to aesthetics, because we associate art justifiably with a creative process. But I prefer to think of creativity as the production of new knowledge in any context. Let me give a pedantic example:
All men are mortal.
Socrates is a man.
------------------
Socrates is mortal.
The conclusion follows deductively from the two statements above it, so it is not the production of new knowledge. This is the hammer that fell on Bertrand Russell in his quest for ultimate truth by means of logic. Logical, rigorous deductive thinking is an essential skill, but it's not creative. In contrast, Aristotle's encoding of logic into language was creative, but I have a more interesting example.

The other day I saw an interesting problem posted on the math subreddit. The diagram below shows a laser beam coming from the right and striking a perfect mirror (the thick black line at the bottom) at angle b, bouncing off, and then striking another mirror placed at angle a with respect to the first one:
The gray line is imaginary here--it extends the line of the bottom mirror to illustrate angle b, but the light hits the very end of the mirror itself. The light will continue to bounce off the mirrors in some way. Where does the beam end up going? I will put the solution at the end, in case you want to think about it first.

The point is that I suspected there should be an elegant way to think about the problem, where the solution--all solutions--would be obvious. So I cast about, looking for it. This is rather like trying to find the light switch in a dark room, as Andrew Wiles put it (see my other posts on creativity for the link). I gave up before finding it and settled for an inelegant solution, which was correct, but not creative enough to be called elegant. It was a plodding, "add up the cumulative effect" solution, where you sort of crush a problem with the weight of logical facts until it leaks out its secrets like a garlic clove exudes oil.
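For concreteness, here is a minimal numerical sketch of that brute-force approach: treat each mirror as a half-line from a shared vertex (a simplifying assumption; the actual problem has finite mirrors) and bounce the beam with the standard vector-reflection formula until it escapes. The angle names a and b follow the problem statement; everything else is my own scaffolding.

```python
# Brute-force sketch: bounce a beam between two half-line mirrors meeting at the
# origin, counting reflections until the beam no longer hits either mirror.
import math

def cross(v, w):
    return v[0] * w[1] - v[1] * w[0]

def reflect(d, u):
    """Reflect direction d across the line spanned by unit vector u."""
    dot = d[0] * u[0] + d[1] * u[1]
    return (2 * dot * u[0] - d[0], 2 * dot * u[1] - d[1])

def count_bounces(a_deg, b_deg, max_bounces=1000):
    a, b = math.radians(a_deg), math.radians(b_deg)
    mirrors = [(1.0, 0.0), (math.cos(a), math.sin(a))]  # directions of the two half-lines
    p = (1.0, 0.0)                                      # first strike on the bottom mirror
    d = (-math.cos(b), math.sin(b))                     # direction just after that bounce
    bounces = 1                                         # count the first bounce
    while bounces < max_bounces:
        best = None
        for u in mirrors:
            denom = cross(u, d)
            if abs(denom) < 1e-12:
                continue                                # beam parallel to this mirror
            t = cross(p, u) / denom                     # distance along the beam
            s = cross(p, d) / denom                     # position along the mirror half-line
            if t > 1e-9 and s > 1e-9 and (best is None or t < best[0]):
                best = (t, u)
        if best is None:
            return bounces                              # beam escapes the wedge
        t, u = best
        p = (p[0] + t * d[0], p[1] + t * d[1])
        d = reflect(d, u)
        bounces += 1
    return bounces

print(count_bounces(45, 30))   # e.g. mirrors at 45 degrees, first strike at 30 degrees
```

It answers "how many bounces" numerically, but it sheds no light on why--which is exactly the difference between plodding and elegance.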

My lack of blazing imagination does, however, illustrate that the creative process itself deserves its own "taxonomy." In other words, there are qualitative differences to creative enterprise. Let's take a look.

Creativity as producing new information can start with sheer randomness. Flipping a coin and writing down the results is creative. This sounds too trivial to be counted, but it's not. In fact, it's the single most important spark of novelty in history. I have two examples. First, physicists have wondered how galaxies formed. If the big bang started from a single point, for example, why wasn't everything thereafter perfectly uniform? Where did all the novelty come from? One proposal is that tiny differences in the primordial universe were seeded by quantum events, which we know to be deeply random. So the largest structures in the universe may have started with infinitesimal randomness. Cool, right?

The second example is the evolution of life, which explores, via an ecology, a vast space of possible designs for living things. This exploration proceeds by random mutation of genes, and other ways in which genetic material may get mixed around (like parasitism), or a bacterium's lascivious lifestyle with regard to DNA. This is not the deeply puzzling randomness of quantum mechanics, but the sort that emerges from complex systems that is sometimes called chaos.

Randomness is a great entry point into creative thinking. The casting about for novelty is a skill in itself. It requires courage to be wrong, a good idea of how to recognize your intellectual quarry when you've found it, and determination--because it takes a long time for randomness to hit the right target. Louis Pasteur's "Chance favors the prepared mind" has two parts: chance and preparation. The latter is formed in the laborious mastering of some discipline or subject.

The whole idea of serendipity is based on these two elements, and our culture has benefited handsomely from it: rubber, penicillin, radioactivity, and many more are on the list. Wikipedia has many examples here.

In the last post, I showed an example of a game designed for high schoolers that is aimed at creative thinking in a mathematical context. The essential skills are being able to understand the problem and do basic math (easy), and to cast about for creative solutions (fun, I hope). These solutions will start with guessing.

Guessing is a step up from randomness. Humans aren't very good at true randomness--we have to depend on the world around us for that, like moldy bread crumbs falling accidentally into a Petri dish. I suspect that good guessing is an art unto itself, and that it can be taught and practiced. There's an MIT course on the art of making educated guesses with regard to estimation (how many gas stations are in the US, do you think?). Here's the course description from the OpenCourseWare site (it's free!).
This course teaches the art of guessing results and solving problems without doing a proof or an exact calculation. Techniques include extreme-cases reasoning, dimensional analysis, successive approximation, discretization, generalization, and pictorial analysis. Applications include mental calculation, solid geometry, musical intervals, logarithms, integration, infinite series, solitaire, and differential equations.
This is targeted at students with a good math foundation (everybody at MIT, I guess), but I find it exciting because it shows how to teach a whole course on guessing in the context of a discipline. There's no reason that this couldn't be done in other subjects just as well. Guess-and-check is a fundamental human skill that reinforces our knowledge of the world. Think about kids and the funny way they conjugate verbs at first because they are guessing based on simple rules (e.g. "I eated my peas, daddy"). The guess is close enough to communicate, and as an additional reward, they glean information about new complexities of language, if someone is kind enough to point out the right way of saying it.
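As an aside, the gas-station question yields to exactly this kind of guessing. Here is a back-of-the-envelope sketch; every number in it is a rough guess of mine, not a looked-up figure.

```python
# Fermi-style estimate of the number of US gas stations. All inputs are guesses.
us_population = 3.1e8            # roughly 310 million people (circa 2011)
vehicles_per_person = 0.8        # guess: most adults have a car
fills_per_vehicle_per_week = 1   # guess: about one fill-up a week
fills_per_station_per_day = 250  # guess: a station handles a few hundred cars a day

vehicles = us_population * vehicles_per_person
fills_per_day = vehicles * fills_per_vehicle_per_week / 7
stations = fills_per_day / fills_per_station_per_day
print(f"~{stations:,.0f} gas stations")   # lands on the order of 100,000
```

The point isn't the answer; it's that each guess can be checked and refined, which is the trial-and-error loop again.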

Problem-Solving might be the next step in the creative chain of being. This is a natural continuation of randomness and guessing, which results in the production of new knowledge in some applied context. This works in art as well as math, I think. It's the evolution from random doodles to purposeful artistic creation. Problem solving weds the analytical/deductive process and discipline-specific skills and knowledge to the trial-and-error process that I've described in prior posts on creativity. This is the nuts and bolts of creative production.

Inspiration may or may not be teachable. If we help students to be good seekers of randomness, good guessers, and good problem-solvers, can we help them elevate themselves to inspired thought? I don't know, but I guess that we can provide a fertile environment for this, and foster it in individuals who might otherwise have not reached their potential. I don't really believe that we can take every math student and produce another Gauss or Euler, but we can ameliorate one of the great hidden human tragedies--the many, many inspired thinkers who never got the intellectual cultivation they needed to allow their talents to flower.

This is all first-draft thinking. An interested group of discipline experts could turn these rough ideas into something applicable to a curriculum or institution, including ways to assess creativity at each step along the way. Disciplines can learn from each other and share approaches, opening up the possibility of interdisciplinary learning. I have often thought that math students could benefit from watching art students critically review each other's work.



Here's the solution to the problem. I have redrawn it, but I saw it first here. The original problem was in terms of a tiny billiard ball, but I changed it to a laser beam. The key insight is that reflections preserve angles, so that instead of imagining the beam bouncing off at the same angle (incidence = reflection), imagine it passing through the mirror as if it were a pane of glass. Then add another pane of glass where the return bounce would have occurred, so that copies of the mirrors look like spokes on a wheel separated by angle a. This illustrates clearly that the beam will swiftly exit the mirrors and go on its way in most arrangements. The whole process is laid bare. I've illustrated it with a=45 degrees below. This is an inspired and elegant solution, unlike my workable but plodding brute-force approach (not shown).
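Under the same simplified setup as the simulation sketch earlier (half-line mirrors meeting at a vertex, first strike at angle b off the bottom mirror), the unfolding turns counting reflections into counting spoke crossings: the straightened beam leaves the bottom mirror at angle 180° - b and crosses the spokes at a, 2a, 3a, ... until none are left ahead of it. Treat this as a back-of-the-envelope version, not a claim about every possible arrangement of the original diagram.

```latex
\[
N \;=\; 1 \;+\; \#\{\, k \ge 1 : k\,a < 180^\circ - b \,\}
\qquad\text{e.g. } a = 45^\circ,\ b = 30^\circ \ \Rightarrow\ N = 1 + 3 = 4.
\]
```

That example value agrees with what the brute-force simulation above returns for the same angles.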