Sunday, February 24, 2013

Interfaces and Education

In my last article, I used a cartoon model of intelligence to examine different aspects of whatever that thing is we call critical thinking. The usefulness of the schematic goes well beyond that exercise, however. Specifically, there's the fascinating idea of a "unit of usefulness" often called an interface. It's worthwhile examining how it works in the context of education.

An interface allows a trip to be made all the way around the diagram, which I'll reproduce below.

An elevator is a good example. We start with a motivation:
  1. We want to be on a different floor of the building.
  2. We observe the elevator controls, which are carefully constructed so as to be unambiguous, and clearly map reality to a specialized language.
  3. Our internal model (i.e. ontology) of the physical arrangements allows for a simple calculation to map what we want into the language of the elevator's button scheme, so that we can properly predict what's about to happen, and
  4. Act accordingly by pushing a button.
  5. Finally, we observe that our motivation is satisfied by arriving on the right floor, designated with language that lets us know that has happened.
This process is the main function of intelligence (predicting what actions will satisfy motivations), and we do this all the time with and without interfaces. The difference is that an interface makes it transparently easy, by means of a carefully designed language and apparatus for physical change that are congruent. 
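To make the cycle concrete, here is a minimal sketch in Python of the elevator trip around the diagram. The class and method names are invented for illustration--no real elevator exposes this API--but the steps (motivate, observe, predict, act, confirm) are the same ones listed above.

```python
# A toy trip around the diagram. Names are illustrative, not a real API.

class ElevatorInterface:
    def __init__(self, current_floor):
        self.current_floor = current_floor  # external physical state

    def observe(self):
        # Reality -> language: the floor indicator encodes where we are.
        return self.current_floor

    def press(self, button):
        # Language -> reality: the button reliably changes physical state.
        self.current_floor = button

def ride(elevator, desired_floor):
    # Our ontology maps the motivation straight into the button language,
    # predicting that pressing the right button will satisfy it.
    if elevator.observe() != desired_floor:
        elevator.press(desired_floor)
    # Confirm the motivation is satisfied by observing again.
    return elevator.observe() == desired_floor

elevator = ElevatorInterface(current_floor=1)
assert ride(elevator, desired_floor=5)  # we wanted floor 5; we got it
```

The entire intelligence problem has been compressed into one well-designed mapping, which is exactly what makes interfaces so useful.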

Technology Produces Interfaces at an astonishing rate these days. The miraculous devices we carry around are obvious examples, but there's another influence that may not be as apparent: the evolution of societies into machine-like systems creates interfaces too. The Department of Motor Vehicles is a sometimes-reviled interface with government bureaucracy. Its unpleasantness isn't merely from the long wait, but from being treated like a pile of documents rather than a human being. But it's ubiquitous. The person working the check-out line at the grocery store is an interface too, and we can choose to limit our interaction to that mode of operation entirely, rather than acknowledging that this is a human being. The opening sequence of Shaun of the Dead portrays this zombie-like element of our lives, even before the inevitable infections begin. It is typified by the automatic behavior that characterizes a problem already solved.

This is the result of any interface too--no matter how complex an airliner is, to the passenger it's primarily a way to get from one place to another. Everything is standardized. It may be stressful, but that's not because you don't know what the plan is. Much of modern life consists of developing facility with standard interfaces--like driving a car or using a phone--and the pace of technology creates stress because we can't keep up with all of the new ones. Perhaps that's why Apple products are so popular--they make the interfaces easy to use.

So much of our internal ontology--the way we understand the world--is now tied up in technological and social interfaces that would be very foreign to our forebears. 

Interfaces in Education abound. The whole bureaucracy is designed to partition reality into neat categories of "nominal reality." Unfortunately, humans are not very good subjects for this endeavor, and the mismatch leads to a lot of mischief. Take, for example, the "inoculation" folk theory of education that crops up continually in the academy. Students take a class on composition, and then are assumed to be able to write. Any subsequent problems with writing point fingers back to that class, which did not fulfill its role. The assumption is that Comp 101 is a reliable factory-like interface that takes raw material and produces good writers. This is a poor reflection of reality, because writing is a tremendously complex endeavor that is more akin to kindling a flame than filling a vessel, to employ the ancient metaphor.

We parse learning into boxes called courses and majors and learning outcomes, and institutions certify these with their stamps of approval--theoretically providing an interface for consumers of the product. Of late, the advertised quality of that product (and the cost of producing it) has come into question, perhaps most infamously in Academically Adrift.

We could spend a lot of time dissecting why the interface model fails. Many of the articles I've written in this blog concern how assessments (particularly standardized tests) can create nominal realities that apparently create interfaces, but fail to reflect reality. The result is optimizing only the appearance of satisfying motivations, which is the central idea behind self-limiting intelligence.

Rather than rehashing the problems with the Reality-Language (i.e. measurement) detail of educational assessment, however, I'd like to comment on the expectations that students have. Students who come from a test-heavy public education in particular seem (anecdotally) to expect a college course to be a clearly-delineated interface, similar to the check-out counter. For example, they don't seem to take to open-ended problems naturally. An interface-centric attitude results in the following expectation: "show me exactly what I need to do to get an A," as if education were an algorithm. It's very easy to teach math courses that way (at least until creativity is required in later courses), but I think it does students a disservice. For example, they can learn how to take derivatives of functions without having any real intuition about what that means. This is well documented in an ongoing research project based on the Calculus Concept Inventory, which seeks to assess conceptual understanding. A quote from one paper (source):
the first term we did TEAL on term, the over all course evaluation was terrible, the lowest of any course I have been associated with at MIT, but I can plausibly argue that that term the students learned twice as much as under the lecture system, using assessment based on Hake normalized gains.
TEAL stands for Technology-Enabled Active Learning, which de-emphasizes lectures in favor of more active approaches. The comment implies that it worked, but students didn't like it. I presume the reason is that emphasis on active learning of concepts is open-ended and less interface-like than what others have called the 'rent model': if I sit in class long enough, you pass me.
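For readers who don't know the metric in that quote: the Hake normalized gain g compares what a class actually gained on a concept inventory to what it could have gained, using pre- and post-test percentages:

g = (post − pre) / (100 − pre)

A class averaging 40% before instruction and 70% after has g = (70 − 40) / (100 − 40) = 0.5: it captured half of the possible improvement, regardless of where it started. "Learned twice as much" in the quote is a claim about this ratio, not about raw scores.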

There's a reason why it took our species a couple million years to come up with calculus, and it's not because it's complex. It's NOT complex--anyone can learn the power rule in a few minutes. It's the conceptual subtlety of thought and the precise language that expresses it that is the real value of the subject. And it's not just calculus, of course.
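To underline the point, here is essentially all of the mechanical content of the power rule:

d/dx (x^n) = n·x^(n−1), so the derivative of x³ is 3x².

Learning to apply that pattern takes minutes; it's an interface. The conceptual subtlety--why the slope of a curve is a meaningful, computable thing at all--is the part that took so long.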

The parts of our environment that we can control with interfaces are the easy parts--that's what interfaces do. Higher education (especially the liberal arts) should not just be a catalog of new interfaces to learn, but should cultivate the general ability to wrestle with problems that don't have interfaces. Questions of politics and ethics, and the expression of creativity, cannot be reduced to a pre-packaged I/O device (despite the strident voices of ideologues who argue for just such a thing).

If higher education is going to fulfill this role, it has to do as much work in unmaking minds as building them up, because many of our students are well-trained to expect (and demand) A-B-C-degree. The automated satisfaction of motivations by itself is a wonderful thing, but it can also make us dull with expectations that everything is push-button easy.

(The image is from Staples, where you can buy one of these. Their advertising gimmick is a sign of the times.) 


Monday, February 18, 2013

Teaching Critical Thinking

I just came across a 2007 article by Daniel T. Willingham, "Critical Thinking: Why Is It So Hard to Teach?" Critical thinking is very commonly found in lists of learning outcomes for general education, or even at the institution level. In practice, it's very difficult to even define, let alone teach or assess. The article is a nice survey of the problem.

In the approach I've taken in the past (with the FACS assessment), I simplified 'critical thinking' into two types of reasoning that are easy to identify: deductive and inductive. Interestingly, this distinction shows up in the article too, where the author describes the difference (in his mind) between critical and non-critical thinking:
For example, solving a complex but familiar physics problem by applying a multi-step algorithm isn’t critical thinking because you are really drawing on memory to solve the problem. But devising a new algorithm is critical thinking.
Applying a multi-step algorithm is deductive "follow-the-rules" thinking. He's excluding that from critical thinking per se. To my mind this is splitting hairs: one cannot find a clever chess move unless one knows the rules. We would probably agree that deductive thinking is an absolute prerequisite to critical thinking, and this point is made throughout the article, where it's included in "domain knowledge."

In the quote above, the creation of a new algorithm exemplifies critical thinking--this is precisely inductive thinking, a kind of inference.

Now I don't really believe that even the combination of deductive and inductive reasoning covers all of what people call 'critical thinking,' because it's too amorphous. It's interesting to consider how one might create a curriculum that focuses on 'critical' rather than 'thinking.' It could be a course on all the ways that people are commonly fooled, either by themselves or others. It would be easy enough to come up with a reading list.

Another alternative is to focus on the 'thinking' part first. This seems like a very worthy goal, and in retrospect it's striking that we don't seem to have a model of intelligence that we apply to teaching and learning. We have domain-specific tricks and rules, conventions and received wisdom, but we generally don't try to fit all those into a common framework, which we might call "general intelligence" as easily as "critical thinking." Usually it's the other way around--how do I embed some critical thinking object into my calculus class? This latter method doesn't work very well because the assessment results (despite our desires) don't transfer easily from one subject to the next. This is the main point of the article linked at the top--domain-specific knowledge is very important to whatever "critical thinking" may be.

A Model for Thinking

I don't presume to have discovered the way thinking works, but it's reasonable to try to organize a framework for the purposes of approaching 'critical thinking' as an educational goal. The following one comes from a series of articles I wrote for the Institute for Ethics and Emerging Technologies (first, second, third), which all began with this article. The theme is how to address threats to the survival of intelligent systems, and it's informed by artificial intelligence research.

A schematic of the model is shown below.


We might think of this as a cycle of awareness, comprising perception, prediction, motivation, and action. If these correspond to the whims of external reality, then we can reasonably be said to function intelligently.

The part we usually think of as intelligence is the top left box, but it has no usefulness on its own. It's a general-purpose predictor that I'll refer to as an informational ontology. It works with language exclusively, just as a computer's CPU does, or the neurons in our brains do (the "language" of transmitted nerve impulses). Languages have internal organization by some convention (syntax), and associations with the real world (semantics). The latter cannot exist solely as a cognitive element--it has to be hooked up to an input/output system. These are represented by the lower left and right blue boxes. The left one converts reality into language (usually very approximately), and the right one attempts to affect external reality by taking some action described in language.

All of these parts are goal-oriented, as driven by some preset motivation. All of this perfectly models the typical view of institutional effectiveness, by the way, except that the role of the ontology is minimized--which is why IE looks easy until you try to actually do it.
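For the programmatically inclined, here is a self-contained toy instance of the cycle in Python--a thermostat, about the simplest thing that goes all the way around the diagram. Every name in it is invented for illustration; it's a sketch of the schematic, not a claim about how minds work.

```python
# A toy trip around the diagram: sense (reality -> language), predict
# (language-only computation in the ontology), act (language -> reality),
# all driven by a motivation. Names are illustrative, not a real framework.

class Thermostat:
    def __init__(self, room_temp, target):
        self.room_temp = room_temp  # external "reality"
        self.target = target        # the motivation

    def sense(self):
        # Measurement/description: encode reality as a number.
        return self.room_temp

    def predict(self, temp, action):
        # The ontology: a crude internal model of how actions change reality.
        return temp + {"heat": 1, "off": 0}[action]

    def act(self, action):
        # Action: actually change the room (here, one degree per step).
        self.room_temp = self.predict(self.room_temp, action)

    def run(self):
        while self.sense() != self.target:  # motivation not yet satisfied
            # Choose the action whose predicted outcome best matches the goal.
            best = min(["heat", "off"],
                       key=lambda a: abs(self.predict(self.sense(), a) - self.target))
            if best == "off":
                break  # no available action improves things; give up
            self.act(best)
        return self.sense()

assert Thermostat(room_temp=18, target=21).run() == 21
```

Notice that the ontology (the predict method) is trivial here, which is exactly why thermostats are easy and institutional effectiveness is not.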

Each of these components is a useful point of analysis for teaching and learning. Going around the figure from bottom left:

Measurement/Description When we encode physical reality into language, we do so selectively, depending on the bandwidth and motivation, and our ability to use the result in our ontology. At the beach, we could spend the entire day counting grains of sand, so as to get a better idea of how many there are, but we generally don't, because we don't care to that level of precision. We do care that there's sand (the point of going to the beach), but there are limits to how accurately we want to know.

Some language is precise (as in the sciences), and other sorts not (everyday speech, usually). What makes it usefully precise is not the expression of the language itself (e.g. I drank 13.439594859 oz of coffee this morning), but how reliably that information can be used to make predictions that we care about. This involves the whole cycle of awareness.

Example 1: According to Wikipedia, the mass of a proton is 1.672621777×10^−27 kg. This is a very precise bit of language that means something to physicists who work with protons. That is, they have an ontology within which to use this information in ways they care about. Most of us lack this understanding, and so come away with merely "protons weigh a very tiny amount."

Example 2: Your friend says to you "Whatever you do, don't ride in the car with Stanislav driving--he's a maniac!" Assuming you know the person in question, this might be information that you perceive as important enough to act on. The summary and implication in your friend's declaration constitute the translation from physical reality into language in a way that is instantly usable in the predictive apparatus of the ontology. Assuming you care about life and limb, you may feel disinclined to carpool with Stanislav. On the other hand, if the speaker is someone whom you think exaggerates (this is part of your ontology), then you may discount this observation as not useful information.

The point of these examples is that description is closely tied with the other elements of awareness. This is why our ways of forming information through perception are very approximate. They're good enough for us to get what we want, but no better. (This is called Interface Theory.)

Here are some questions for our nascent critical thinkers:
  1. Where did the information come from? 
  2. Can it be reliably reproduced?
  3. What self-motivations are involved?
  4. What motivations does the information's source have?
  5. What is the ontology that the information is intended to be used in?
  6. How does using the information affect physical reality (as perceived by subsequent observations)?
Notice that these questions are also very applicable to any IE loop.

Question five is a very rich one, because it asks us to compare what the provider of the information believes versus what we believe. Every one of us has a unique ontology, comprising our uses of language, our beliefs, and our domain-specific knowledge. If I say that "your horoscope predicts bad luck for you tomorrow," then you are being invited to adopt my ontology as your own. You essentially have to if you want to use the information provided. This represents a dilemma that we face constantly as social animals--which bits of ontology do we internalize as our own, and which do we reject? Which brings us to the 'critical' part of 'critical thinking.'

It's interesting that the discussion around critical thinking as an academic object focuses on the cognitive at the expense of the non-cognitive. But in fact, it's purely a question of motivation. I will believe in astrology if I want to, or I will not believe in it because I don't want to. The question is much more complicated than that, of course, because every part of the ontology is linked to every other part. I can't just take my whole system of beliefs and plop astrology down in the middle and then hook up all the pipes so it works again. For me personally, it would require significant rewiring of what I believe about cause and effect, so I'd have to subtract (stop believing some things) part of the ontology. But this, in turn, is only because I like my ontology to be logical. There's no a priori reason why we can't believe two incompatible ideas, other than we may prefer not to. In fact, there are inevitably countless contradictions in what we believe, owing to the fact that we have a jumble of motivations hacked together and presented to us by our evolutionary history.

Intelligence

The usefulness of intelligence lies in being able to predict the future (with or without our active involvement) in order to satisfy motivations. The way we maintain these informational ontologies is a dark mystery. We seem to be able to absorb facts and implications reasonably easily (Moravec's Paradox notwithstanding); we can't deduce nearly as quickly as a computer can, but we manage well enough. It's the inductive/creative process that's the real mystery, and there is a lot of theoretical work on that, trying to reproduce in machines what humans can do. Within this block are several rich topics to teach and assess:
  1. Domain-specific knowledge. This is what a lot of course content is about: facts and deductive rules and conventions of various disciplines, ways of thinking about particular subjects, so that we can predict specific kinds of events. This connects to epistemology when one adds doubt as an element of knowledge, which then leads to...
  2. Inference. How do we get from the specific to the general? At what point do we believe something? This links to philosophy, the scientific method, math and logic, computer science, neuroscience, and so on. Another connection is the role of creativity or random exploration in the process of discovering patterns. We might sum up the situation as "assumptions: you can't live with them, and you can't live without them." Because inference is a fancy word for guessing, it's particularly susceptible to influence from motivation. Superstition, for example, is an application of inference (if I break a mirror, then I will have bad luck), and one's bias toward or away from this sort of belief comes from a motivational pay-off (e.g. a good feeling that comes from understanding and hence controlling the world).
  3. Meta-cognition. This is the business of improving our ontologies by weeding out things we don't like, or by making things work better by pruning or introducing better methods of (for example) inference. This is what Daniel Kahneman's book Thinking, Fast and Slow is about. That book alone could be a semester-length course. Any educational treatment of critical thinking is about meta-cognition.
  4. Nominal versus real. Because we live in complex information-laden societies, we deal not just with physical reality but also with system reality. For more on these, refer to my IEET articles. One example will suffice: a system pronouncement of "guilt" in a trial may or may not correspond to events in physical reality. At the point the verdict is announced, it becomes a system reality (what I call a nominal reality). The ontology of the system becomes a big part of our own personal version, and one could spend a long time sorting out what's real and what's nominal. For more on that topic, see this paper I wrote for a lit conference.
Motivation
Humans and the systems we build are very selective about what we want to know, and what we do with that knowledge. Understanding our own motivations and those of others (theory of mind), and the ways these influence the cycle of perceive-predict-act, is essential in order to make accurate predictions. That is, intelligence has to take motivation into consideration. This invites a conversation about game theory, for example. The interpretation of critical thinking as the kind of thing that investigative reporters do, for example, must take motivations of sources into consideration as a matter of course.

In economics, motivation looks like a utility function to be optimized. Part of what makes humans so interesting is that we are laden with a hodge-podge of motivations courtesy of our genes and culture, and they are often contradictory (we can be afraid of a car crash, yet fall asleep at the wheel). The search for an 'ultimate' motivation has occupied our race for a long time, with no end in sight.
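To make the utility-function picture concrete, here's a toy sketch in Python of two contradictory motivations rolled into one function to be optimized. The scenario and weights are invented for illustration; the point is only that the "best" action is a trade-off between motives, not the full satisfaction of either.

```python
# Two conflicting motivations (get there soon vs. don't crash) combined
# into a single utility to maximize. All numbers are made up.

def utility(hours_driven, urgency=1.0, fatigue_risk=0.1):
    progress = urgency * hours_driven          # motivation: arrive sooner
    danger = fatigue_risk * hours_driven ** 2  # motivation: stay alive
    return progress - danger

best = max(range(0, 11), key=utility)  # consider driving 0..10 hours today
print(best, utility(best))             # -> 5 2.5: stop halfway and rest
```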
Here's a critical thinking problem: If motivations are like utility functions, they must be acted on in the context of some particular ontology, which goes out of date as we get smarter. How then are we to update motivations? A specific example would be physical pain--it's an old hardwired system that helped our ancestors survive, but it's a crude instrument, and leads to a lot of senseless suffering. The invention of pain-killers gives us a crude hack to cut the signal, but they have their own drawbacks. Wouldn't it be better to re-engineer the whole system? But we have to be motivated to do that. Now apply that principle generally. Do you see the problem?
Taking Action
This isn't usually thought of in connection with intelligence or critical thinking, but it's integral to the whole project. Taking action is generally not part of the approach we take in formal education, where we implicitly assume that lectures and tests suffice to increase student abilities. Come to think of it, we don't even have a word for "active use of intelligence." Maybe 'street smarts' comes close, because of its association with the 'real world' rather than the academic one, but that's an unproductive distinction. I've heard military people call it the X-factor, which I take to mean a seamless connection between perception, prediction, and action (all tied to some underlying motivation, of course).

But of course the point of all this intelligence apparatus is to allow us to act for some purpose. There are great illustrations of this in Michael Lewis's book The Big Short, which show the struggle between hope and fear (motivations) in the analysis of the looming mortgage disaster, and the actions that resulted.

I've argued before (in "The End of Preparation," which is becoming a book) that technological and societal changes allow us to introduce meaningful action as pedagogy. It's the actual proof that someone has learned to think critically--if they act on it.

Being Critical
If some framework like the one described above can be used to examine intelligence in a curriculum, where exactly does the modifier come in? What's critical about critical thinking? Perhaps the simplest interpretation is that critique allows us to distinguish between two important cases (which may vary, but correspond to motivations). For example, in a jury trial, the question is whether or not to convict, based on the perceptions and analysis of the proceedings. It's these sorts of dichotomies--the aggravating fact that we can't take both paths in the wood--that make intelligence necessary in the first place.

This general task is big business these days, in the form of machine learning, where distinguishing between a movie you will like and one you won't is called a classification problem. Netflix paid a million dollars to the winner of a contest to find a better classifier for assigning movie ratings.
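As a concrete (and deliberately tiny) illustration of a classification problem, here's a nearest-neighbor classifier in Python. The movie features and data are invented; real systems like the Netflix winner are enormously more sophisticated, but the shape of the task is the same: learn a boundary between two cases from examples.

```python
# A toy classification problem in the spirit of the Netflix contest:
# predict "like" vs. "dislike" from two invented features using a
# k-nearest-neighbors vote. Data and features are made up.

def knn_predict(train, point, k=3):
    """Classify `point` by majority vote of its k nearest training examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda ex: dist(ex[0], point))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > k / 2 else 0  # 1 = "will like", 0 = "won't"

# (action scenes, romance scenes) -> did this viewer like it?
train = [((9, 1), 1), ((8, 2), 1), ((7, 3), 1),
         ((1, 9), 0), ((2, 8), 0), ((3, 7), 0)]

print(knn_predict(train, (8, 1)))  # -> 1: predicted "like"
print(knn_predict(train, (2, 9)))  # -> 0: predicted "dislike"
```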

It also makes a nice framework for teaching, and it's a common technique to set up an A vs. B problem and ask students to defend a position (and there's a whole library of resources set up to provide support for this kind of thing). In the abstract, these undoubtedly have some value in honing research and debate skills, but it seems to me that they would be more valuable when connected to real actions that a student might take. Is it worth my while to go to Washington to protest against X? Or go door-to-door to raise money for Y? Or invest my efforts in raising awareness about Z with my project? Maybe we need a new name for this: active critical thinking, perhaps.

So as educators, we are then left with the meta-question: is this worth doing?

Next: "Interfaces and Education" continues this line of thought.

Tuesday, February 05, 2013

Networking 2.0 for Assessment Professionals

That assessment has grown as a profession is obvious from the size and number of conferences devoted to the topic, and there is a thriving email list at ASSESS-L where practitioners and theoreticians can hold asynchronous public conversations. There are, however, limitations to this approach, and the purpose of this post is to speculate on more modern professional social networking that might benefit the profession.

I just turned 50, so my first response to any new idea is "Why is this important? I don't have much time left, you know."  So let's start with...

Why?

  1. To find out what other people think about something related to assessment.
  2. To connect with others who have similar assessment interests.
  3. To disseminate information, such as job listings, conference announcements, or research findings.
  4. To help establish a portfolio of professional activity.
One of the things on my personal wish list is a repository for learning outcomes plans and reports that could be seen and commented on by others. I think this transparency would reduce the variability in (e.g.) accreditation reviews of same.

This leads to...

How?

Below I'll describe some of the models that I have come across. There are surely others.

Email Lists 
This is currently done. There's a searchable archive, but it's not tagged with meta-data to make browsing and searching easier. My purely subjective ratings by motivations 1-4 listed above are:
  1. Email is great for finding out what others think, but the relative merit of any one response to a question is not easy to ascertain from the responses; there's a silent majority. Conversations are only threaded by subject line.
  2. Connecting with others is easy enough, but searching their post history to look at the subjects is not.
  3. Disseminating information is a strength of the email list, until it becomes spam-like.
  4. Participation on email lists is probably not something you can put on your resume.
Reddit-Style Discussion Board
Reddit.com is a segmented combination of news aggregator and discussion board, with threaded comments and a voting system to allow a consensus to emerge. It's easy to create a 'sub-reddit' on the site itself, or one can use the open-source platform to start from scratch.

Comments related to the motivations:

1. One can write "self-posts" that are like public text messages of reasonable length, to invite others' opinions, OR post a hyperlink to something interesting on the internet. It's very flexible as a general-purpose way to share ideas and create threaded conversations. Voting is a low threshold to involvement, and so there's more participation.

2. One can easily see someone else's post history, but these are not tagged with meta-data. There is a 'friend' feature to follow people of interest, and private messaging within the system is possible.

3. Reddit is 'pull' rather than 'push' communication, meaning you have to actually go look at the site to see posts, as compared to emails, which arrive in your in-box whether you want them to or not. For the purpose at hand, this is probably preferred by some and not by others. Many assessment professionals are probably too busy to go surf the internet during the day. There are RSS feeds, however.

4. Reddit has a reputation system built in, and active (and popular) users accumulate 'karma'. But the site is not set up to be a meeting place for professionals, and it would have to be hosted off-site and re-themed to change the perception of it as a place for teens to post memes.

The StackOverflow Model

Stackoverflow.com, mathoverflow.net, and many other similar sites now exist to serve as a meeting place for professionals of different stripes. Comments:

1. The 'overflow' model excels at the Q&A give and take; this is its strong suit. Users post questions with meta-data to categorize them. Others can comment on the question or post a solution. All of these (question posts, comments, and solutions) can be voted up or down, and the original poster (or OP) can select a solution as the best, at which point it gets a check mark as 'best answer.'

2. User profiles are quite detailed, with graphs of activity and reputation scores. These are easily associated with meta-data tags, so it's nearly ideal for finding others with similar interests.

3. Each site has a culture and stated rules about what should and should not be posted. For example, 'soft questions' like "how much caffeine do you consume while writing code?" are generally frowned on, as job ads would be at most sites. But this is all adjustable. Like Reddit (and email for that matter), it requires some moderation, but users providing up or down votes provide most of the filtering.

4. The reputation system built into the overflow model is plausibly usable as an indicator of professional activity. For example, see the page for Joel David Hamkins at MathOverflow.net--a site for working mathematicians.

Other possibilities include using Facebook or LinkedIn or Google+ or Academia.edu or ResearchGate.net as a base platform. These all have the vulnerability of being beholden to a corporate interest, however.

More Connections

In addition to posting learning outcomes ideas/plans/reports/findings for public review, a well-designed professional networking site could seamlessly overlap with conference presentations, so that individual sessions could have a backchannel on the site as well. Twitter can accomplish this through hash tags, but these are limited and not combined into an easily findable place.

There are also possibilities for crowd-sourcing problems using collaboration sites, but this goes beyond the present scope.