Tuesday, July 29, 2014

OK Trends

If you're not familiar with the OKCupid blog, check it out here. Christian Rudder slices and dices data from the dating site to try to reveal human nature. I find it great fun to follow along with his train of thought, which he presents engagingly and illustrates well with graphics. The articles could serve students as examples of what 'critical thinking' might be.

The report linked above is particularly interesting because it addresses ethical issues. If you look at the comments, you'll see a range of reactions from "that's cool" to "how dare you!", including a couple by putative psychology researchers who mention IRB processes. This comes on the heels of Facebook's research on manipulating attitudes, and the resulting media fiasco.

This is a looming problem for us in higher education, too. As an example, imagine a software application that tracks students on campus by using the wi-fi network's access points and connections to cellphones. This could be used to identify student behaviors that are predictive of academic performance and retention (e.g. class attendance, social activity). Whereas manual roll-taking in class is an accepted method of monitoring student behavior, cellphone tracking crosses a line into creepy territory. The only way to proceed with such a project, in my opinion, would be transparently, which could be done with an opt-in program. Students would be given a description and the opportunity to sign up. In return, they would receive both the detailed information being gathered about them and summary reports on the project. I have been looking for examples of colleges taking this approach. If you know of one, please let me know!

See also: "okcupid is the new facebook? more on the politics of algorithmic manipulation" at scatterplot.com.

Sunday, July 27, 2014

Survey Prospector

Survey Prospector is a web-based interface for quickly exploring survey-type data. It is intended to support a "want-know-do" cycle of intelligent action, which will become clear in this tutorial. The software is new and still under development, so things will change.

Here's a quick tour. You can play along by downloading these files:
  • CIRP1999sample.csv, a random sample of 401 rows taken from the 38,844 in HERI's 1999 CIRP survey data set
  • CIRPvars.csv, an index to the items and responses
Navigate to the Survey Prospector application, hosted by shinyapps.io. Use the file upload interfaces to load the data and index, then click in the dialog box identified with the arrow in the image below.

We WANT to make the world a better place. For this exercise, I'm going to assume we want to understand the difference between males and females in college in 1999, and see what we can learn from this survey that might inform policy. The dialog box that appears should show you a 1 and a 2. Click the '1', which is the code for males. A button "Create Predictors" will appear. Click it. You should see a screen full of interesting stuff appear. For the moment, take a look at the variable that appears first, and its distribution.

We KNOW about the world by finding patterns in data. In this case, the distribution shows that this sample contains more than twice as many females as males (from inspection). 
Click on the drop-down selector under "Select Predictor". 

The variable names are sorted in order of how well they predict SEX = 1, with a maximum predictive power of one and a minimum of .5 (guessing). We see that SEX_TFS, which is what the student entered during their freshman year, perfectly predicts SEX, which is what they entered on the senior survey. This gives us some confidence in the internal validity of the data. The SEX variable predicts itself perfectly too, and "predicted" is a created variable that mimics the one we are interested in. The first interesting variables are the next ones, which have predictive power in the .60s and .70s, as measured by their ROC area under the curve (AUC).
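If you're curious what that ranking is doing under the hood, here is a minimal sketch in R--my own illustration, not Survey Prospector's actual code--that scores a single item against a binary target using the Mann-Whitney form of AUC and then ranks every item in the downloaded sample:

```r
# Minimal sketch (not the app's code): rank survey items by how well each one
# alone predicts a binary target, using ROC area under the curve (AUC).
auc_for_item <- function(item, target) {
  ok <- !is.na(item) & !is.na(target)
  item <- item[ok]; target <- target[ok]
  rate  <- tapply(target, item, mean)        # in-class rate for each response code
  score <- rate[as.character(item)]          # order responses by that rate
  r  <- rank(score)                          # Mann-Whitney formulation of AUC
  n1 <- sum(target == 1); n0 <- sum(target == 0)
  (sum(r[target == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

survey <- read.csv("CIRP1999sample.csv")
target <- as.integer(survey$SEX == 1)        # SEX = 1 (male) is the class of interest
aucs   <- sapply(survey[setdiff(names(survey), "SEX")], auc_for_item, target = target)
head(sort(aucs, decreasing = TRUE), 10)      # best predictors first; .5 means guessing
```

Because each item's response codes are scored by their observed in-class rates, the kind of 2,3,1,4 ordering discussed below falls out of the same calculation.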

Click on VIEW14_TFS, and the screen will refresh to tell us about that item. The top half is shown here.

In this data set, the strongest non-trivial predictor of SEX is an attitudinal item assessed in the freshman year (that's what the _TFS on the variable name means). The order of the items that make for the best predictor is shown on these graphical displays: 2,3,1,4. The most popular choices are 3 and 4. The numbers above the bar graph show the fraction of in-class (meaning male, or SEX = 1) for that item response. So 79% of those who responded with VIEWS14_TFS = 3 were male. The ROC chart tells us something about the way the predictor works. Ideally, the graph goes straight up (more true positives without incurring false positives). We can see that happens quite nicely with the '3' response. After that, the '1' gives us a little more, but not much change. So the difference between '3' and '4' is where most of the predictive power comes from. Looking at the bottom of the display, we get more information.

The left graph gives binomial confidence intervals on the fractions given in the bar graph above it. If we trace the pink area down from the 2, we can see that there isn't great certainty in its 1.0 value, but it's still probably significantly different from the 1 and 4 rates. The band for '3' is smaller, but '1' and '4' may not really be distinguishable. The main message here is that '2' and '3' probably really are greater than '1' and '4' with regard to these rates. If this is true, it means that men are responding to the question in a less extreme way than women. In fact, if we look at the whole survey, with 21,656 responses to this item, we see the same pattern, with AUC = .64, and rates for 2,3,1,4 being .85, .74, .44, .28. You can see the report on the large data set here, generated from my Perl and R scripts.
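Those interval widths are just binomial confidence intervals on each response's fraction-male. A quick sketch in base R, with invented counts standing in for the real ones:

```r
# Hedged illustration: exact 95% binomial CIs on the fraction male for each
# response code. The (males, total) counts here are made up for the example.
ci_for_response <- function(males, total) {
  ci <- binom.test(males, total)$conf.int
  c(rate = males / total, lower = ci[1], upper = ci[2])
}
counts <- list(`2` = c(8, 8), `3` = c(52, 66), `1` = c(30, 68), `4` = c(25, 90))
round(t(sapply(counts, function(x) ci_for_response(x[1], x[2]))), 2)
```

The row for '2' makes the point: a rate of 1.0 based on a handful of respondents still carries a wide interval.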

The table at right above gives the statistics for using the predictor VIEWS14_TFS in (2,3,1) => SEX = 1. There are 29% males in this data set. The predictor correctly identifies males 70% of the time and females 81% of the time.
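Those figures are the base rate, sensitivity, and specificity of the classification rule, and you can check them against the downloaded sample yourself. A sketch, assuming the rule stated above:

```r
# Sketch of the table's statistics for the rule "VIEWS14_TFS in (2,3,1) => SEX = 1".
survey <- read.csv("CIRP1999sample.csv")
d <- na.omit(survey[, c("VIEWS14_TFS", "SEX")])
predicted <- d$VIEWS14_TFS %in% c(2, 3, 1)
actual    <- d$SEX == 1
tab <- table(actual, predicted)
c(base_rate   = mean(actual),                                 # fraction male (~.29)
  sensitivity = tab["TRUE",  "TRUE"]  / sum(tab["TRUE",  ]),  # males predicted correctly
  specificity = tab["FALSE", "FALSE"] / sum(tab["FALSE", ]))  # females predicted correctly
```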

Click on the next couple of variables: VIEWS12_TFS and VIEWS21_TFS. These are similar to the one we explored in detail, and not surprising. The next best predictor is RATE15, a self-assessment of physical health (as a senior) with most frequent responses being 3,4,5--the healthy end of the scale. However, males report better physical health overall. 

Try a different question by selecting POLIVIEW from the 'select target' drop-down, and then select '1' and '2' to create a target classification. These are the most conservative students, as reported when they were seniors. What predicts that? A quick scroll through the sorted variables shows that the most important factor is the student's reported political stance as a freshman: POLIVIEW_TFS, with a whopping .83 AUC.

DOING something to improve the world is the point of all this investigation. Assuming we really care about the indicators that we identify, then understanding should lead to action.

Technical details. If you want to try this on your own data, please don't upload anything with student identifiers or other sensitive information. Also, for the moment, the data has to be in a pretty rigid format:
  • Data files need a header row with variables that don't have spaces or other funny characters, and start with a letter. More specifically, R has to recognize them as variable names. No duplicates, obviously. The data itself must be integers within a small range, like 1-10. 
  • Index files do not have a header. They are just a variable name, a comma, and the description without commas. Only one comma per line unless you want to put the description in quotes. 
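To make that concrete, here's a toy pair of files and a couple of sanity checks in R. This is just my reading of the rules above, not an official template, and the variable names are invented:

```r
# Toy data file: header row, legal R variable names, small-integer response codes.
writeLines(c("SEX,VIEWS14,RATE15",
             "1,3,4",
             "2,1,5",
             "2,4,3"), "toy_data.csv")

# Toy index file: no header, one "variable,description" pair per line.
writeLines(c("SEX,Respondent sex (1 = male 2 = female)",
             "VIEWS14,Attitudinal item 14",
             "RATE15,Self-rated physical health"), "toy_index.csv")

# Sanity checks before uploading
d <- read.csv("toy_data.csv")
stopifnot(identical(names(d), make.names(names(d), unique = TRUE)),  # legal, unique names
          all(sapply(d, is.numeric)),                                # numeric codes only
          all(d == floor(d)))                                        # integers, small range
```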

Please leave feedback below or email me at deubanks.office@gmail.com.



Monday, July 14, 2014

Finding and Using Predictors of Student Attrition

A while back, I wrote about finding meaning in data, and this has turned into a productive project. In this article I'll describe some findings on causes of student attrition, a conceptual framework of actions to prevent attrition, and an outline of the methods we use.

In order to find predictors of attrition, we need to find information that was gathered before the student left. A different approach is to ask the student, during the withdrawal process, why he or she is leaving, but I won't talk about that here. We use HERI's The Freshman Survey each fall and get a high response rate (the new students are all in a room together). Combining this with the kinds of information gathered during the admissions process gives several hundred individual pieces of information. These data rows are 'labeled' per student with binary variables for attrition (first semester, second semester, and so on). Across several years of data from two different liberal arts colleges, we find the same kinds of predictors of attrition:

  • Low social engagement
  • High financial need
  • Poor academics
  • Psychology that leads to attrition: initial intent to transfer, extreme homesickness, and so on.
These can occur in various combinations. A student who is financially stressed and working two jobs off campus may find it hard to keep up her grades. At the AIR Forum (an institutional research conference) this June, we saw a poster from another liberal arts college that identified the same four categories. They used different methods, and we have a meeting set with them to compare notes.
For purposes of forming actions, we assume that these predictive conditions are causal. There's no way to prove that without randomized experiments, which are impossible, and doing nothing is not an option. In order to match up putative causes with actions, we relied on Vincent Tinto's latest book Completing College: Rethinking Institutional Action, taking his categories of action and cross-indexing them with our causes. Then we annotated it with the existing and proposed actions we were considering. The table below shows that conceptual framework for action.
Each letter designates some action. The ones at the bottom are actions related to getting better information. An example of how this approach generates thoughtful action is given next.

Social Engagement may happen through students attending club meetings, having work-study, playing sports, taking a class at the fitness center, and so on. This can be hard to track. This led us to consider adopting a software product that would do two things: (1) help students more easily find social activities to engage with, and (2) help us better track participation. As it turned out, there was a company in town that does exactly this, called Check I'm Here. We had them over for a demo, and then I went to their place to chat with Reuben Pressman, the CEO and founder. I was very impressed with the vision and passion of Reuben and his team. You can click through the link to their web site for a full rundown of features, but here's a quote from Reuben:

The philosophy is based around a continuous process to Manage, Track, Assess, & Engage students & organizations. It flows a lot like the MVP idea behind the book "The Lean Startup" that talks about a process of trying something, seeing how it goes, getting feedback, and making it better, then starting over again. We think of our engagement philosophy the same way:
  • Manage -- Organize and structure your organizations, events, and access to the platform.
  • Track -- Collect data in real-time and verify students live with mobile devices
  • Assess -- Integrate newly collected data and structurally combine it with existing data to give real-time assessment of what works and doesn't and what kinds of students are involved
  • Engage -- Use your new information to make educated decisions and use our tools for web and mobile to attract students in new ways
  • Rinse, and Repeat for more success!
A blog post talking more about our tracking directly is here. We take a focus on Assessing Involvement, Increasing Engagement, Retaining Students, and Successfully Allocating Funding.
Currently, we can get card-swipe counts for our fitness center, because it's controlled for security reasons. An analysis of the data gives some indication (this is not definitive) that there is a retention effect for students who use the fitness center compared to those who don't. This manifests itself after about a year, as a bonus of three percentage points in retention. The ability to capture student attendance at club events, academic lectures, and so on with an easy portable card-swipe system like Check I'm Here is very attractive. It also helps these things happen--students can check an app on their phones to see what's coming up, and register their interest in participating.
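For the record, the comparison behind that three-point figure is nothing fancier than a two-sample proportion test on retention for fitness-center users versus everyone else. A sketch with invented counts (not our actual numbers):

```r
# Illustrative only: compare retention rates for frequent fitness-center users
# versus everyone else. Counts are made up to mirror a three-point difference.
retained <- c(users = 420, others = 810)
enrolled <- c(users = 500, others = 1000)
prop.test(retained, enrolled)   # difference in proportions (~.84 vs .81) with a CI
```

The width of the resulting confidence interval is why I call this an indication rather than something definitive.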

Methods 

I put this last, because not everyone wants to know about the statistics. At the AIR forum, I gave a talk on this general topic, which was recorded and is available through the organization. I think you might have to pay for access, though.

The problem of finding which variables matter among hundreds of potential ones is sometimes solved with step-wise linear regression, but in my experience this is problematic. For one thing, it assumes that relationships are linear, when they might well not be. Suppose the students who leave are those with the lowest and the highest grades. That wouldn't show up in a linear model. I suppose you could cross-multiply all the variables to get non-linear ones, but now you've got tens of thousands of variables instead of hundreds.
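Here's a small simulation of that point, purely illustrative: attrition concentrated at both ends of the grade scale is nearly invisible to a linear (logistic) term, but obvious once grades are recoded into quartiles.

```r
# Simulated data: risk of leaving is high at both the low and high ends of GPA.
set.seed(1)
gpa  <- runif(2000, 0, 4)
risk <- ifelse(gpa < 1 | gpa > 3, 0.35, 0.10)             # U-shaped attrition risk
left <- rbinom(2000, 1, risk)

summary(glm(left ~ gpa, family = binomial))$coefficients  # linear slope close to zero, tells us little

q <- cut(gpa, quantile(gpa, 0:4 / 4), include.lowest = TRUE, labels = 1:4)
round(tapply(left, q, mean), 2)                           # quartile rates show the U shape
```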

There are more sophisticated methods available now, like lasso, but they aren't attractive for what I want. As far as I can tell, they assume linearity too. Anyway, there's a very simple solution that doesn't assume anything. I began developing the software to quickly implement it two years ago, and you can see an early version here.

I've expanded that software to create a nice work flow that looks like this:
  1. Identify what you care about (e.g. retention, grades) and create a binary variable out of it per student
  2. Accumulate all the other variables we have at our disposal that might be predictors of the one we care about. I put these in a large CSV file that may have 500 columns
  3. Normalize scalar data to (usually) quartiles, and truncate nominal data (e.g. state abbreviations) by keeping only the most frequent ones and calling the rest 'other'. NAs can be included or not.
  4. Look at a map of correlations among these variables, to see if there is structure we'd expect (SAT should correlate with grades, for example)
  5. Run the univariate predictor algorithm against the one we care about, and rank these best to worst. This usually takes less than a minute to set up and run.
  6. Choose a few (1-6 or so) of the best predictors and see how they perform pairwise. This means taking them two at a time to see how much the predictive power improves when both are considered. 
  7. Take the best ones that seem independent and combine them in a linear model if that seems appropriate (the variables need to act like linear relationships). Cross-validate the model by generating it on half the data and testing against the other half, doing this 100 times and plotting the distribution of predictive power. (A sketch of steps 3 and 7 appears just after this list.)
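Here is the promised sketch of steps 3 and 7, with hypothetical file and column names ("labeled_predictors.csv", "retained", and the four chosen predictors are stand-ins for whatever the compiled CSV actually holds); the univariate ranking in step 5 uses the same Mann-Whitney AUC as below.

```r
# Rough sketch of steps 3 and 7; file and column names are hypothetical.
auc <- function(score, y) {                    # Mann-Whitney AUC for a binary outcome
  r <- rank(score); n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

d <- na.omit(read.csv("labeled_predictors.csv"))   # hypothetical compiled file (step 2)
y <- d$retained                                    # step 1: the binary outcome we care about
X <- d[setdiff(names(d), "retained")]

# Step 3: recode scalar predictors (many distinct values) into quartiles 1-4
to_quartile <- function(x)
  as.integer(cut(x, unique(quantile(x, 0:4 / 4)), include.lowest = TRUE))
X[] <- lapply(X, function(x)
  if (is.numeric(x) && length(unique(x)) > 4) to_quartile(x) else x)

# Step 7: combine a few chosen predictors, cross-validate 100 times, plot the AUC spread
best <- c("gpa", "unmet_need", "engagement", "intent_transfer")   # hypothetical picks
cv_auc <- replicate(100, {
  train <- sample(nrow(d), floor(nrow(d) / 2))
  fit   <- glm(y[train] ~ ., data = X[train, best], family = binomial)
  auc(predict(fit, newdata = X[-train, best]), y[-train])
})
hist(cv_auc, main = "Distribution of cross-validated AUC")
```

The histogram at the end is the "distribution of predictive power" mentioned in step 7.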
Once the data set is compiled, all of this takes no more than four or five minutes. A couple of sample ROC curves and AUC histograms are shown below.


This predictor is a four-variable model for a largish liberal arts college I worked with. It predicts student retention based on grades, finances, social engagement, and intent to transfer (as asked by the entering freshman CIRP survey).

At the end of this process, we can have some confidence in what predictors are resilient (not merely accidental), and how they work in combination with each other. The idea for me is not to try to predict individual students, but to understand the causes in such a way that we can treat them systematically. The example with social engagement is one such instance. Ideally, the actions taken are natural ones that make sense and have a good chance of improving the campus. 

Saturday, July 12, 2014

A Cynical Argument for the Liberal Arts: Part Sixteen

Previously: Part Zero ... Part Fifteen

The "Cynical Business Award" goes to ReservationHop, which I read about in CBC News here. Quote:
A San Francisco startup that sells restaurant reservations it has made under assumed names is raising the ire of Silicon Valley critics.  
ReservationHop, as it's called, is a web app with a searchable database of reservations made at "top SF restaurants." The business model is summed up in the website's tagline: "We make reservations at the hottest restaurants in advance so you don't have to."  
Users can buy the reservations, starting at $5 apiece, and assume the fake identity ReservationHop used to book the table. "After payment, we'll give you the name to use when you arrive at the restaurant," the website says. 
 The "coin of the realm" in this case is the trust between diner and restaurant that is engaged when a reservation is placed. This is an informal social contract that assures the diner a table and assures the restaurant a customer. Sometimes these agreements are broken in either direction, but the system is valuable and ubiquitous. The ReservationHop model is to take advantage of the anonymity of phone calls to falsely gain the trust of the restaurant and essentially sell it at $5 a pop. This erodes trust, debasing the aforementioned coin of the realm. Maybe the long term goal of the company is to become the Ticketmaster of restaurant reservations.

One can imagine monetizing all the informal trust systems in society in this way. Here's a business model, free of charge: you know all those commercial parking lots that give you a time-stamped ticket when you drive in? It's easy to subvert that with a little forethought. Imagine an app that you use when you are ready to leave. With it, you meet up with someone who is just entering the parking lot and exchange tickets with them. You get to leave by paying for almost no time in the lot. If they do the same when they leave, the chain continues ad infinitum. Call it ParkingHop.

One can argue that these disruptive innovations lead to improvements. In both the cases above, the debasement of the trust-coin is due to anonymity, which nowadays can be easily fixed. The restaurant can just ask for a cell-phone number instead of a name, for example, and verify the customer by calling it. This isn't perfect, but generally, fixing personal identity to actions creates more responsible acts. The widely-quoted faking of Amazon.com book reviews, for example, is greatly facilitated by paid-for "sock puppet" reviewers taking on many identities. So anonymity can be a power multiplier, the way money is in politics. The natural "improvement," if we want to call it that, is better record keeping and personal registration of transactions. This is what the perennial struggle to get an intrusive "cybersecurity" law passed is all about (so kids can't download movies without paying for them), and the NSA's vacuuming up of all the data it can. We move from "trust," to "trust but verify," to "verify."

These are liberal artsy ideas about what it is to be human and what it is to be a society. The humanities are dangerous. How many millions have died because of religion or ideology? I've been wondering lately how we put that tension in the classroom. Imagine a history class with a real trigger warning: Don't take this class if you have a weak disposition. If you aren't genuinely terrified by the end of a class, I haven't done my job.

Thursday, June 19, 2014

A Cynical Argument for the Liberal Arts, Part Fifteen

Previously: Part Zero ... Part Fourteen

After being derailed last time by a quotation, I'll take up the question of constructive deconstruction: how can the corrosive truth-destroying effect of Cynical "debasing the coin of the realm" lead to improvements within a system? Note that a system (loosely defined: society, a company, a government) is the required 'realm' in which to have a 'coin.' My modern interpretation is official or conventional signaling within a group. Waving hello and smiling are social coins of the realm. Within organizations, a common type of signal is a measure of goal attainment, like quarterly sales numbers. It's this latter, more formal type of signal that is our focus today.

A formalization of organizational decision-making might look like this (from a slide at my AIR Forum talk):


Along the top, we observe what’s going on in the world and encode it from reality-stuff into language. That’s the R → L arrow. This creates a description like “year over year, enrollment is up 5%,” which captures something we care about via a huge reduction in data. R → L is data compression with intent.

As we get smarter, we build up models of how the world behaves, and we reason inside our language domain about what future observations might look like with or without our intervention. We learn how to recruit students better under certain conditions. When we lay plans and then act on them, it is a translation from language back into the stuff of reality—we DO something. By acting, we contribute to the causes that influence the world. We don’t directly control the unfolding of time (red arrow), and all pertinent influences on enrollment are taken into account by the magical computer we call reality. Maybe our plans come to fruition, maybe not.

Underlying this diagram are the motivations for taking actions. Our intelligence, whether as individuals or as an organization, is at the service of what we want. It's an interesting question whether one can even define motivation without a modicum of intelligence--just enough to transform the observable universe (including internal states) into degrees of satisfaction with the world. As animals, we depend on nerve signals to know when we are hungry or in pain. These are purely informational in form, since the signals can be interrupted. That is, there is no metaphysical "hunger" that is a fundamental property of the universe--it's simply an informational state that we observe and encode in a particular way. In a sense, it's arbitrary.

For organizations, analogs to hunger include many varieties of signals that are presumed to affect the health of the system: financial and operational figures, for example. These are often bureaucratized, resulting in "best salesman of the quarter" awards and so forth. An essential part of the bureaucracy is the GOAL, which is an agreed-upon level of achievement, measured by well-defined signals. An example from higher education is "Our goal for first year student enrollment is an increase of 6% over three years."

A paper from the Harvard Business School, "Goals Gone Wild," by Lisa D. Ordóñez, Maurice E. Schweitzer, Adam D. Galinsky, and Max H. Bazerman, provocatively challenges the notion that formal goals are always good for an organization. This itself is a cynical act (challenging established research by writing about it), but the paper is also a great source of examples of how Cynical employees can react to a goal bureaucracy in two ways:

  • Publicly or privately subverting goals through actions that lead to real improvements in the organization, or 
  • Privately debasing the motivational signals that define goals for personal gain.
The first case is desirable when an organization's goals, taken too seriously, are harmful to it. Too much emphasis on short-term gains at the expense of long-term gains is an example. This positive reaction to bad goals is not the topic of the paper, but proceeds from our discussion here. The second case is ubiquitous and unremarkable: any form of cheating-like behavior that inflates one's nominal rank, reaping goal rewards while not really helping the organization (or actually harming it).


Earlier, I referenced an AAC&U survey of employers that claimed they wanted more "critical thinkers." Employees who find clever ways to inflate their numbers are probably not what they have in mind. Before the Internet came around, I used to read a lot of programming magazines, including C Journal and Dr. Dobb's. One of them had a (possibly apocryphal) story about a team manager who decided to set high bug-fixing goals for the programmers. The more bugs they found and fixed, the higher they were ranked. Of course, being programmers, they were perfectly placed to create bugs too, and the new goals created an incentive for them to do just that: create a mistake, "find" it, fix it, get credit, repeat. This is an example of organizational "wireheading."

From the Harvard paper's executive summary, we can see problems that an ethical Cynic can fix:
The use of goal setting can degrade employee performance, shift focus away from important but non-specified goals, harm interpersonal relationships, corrode organizational culture, and motivate risky and unethical behaviors.
Armed with the liberal-artsy knowledge of signals and their interception, use, and abuse, AND prepared with an ethical foundation that despises cheating, an employee has some immunity to the maladies described above. Of course, this depends on his or her position within the organization, but generally, Cynical sophistication allows the employee to see goals for what they really are--conventions that poorly approximate reality. This is the "big-picture" perspective CEO-types are always going on about. Moreover, an ethical Cynic who is in a position of power is less likely to misuse formal goal-setting as a naive management tool.

The paper itself is engagingly written, and is worth reading in its entirety. With the introduction above, I think you'll see the potential positive (and negative) applications of Cynical acts. Informally, the advice is "don't take these signals and goals too seriously," which is a point the original Cynics repeatedly and dramatically made. More formally, we can think of a narrow focus on a small set of goals as a reduction in computational complexity, in which our intelligent decision making has to make do with a drastic simplification of the world. Sometimes that doesn't work very well, and the Cynics are the ones throwing the plucked chickens to prove it.

Next: Part Sixteen

Friday, June 06, 2014

A Cynical Argument for the Liberal Arts, Part Fourteen

Previously: Part Zero ... Part Thirteen

Because Cynical "debasing the coin of the realm" corrupts the way in which individuals or organizations are aware of the world, it may be hard to see how one gets beyond this destruction. Now I would like to speculate on how we can think of Cynicism as a constructive method.

Cynical attacks on categorical ways of knowing (I called them signals earlier) challenge the categories by turning something into its negation. A counterfeit is, and is not, a coin. When I started thinking about using a destructive method of constructing, the Sherlock Holmes quote came to mind. This led me down a rabbit hole that ironically illustrates the point. When I searched for the quote, I found the following attributed to Sir Arthur Conan Doyle:
Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.
I wanted to be sure, and it took some time to track down a reference. One site put these words in the mouth of Doyle's Sherlock Holmes in "The Adventure of the Beryl Coronet". However, this doesn't seem to be the case. After more work I found the complete works of Doyle in a single text file here. The word 'eliminate' is only used twice in the whole body of work. Here's the one we want, in the first chapter of The Sign of Four,  with Watson speaking first:
"How, then, did you deduce the telegram?"
"Why, of course I knew that you had not written a letter, since I sat opposite to you all morning. I see also in your open desk there that you have a sheet of stamps and a thick bundle of postcards.  What could you go into the post-office for, then, but to send a wire? Eliminate all other factors, and the one which remains must be the truth."
Now it's possible that Doyle used the first quote outside of a novel, but the story becomes odder when we look at the record. Using Google's ngram server, I searched for "eliminate the impossible" and "eliminate all other factors." Here's the result:

Red line is "eliminate all other factors", blue is "eliminate the impossible"
Tantalizingly, this has the common quote coinciding with the publication date of The Sign of Four, according to this site. The oldest edition I could find was a book (not the original magazine, in other words) scanned from University of Michigan's library:


Its publication date is "MDCCCCI," or 1901. I tried adding the "you" in front of the first quote, and got no hits on the ngram server, but it may have a limit on how long phrases can be. After more searching I found a scanned copy of the original 1890 Lippincott's Monthly Magazine. It has the same text as the book shown above. This would seem to rule out a change in the language between the original magazine publication in 1890 and subsequent compilation in 1901.

Google Books results show that there are 19th century instances of "eliminate the impossible," which probably explains the eruption of the blue line in the graph before the (book) publication of The Sign of Four in 1901. I looked for other partial phrases too, like "no matter how improbable." These didn't turn up Doyle, but other treasures. From Doing Business in Paris:


and this one:
The sentiment of both of these is that untruths can propagate wildly, given the right conditions, which may be what we're seeing here, and a good reason for Cynical weeding out.

Moving forward in time, I scanned 1900-1920 with Google Books, and found a magazine called Woman Citizen with this text from April 26, 1919.

The offhand rephrasing replaces "Eliminate all other factors" with "Eliminate the impossible." Six years later, we find:

This has the "however improbable" drama that Doyle didn't put in the mouth of Holmes. The attributed quote finally appears in full form in 1949:

Not only does the hyperbolic "however improbable" make an appearance, it's in italics for extra effect. The book seems to be an edited volume by various authors of Doyle/Holmes-related biography, anecdotes, and lit crit, although all I have to go by is the random samples that the stingy owner of the copyright allows Google to produce. Worldcat coughed up only five copies, none of which are online. Let us allow the matter to come to rest here.

By eliminating possibilities, we come to the tentative conclusion that Doyle's famous quote is actually due to fan fiction, which can easily be seen as a Cynical debasement of the original: it switches out Doyle's concise "Eliminate all other factors, and the one which remains must be the truth" for the nearly breathless "Eliminate the impossible, and whatever remains, however improbable, must be the truth." The fact that the embellished version is much better known, and attributed to Doyle, is a clear debasement of his words, and a minor disturbance in the reality of anyone who believes the wrong version. The famous quote is, and is not, by Sherlock Holmes.

It takes a lot of work to separate signal from noise, which makes Cynics dangerous. Now what about that construction? That will have to wait until after the rabbit soup.

Next: Part Fifteen

Thursday, June 05, 2014

A Cynical Argument for the Liberal Arts, Part Thirteen

Previously: Part Zero ... Part Twelve

Here we continue the discussion of Cynicism in higher education, using the charge "debase the coin of the realm" as the tool for analysis. In a previous installment I said that any statement of fact was a violent act. Allow me to pick up that idea here, since much of college consists of listening to statements of fact.

We constantly observe internal and external affairs through our senses, and some of these observations we encode into language, as in "the cat just knocked over the vase." This entails data compression, since we don't have time to describe everything about the cat and the vase or the irrelevant features of the situation. But it's such a convenience that we may forget that it's just language, just a crude approximation of what was actually observed in that moment. If you'll allow me a neologism, I'd like to refer to compressed data as 'da'. So we pass da back and forth, decompress it internally, ignore the inevitable errors, and this becomes a high value "coin of the realm." If you tell me "the bridge is washed out ahead, but if you turn at the gas station you can get to town that way," it's useful da. However, passing falsehoods along debases this coinage, and lying is something we tell kids not to do.

A statement like "seasons are due to the fact that the Earth's axis of rotation is tilted relative to its orbital plane" requires work to decompress. Much work in college is just to build up an ontology that lets a student associate new ideas in useful ways, sometimes by translating them into pictures. A movie that illustrates the principle in the sentence will be more effective than the sentence itself, because moving pictures can directly simulate the motion of the Earth and sun, and a viewer can create his or her own da from it. I assume that much of the time students don't know what we professors are talking about. It's just so much da-da, and that represents a failure to teach or learn or both.

Statements of fact that are inaccessible for technical reasons may not be threatening, simply because they read like a foreign language and are easily dismissed. On the other hand, much of what students learn in the humanities will challenge the way they think about the world. Religious fundamentalism running up against modern biology is an obvious example. But the critical theory that accompanies much of the humanities is more insidious, and can undermine the whole project of reasoning and knowing with a challenge from relativism. A student who internalizes this may not know what to believe by the end of it. There is something good in the idea that there is not always a single correct answer to a question, and that the process of reasoning is more valuable than any one product of it. It seems that the opposite is generally trained into students in the typical K-12 curriculum, where advancement may depend on a standardized test.

To explicate the process of deconstructing truth, allow me to re-use the example of Diogenes tossing the plucked chicken at the feet of Plato, after the latter declared man to be a featherless biped. How does this trick work?

The Cynical method of attack in this case comprises an action that demonstrates a contradiction between the real world and the way we are told to perceive it. A modern example is found in "The Serial Killer Has Second Thoughts: The Confessions of Thomas Quick," which I will take at face value for the purposes of this argument (feel free to debase that value). A mentally confused man confesses to brutal crimes, and becomes a celebrated serial killer. Then it turns out that he couldn't have committed the crimes he confessed to. Many people have been convicted of crimes that they didn't commit. But it seems that he willingly participated in this fiction, which makes him a fowl-flinging Cynic. The coin of the realm debased here is the operation of the justice system and media (in Sweden, in this case) and their accurate representation of reality. Like most public Cynical acts, it comes at a high price.

In order to construct such an epistemological attack on a system, one needs to find categories that are incompatible and, by acting, produce examples that occupy both categories. For example, the conception of worker-as-machine, which is an employer's point of view, conflicts with worker-as-human, which is the employees' point of view. A worker strike is an incompatible categorization: workers who are not working, and therefore a Cynical chicken toss. Once you see the trick, other examples become obvious: art that is not art (Duchamp), slave-as-beast versus slave-as-human, animal-as-meat versus animal-as-living-creature, Earth-as-resource versus Earth-as-home.

Some of the major discoveries of the world are paradoxical category-defiers. The discoverers of the calculus used infinitesimals, which are quantities 'infinitely small but not zero.' The only thing infinitely small can mean is zero, so this is contradictory, and yet it is exactly the idea needed to unlock differential calculus. Or the Copenhagen interpretation, in which reality acts like probability waves collapsing into real particles. Or Einstein's category-breaking conceptions of space and time (how can time operate at different rates, when time determines what a rate is?). Or the Dirac delta, which is essentially a box that has zero width, is infinitely tall, and contains one unit of area! The Liar Paradox "this statement is false," in the hands of Russell and Turing and Gödel, caused a rethinking of the fundamentals of mathematics and set logical limits on what we can know. These we should probably consign to 'cynical' rather than 'Cynical' status, since they are arguments rather than physical acts, but this may be splitting hairs.

More common than public abuse of our feathered friends is the secret use of contradicting categories for personal gain. When these become public, they may serve as inadvertent Cynical examples. For example, Slate's "Here’s the Awful 146-Word “Essay” That Earned an A- for a UNC Jock". Quote:
The University of North Carolina–Chapel Hill has already been embroiled in a scandal for allowing its athletes to enroll in fake courses for easy credit. 
A university's motivation for recruiting a top athlete is different from its motivation for recruiting a top student, yet NCAA rules require that athletes also be successful students (there is no requirement the other way around). When forced to coincide for purposes of classification, the overlap "student-athletes" seems to often include examples that fit the latter but not the former category. Actually producing these examples is a Cynical act, even though it's not intended to be public (quite the opposite). It is a straightforward way to debase the coin of the realm (the meaning of college grades). Unfortunately for these Cynics, sometimes people find out and make it public.

It seems that private Cynicism for gain, as opposed to public Cynicism, needs a name. I will refer to it as crypto-Cynicism, meaning hidden. The name seems appropriate, since this is a lot of what spies do, like assuming the identity of someone else, forging documents, getting people to trust them who shouldn't, or hiding a real message within an apparent one.

The examples show that Cynicism (as I have interpreted it) remains a powerful force for change, and that it doesn't come with ethical instructions. Colleges that teach students the deepest forms of subversion, like deconstruction, relativism, critical theory, and so on (these overlap), hand young minds solvents that can turn the world--and their own minds--to goo. Since becoming secular, colleges shy away from the construction part, which is unfortunate. We could be more intentional about the existentialist project that should follow the deconstruction--finding a personal meaning to construct out of the goo. But aside from that, there is a more troubling question.

Perhaps what employers want is really two things: (1) a class of graduates who are smart enough to pick up new tasks of varying complexity and fill the technical and social demands of being a machine-part in an organization, and (2) a smaller cadre of crypto-Cynics who are willing to break rules in order to get ahead. If so, then the unsettling conclusion is that a two-class system is exactly what's needed. The first is trained to follow rules and keep their heads down, mastering jobs that real machines will eventually take. The second provides motivation and vision uncluttered by the norms of society, its laws, science, or even a common understanding of the world, to serve as leadership. The spectrum of this second group would range from normal humans to psychopaths and mystics, who succeed or fail depending on their environment. The contrast between these two types (1) and (2) can perhaps be seen in stories like this one from the Huffington Post: "For-Profit College Enrolls, 'Exploits' Student Who Reads at Third-grade Level" in these two quotes:
A librarian at a southern California campus of Everest College abruptly resigned last week, deeply upset that the for-profit school had admitted into its criminal justice program a 37-year-old man who appears to read at a third-grade level.  
versus
Everest is owned by for-profit giant Corinthian Colleges, which is facing a lawsuit for fraud by the attorney general of California and is under investigation by 17 other state attorneys general and four federal agencies. [A senior administrator] at Corinthian, told me today that the campus believed it was appropriate to take a chance on admitting the student.
There's a natural fix to this particular problem: require that the leadership of the company come from its own graduates.