Tuesday, July 29, 2014

OK Trends

If you're not familiar with the OKCupid blog, check it out here. Christian Rudder slices and dices data from the dating site to try to reveal human nature. I find it great fun to follow along with his train of thought, which he presents engagingly and illustrates well with graphics. The articles could serve as examples to students of what 'critical thinking' might be.

The report linked above is particularly interesting because it addresses ethical issues. If you look at the comments, you'll see a range of reactions from "that's cool" to "how dare you!", including a couple by putative psychology researchers who mention IRB processes. This comes on the heels of Facebook's research on manipulating attitudes, and the resulting media fiasco.

This is a looming problem for us in higher education, too. As an example, imagine a software application that tracks students on campus by using the wi-fi network's access points and connections to cellphones. This could be used to identify student behaviors that are predictive of academic performance and retention (e.g. class attendance, social activity). Whereas manual roll-taking in class is an accepted method of monitoring student behavior, cellphone tracking crosses a line into creepy. The only way to proceed with such a project would be transparently, in my opinion, which could be done with an opt-in program. In such a program, students would be given a description and an opportunity to sign up. In return, they would receive information back, probably both the detailed data being gathered and summary reports on the project. I have been looking for examples of colleges taking this approach. If you know of one, please let me know!

See also: "okcupid is the new facebook? more on the politics of algorithmic manipulation" at scatterplot.com.

Sunday, July 27, 2014

Survey Prospector

(last updated 5/30/2015)

Survey Prospector is a web-based interface for quickly exploring discrete data. It is intended to support a "want-know-do" cycle of intelligent action. It lets you execute a predictor-finding workflow to surface potential cause/effect relationships you care about.

Workflow:
  1. Normalize scalar or nominal data into bins or small numbers of categories if necessary. 
  2. Apply any filters of interest (e.g. just males or just females).
  3. Identify a target (dependent) variable and create a binary classification.
  4. List the independent variables in decreasing order of predictive power over the dependent variable, with graphs and suitable statistics automatically generated (a sketch of this ranking step appears after the list).
  5. Browse these top predictors to get a sense of what is important, including linear and non-linear relationships between pairs of them.
  6. Visually inspect correlational maps between the most important independent variables.
  7. Create multivariate predictors using combinations of the best individual predictors.
  8. Cross-validate the model by keeping some data back to test the predictor on.
  9. Assign modeled probabilities to cases, e.g. to predict attrition. 
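
Survey Prospector itself is not written in Python, but as a rough sketch of the ranking step in item 4, here is one way to score candidate predictors by univariate AUC. The function name, the rate-encoding choice, and the NA handling are my own illustration, not the app's actual code:

```python
# Sketch: rank columns of a data frame by how well each one alone
# separates a binary target, using a direction-agnostic AUC.
import pandas as pd
from sklearn.metrics import roc_auc_score

def rank_predictors(df: pd.DataFrame, target: str) -> pd.Series:
    """Score every column against a 0/1 target column by AUC."""
    y = df[target]
    scores = {}
    for col in df.columns:
        if col == target:
            continue
        # Encode each category as its empirical rate of target=1, so
        # binned scalars and nominal data are treated uniformly.
        rates = df.groupby(col)[target].transform("mean")
        rates = rates.fillna(y.mean())  # NAs get the base rate
        auc = roc_auc_score(y, rates)
        scores[col] = max(auc, 1 - auc)  # 0.5 = useless, 1.0 = perfect
    return pd.Series(scores).sort_values(ascending=False)
```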

This all happens in real time, so the tool can be used in meetings to answer questions as they come up. A common malady of IR offices is that it's much easier to ask questions than to answer them; this tool can be used to prioritize research effectively. It lends itself to data warehousing, where you might build a longitudinal history of student data across a spectrum of types. Then it becomes trivial to ask and answer questions on the fly like "what student non-cognitives predict good grades their first semester?" or "what's the effect of work-study on first year attrition?"

Here is the application: Survey Prospector v2-1-15 [Note: if the online app doesn't work, it's because my hosting-hours limit for the month has been reached.]. A video demo can be found here, using this data set. If you need a primer on predictors and ROC curves, try this.

Here's a video tour using a 45M CIRP national data set. You can do something similar with a small sample by downloading these files:
  • CIRP1999sample.csv, a random sample of 401 rows taken from the 38,844 in HERI's 1999 CIRP survey data set
  • CIRPvars.csv, an index to the items and responses

Technical details. If you want to try this on your own data, please don't upload anything with student identifiers or other sensitive information. The data should be in this format:
  • Data files need a header row with variable names.
  • Index files do not have a header. They are just a variable name, a comma, and the description without commas. Only one comma per line unless you want to put the description in quotes. Index files are optional, but they help decipher results later on. Download the example CIRPvars.csv listed above to see one.
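
For concreteness, here is a minimal sketch of loading both file types, using the CIRP example file names from above (substitute your own files):

```python
# Sketch: read a data file (header row with variable names) and its
# optional index file (no header: variable name, comma, description).
import pandas as pd

data = pd.read_csv("CIRP1999sample.csv")

index = pd.read_csv("CIRPvars.csv", header=None,
                    names=["variable", "description"])
labels = dict(zip(index["variable"], index["description"]))

# Use the index to decipher a variable name in the results.
first_var = data.columns[0]
print(first_var, "=", labels.get(first_var, "(no description)"))
```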
I'm in the process of creating a real website for this project, but it's not finished yet.

Screenshots

This is the main tab, where the predictors are sorted and displayed in order of importance.


The graphs show the data distribution (with case types in blue or pink), a ROC curve for assessing predictor power, ratios with confidence intervals to assess statistical significance, and a table with case numbers and ratios in order to identify useful thresholds. 

The screen capture above shows a dynamic exploration of the correlations between the best predictors (red lines are negative correlations). The sliders at the top allow for coarse- or fine-grained inspection of variables. This lets you see visually how predictors cluster. In researching attrition, this technique made it easy to identify the major categories of risk evident in our data: academic, financial, social engagement, and psychological.

Please leave feedback below or email me at deubanks.office@gmail.com.

Monday, July 14, 2014

Finding and Using Predictors of Student Attrition

A while back, I wrote about finding meaning in data, and this has turned into a productive project. In this article I'll describe some findings on causes of student attrition, a conceptual framework of actions to prevent attrition, and an outline of the methods we use.

In order to find predictors of attrition, we need information that was gathered before the student left. A different approach is to ask students why they are leaving during the withdrawal process, but I won't talk about that here. We use HERI's The Freshman Survey each fall and get a high response rate (the new students are all in a room together). Combining this with the kinds of information gathered during the admissions process gives several hundred individual pieces of information. These data rows are 'labeled' per student with binary variables for attrition (first semester, second semester, and so on). In several years of data at two different liberal arts colleges, we get the same kinds of predictors of attrition:

  • Low social engagement
  • High financial need
  • Poor academics
  • Psychology that leads to attrition: initial intent to transfer, extreme homesickness, and so on.
These can occur in various combinations. A student who is financially stressed and working two jobs off campus may find it hard to keep up her grades. At the AIR Forum (an institutional research conference) this June, we saw a poster from another liberal arts college that identified the same four categories. They used different methods, and we have a meeting set with them to compare notes.
For purposes of forming actions, we assume that these predictive conditions are causal. There's no way to prove that without randomized experiments, which are impossible, and doing nothing is not an option. In order to match up putative causes with actions, we relied on Vincent Tinto's latest book Completing College: Rethinking Institutional Action, taking his categories of action and cross-indexing them with our causes. Then we annotated it with the existing and proposed actions we were considering. The table below shows that conceptual framework for action.
Each letter designates some action. The ones at the bottom are actions related to getting better information. An example of how this approach generates thoughtful action is given next.

Social Engagement may happen through students attending club meetings, having work-study, playing sports, taking a class at the fitness center, and so on. This can be hard to track. This led us to consider adopting a software product that would do two things: (1) help students more easily find social activities to engage with, and (2) help us better track participation. As it turned out, there was a company in town that does exactly this, called Check I'm Here. We had them over for a demo, and then I went to their place to chat with Reuben Pressman, the CEO and founder. I was very impressed with the vision and passion of Reuben and his team. You can click through the link to their web site for a full rundown of features, but here's a quote from Reuben:

The philosophy is based around a continuous process to Manage, Track, Assess, & Engage students & organizations. It flows a lot like the MVP idea behind the book "The Lean Startup" that talks about a process of trying something, seeing how it goes, getting feedback, and making it better, then starting over again. We think of our engagement philosophy the same way:
  • Manage -- Organize and structure your organizations, events, and access to the platform.
  • Track -- Collect data in real-time and verify students live with mobile devices
  • Assess -- Integrate newly collected data and structurally combine it with existing data to give real-time assessment of what works and doesn't and what kinds of students are involved
  • Engage -- Use your new information to make educated decisions and use our tools for web and mobile to attract students in new ways
  • Rinse, and Repeat for more success!
A blog post talking more about our tracking directly is here. We take a focus on Assessing Involvement, Increasing Engagement, Retaining Students, and Successfully Allocating Funding.
Currently, we can get card-swipe counts for our fitness center, because it's controlled for security reasons. An analysis of the data gives some indication (not definitive) that students who use the fitness center are retained at higher rates than those who don't. The effect shows up after about a year, and is worth about three percentage points in retention. The ability to capture student attendance at club events, academic lectures, and so on with an easy portable card-swipe system like Check I'm Here is very attractive. It also helps these things happen--students can check an app on their phones to see what's coming up, and register their interest in participating.

Methods 

I put this last, because not everyone wants to know about the statistics. At the AIR forum, I gave a talk on this general topic, which was recorded and is available through the organization. I think you might have to pay for access, though.

The problem of finding which variables matter among hundreds of potential ones is sometimes solved with step-wise linear regression, but in my experience this is problematic. For one thing, it assumes that relationships are linear, when they might well not be. Suppose the students who leave are those with the lowest and the highest grades. That wouldn't show up in a linear model. I suppose you could cross-multiply all the variables to get non-linear ones, but now you've got tens of thousands of variables instead of hundreds.
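
To see why this matters, here is a toy illustration with synthetic data (my own construction, not from the colleges' data): when attrition is highest at both grade extremes, the linear correlation is near zero even though grades clearly matter.

```python
# Toy example: a U-shaped effect that a linear term cannot see.
import numpy as np

rng = np.random.default_rng(0)
gpa = rng.uniform(0.0, 4.0, 5000)
# Attrition probability is highest at both extremes of GPA.
p_leave = 0.1 + 0.5 * ((gpa - 2.0) / 2.0) ** 2
left = rng.random(5000) < p_leave

print(np.corrcoef(gpa, left)[0, 1])  # near zero: no *linear* signal
# Binning reveals the U shape immediately.
for lo in range(4):
    mask = (gpa >= lo) & (gpa < lo + 1)
    print(f"GPA {lo}-{lo + 1}: attrition {left[mask].mean():.2f}")
```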

There are more sophisticated methods available now, like lasso, but they aren't attractive for what I want. As far as I can tell, they assume linearity too. Anyway, there's a very simple solution that doesn't assume anything. I began developing the software to quickly implement it two years ago, and you can see an early version here.

I've expanded that software to create a nice workflow that looks like this:
  1. Identify what you care about (e.g. retention, grades) and create a binary variable out of it per student
  2. Accumulate all the other variables we have at our disposal that might be predictors of the one we care about. I put these in a large CSV file that may have 500 columns
  3. Normalize scalar data to (usually) quartiles, and truncate nominal data (e.g. state abbreviations) by keeping only the most frequent ones and calling the rest 'other'. NAs can be included or not.
  4. Look at a map of correlates within these variables, to see if there is structure we'd expect (SAT should correlate with grades, for example)
  5. Run the univariate predictor algorithm against the one we care about, and rank these best to worst. This usually takes less than a minute to set up and run.
  6. Choose a few (1-6 or so) of the best predictors and see how they perform pairwise. This means taking them two at a time to see how much the predictive power improves when both are considered. 
  7. Take the best ones that seem independent and combine them in a linear model if that seems appropriate (the variables need to act like linear relationships). Cross-validate the model by generating it on half the data and testing against the other half, doing this 100 times and plotting the distribution of predictive power (a sketch of this step follows the list).
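
A minimal sketch of that cross-validation loop, using synthetic stand-in data and a logistic model (the app uses its own implementation; this is just a rough Python equivalent):

```python
# Sketch of step 7: fit on a random half, score AUC on the held-out
# half, repeat 100 times, and plot the distribution of AUCs.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def auc_distribution(X, y, n_trials=100):
    """Half-sample cross-validation of a simple linear classifier."""
    aucs = []
    for seed in range(n_trials):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=seed)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    return np.array(aucs)

# Synthetic stand-in for four good predictors of a binary outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X @ np.array([1.0, 0.5, 0.5, 0.2]) + rng.normal(size=1000)) > 0

plt.hist(auc_distribution(X, y), bins=20)
plt.xlabel("AUC on held-out half")
plt.show()
```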
Once the data set is compiled, all of this takes no more than four or five minutes. A couple of sample ROC curves and AUC histograms are shown below.


This predictor is a four-variable model for a largish liberal arts college I worked with. It predicts student retention based on grades, finances, social engagement, and intent to transfer (as asked by the entering freshman CIRP survey).

At the end of this process, we can have some confidence in what predictors are resilient (not merely accidental), and how they work in combination with each other. The idea for me is not to try to predict individual students, but to understand the causes in such a way that we can treat them systematically. The example with social engagement is one such instance. Ideally, the actions taken are natural ones that make sense and have a good chance of improving the campus. 

Saturday, July 12, 2014

A Cynical Argument for the Liberal Arts: Part Sixteen

Previously: Part Zero ... Part Fifteen

The "Cynical Business Award" goes to ReservationHop, which I read about in CBC News here. Quote:
A San Francisco startup that sells restaurant reservations it has made under assumed names is raising the ire of Silicon Valley critics.  
ReservationHop, as it's called, is a web app with a searchable database of reservations made at "top SF restaurants." The business model is summed up in the website's tagline: "We make reservations at the hottest restaurants in advance so you don't have to."  
Users can buy the reservations, starting at $5 apiece, and assume the fake identity ReservationHop used to book the table. "After payment, we'll give you the name to use when you arrive at the restaurant," the website says. 
 The "coin of the realm" in this case is the trust between diner and restaurant that is engaged when a reservation is placed. This is an informal social contract that assures the diner a table and assures the restaurant a customer. Sometimes these agreements are broken in either direction, but the system is valuable and ubiquitous. The ReservationHop model is to take advantage of the anonymity of phone calls to falsely gain the trust of the restaurant and essentially sell it at $5 a pop. This erodes trust, debasing the aforementioned coin of the realm. Maybe the long term goal of the company is to become the Ticketmaster of restaurant reservations.

One can imagine monetizing all the informal trust systems in society in this way. Here's a business model, free of charge: you know all those commercial parking lots that give you a time-stamped ticket when you drive in? It's easy to subvert that with a little forethought. Imagine an app that you use when you are ready to leave. With it, you meet up with someone who is just entering the parking lot and exchange tickets with them. You get to leave by paying for almost no time in the lot. If they do the same when they leave, the chain continues ad infinitum. Call it ParkingHop.

One can argue that these disruptive innovations lead to improvements. In both the cases above, the debasement of the trust-coin is due to anonymity, which nowadays can be easily fixed. The restaurant can just ask for a cell-phone number instead of a name to verify the customer, for example, and check it by calling the phone. This isn't perfect, but generally the fixing of personal identity to actions creates more responsible acts. The widely-quoted faking of Amazon.com book reviews, for example, is greatly facilitated by paid-for "sock puppet" reviewers taking on many identities. So anonymity can be a power multiplier, the way money is in politics. The natural "improvement," if we want to call it that, is better record keeping and personal registration of transactions. This is what the perennial struggle to get an intrusive "cybersecurity" law passed is all about (so kids can't download movies without paying for them), and the NSA's vacuuming up of all the data it can. We move from "trust," to "trust but verify," to "verify."

These are liberal artsy ideas about what it is to be human and what it is to be a society. The humanities are dangerous. How many millions have died because of religion or ideology? I've been wondering lately how we put that tension in the classroom. Imagine a history class with a real trigger warning: Don't take this class if you have a weak disposition. If you aren't genuinely terrified by the end of a class, I haven't done my job.

Thursday, June 19, 2014

A Cynical Argument for the Liberal Arts, Part Fifteen

Previously: Part Zero ... Part Fourteen

After being derailed last time by a quotation, I'll take up the question of constructive deconstruction: how can the corrosive truth-destroying effect of Cynical "debasing the coin of the realm" lead to improvements within a system? Note that a system (loosely defined: society, a company, a government) is the required 'realm' in which to have a 'coin.' My modern interpretation is official or conventional signaling within a group. Waving hello and smiling are social coins of the realm. Within organizations, a common type of signal is a measure of goal attainment, like quarterly sales numbers. It's this latter, more formal type of signal that is our focus today.

A formalization of organizational decision-making might look like this (from a slide at my AIR Forum talk):


Along the top, we observe what’s going on in the world and encode it from reality-stuff into language. That’s the R → L arrow. This creates a description like “year over year, enrollment is up 5%,” which captures something we care about via a huge reduction in data. R → L is data compression with intent.

As we get smarter, we build up models of how the world behaves, and we reason inside our language domain about what future observations might look like with or without our intervention. We learn how to recruit students better under certain conditions. When we lay plans and then act on them, it is a translation from language back into the stuff of reality—we DO something. By acting, we contribute to the causes that influence the world. We don’t directly control the unfolding of time (red arrow), and all pertinent influences on enrollment are taken into account by the magical computer we call reality. Maybe our plans come to fruition, maybe not.

Underlying this diagram are the motivations for taking actions. Our intelligence, whether as individuals or as organizations, is at the service of what we want. It's an interesting question whether one can even define motivation without a modicum of intelligence--just enough to transform the observable universe (including internal states) into degrees of satisfaction with the world. As animals, we depend on nerve signals to know when we are hungry or in pain. These are purely informational in form, since the signals can be interrupted. That is, there is no metaphysical "hunger" that is a fundamental property of the universe--it's simply an informational state that we observe and encode in a particular way. In a sense, it's arbitrary.

For organizations, analogs to hunger include many varieties of signals that are presumed to affect the health of the system. Financial and operational figures, for example. These are often bureaucratized, resulting in "best salesman of the quarter" awards and so forth. An essential part of the bureaucracy is the GOAL, which is an agreed-upon level of motivation achievement, measured by well-defined signals. An example from higher education is "Our goal for first year student enrollment is an increase of 6% over three years."

A new paper from the Harvard Business School, "Goals Gone Wild," by Lisa D. Ordóñez, Maurice E. Schweitzer, Adam D. Galinsky, and Max H. Bazerman, provocatively challenges the notion that formal goals are always good for an organization. This is itself a cynical act (challenging established research by writing about it), but the paper is a great source of examples of how Cynical employees can react to a goal bureaucracy in two ways:

  • Publicly or privately subverting goals through actions that lead to real improvements in the organization, or 
  • Privately debasing the motivational signals that define goals for personal gain.
The first case is desirable when an organization's goals, taken too seriously, are harmful to it. Too much emphasis on short-term gains at the expense of long-term gains is an example. This positive reaction to bad goals is not the topic of the paper, but proceeds from our discussion here. The second case is ubiquitous and unremarkable: any form of cheating-like behavior that inflates one's nominal rank, reaping goal-rewards while not really helping the organization (or actually harming it).


Earlier, I referenced an AAC&U survey of employers that claimed they wanted more "critical thinkers." Employees who find clever ways to inflate their numbers are probably not what they have in mind. Before the Internet came around, I used to read a lot of programming magazines, including C Journal and Dr. Dobbs. One of those had a (possibly apocryphal) story about a team manager who decided to set high bug-fixing goals for the programmers. The more bugs they found and fixed, the higher they were ranked. Of course, being programmers, they were perfectly placed to create bugs too, and the new goals created an incentive for them to do just that: create a mistake, "find" it, fix it, get credit, repeat. This is an example of organizational "wire heading."

From the Harvard paper's executive summary, we can see problems that an ethical Cynic can fix:
The use of goal setting can degrade employee performance, shift focus away from important but non-specified goals, harm interpersonal relationships, corrode organizational culture, and motivate risky and unethical behaviors.
Armed with the liberal-artsy knowledge of signals and their interception, use, and abuse, AND prepared with an ethical foundation that despises cheating, an employee has some immunity to the maladies described above. Of course, this depends on his or her position within the organization, but generally, Cynical sophistication allows the employee to see goals as what they really are--conventions that poorly approximate reality. This is the "big-picture" perspective CEO-types are always going on about. Moreover, an ethical Cynic who is in a position of power is less likely to misuse formal goal-setting as a naive management tool. 

The paper itself is engagingly written, and is worth reading in its entirety. With the introduction above, I think you'll see the potential positive (and negative) applications of Cynical acts. Informally, the advice is "don't take these signals and goals too seriously," which is a point the original Cynics repeatedly and dramatically made. More formally, we can think of a narrow focus on a small set of goals as a reduction in computational complexity, in which our intelligent decision making has to make do with a drastic simplification of the world. Sometimes that doesn't work very well, and the Cynics are the ones throwing the plucked chickens to prove it.

Next: Part Sixteen

Friday, June 06, 2014

A Cynical Argument for the Liberal Arts, Part Fourteen

Previously: Part Zero ... Part Thirteen

Because Cynical "debasing the coin of the realm" corrupts the way in which individuals or organizations are aware of the world, it may be hard to see how one gets beyond this destruction. Now I would like to speculate on how we can think of Cynicism as a constructive method.

Cynical attacks on categorical ways of knowing (I called them signals earlier) challenge the categories by turning something into its negation. A counterfeit is, and is not, a coin. When I started thinking about using a destructive method of constructing, the Sherlock Holmes quote came to mind. This led me down a rabbit hole that ironically illustrates the point. When I searched for the quote, I found the following attributed to Sir Arthur Conan Doyle:
Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.
I wanted to be sure, and it took some time to track down a reference. One site put these words in the mouth of Doyle's Sherlock Holmes in "The Adventure of the Beryl Coronet". However, this doesn't seem to be the case. After more work I found the complete works of Doyle in a single text file here. The word 'eliminate' is only used twice in the whole body of work. Here's the one we want, in the first chapter of The Sign of Four, with Watson speaking first:
"How, then, did you deduce the telegram?"
"Why, of course I knew that you had not written a letter, since I sat opposite to you all morning. I see also in your open desk there that you have a sheet of stamps and a thick bundle of postcards.  What could you go into the post-office for, then, but to send a wire? Eliminate all other factors, and the one which remains must be the truth."
[Update: I subsequently found the actual quote by searching for 'improbable'. Leaving the rest of the post unchanged.]

Now it's possible that Doyle used the first quote outside of a novel, but the story becomes odder when we look at the record. Using Google's ngram server, I searched for "eliminate the impossible" and "eliminate all other factors." Here's the result:

Red line is "eliminate all other factors", blue is "eliminate the impossible"
Tantalizingly, this has the common quote coinciding with the publication date of The Sign of Four, according to this site. The oldest edition I could find was a book (not the original magazine, in other words) scanned from University of Michigan's library:


Its publication date is "MDCCCCI," or 1901. I tried adding the "you" in front of the first quote, and got no hits on the ngram server, but it may have a limit on how long phrases can be. After more searching I found a scanned copy of the original 1890 Lippincott's Monthly Magazine. It has the same text as the book shown above. This would seem to rule out a change in the language between the original magazine publication in 1890 and subsequent compilation in 1901.

Google Books results show that there are 19th century instances of "eliminate the impossible," which probably explain the eruption of the blue line in the graph before the (book) publication of The Sign of Four in 1901. I looked for other partial phrases too, like "no matter how improbable." These didn't turn up Doyle, but other treasures. From Doing Business in Paris:


 and this one:
The sentiment of both of these is that untruths can propagate wildly, given the right conditions, which may be what we're seeing here, and a good reason for Cynical weeding out.

Moving forward in time, I scanned 1900-1920 with Google Books, and found a magazine called Woman Citizen with this text from April 26, 1919.

The offhand rephrasing replaces "Eliminate all other factors" with "Eliminate the impossible." Six years later, we find:

This has the "however improbable" drama that Doyle didn't put in the mouth of Holmes. The attributed quote finally appears in full form in 1949:

Not only does the hyperbolic "however improbable" make an appearance, it's in italics for extra effect. The book seems to be an edited volume by various authors of Doyle/Holmes-related biography, anecdotes, and lit crit, although all I have to go by is the random samples that the stingy owner of the copyright allows Google to produce. Worldcat coughed up only five copies, none of which are online. Let us allow the matter to come to rest here.

By eliminating possibilities, we come to the tentative conclusion that Doyle's famous quote is actually due to fan fiction, which can easily be seen as a Cynical debasement of the original. The example works by switching out Doyle's concise "Eliminate all other factors, and the one which remains must be the truth" for the nearly breathless "Eliminate the impossible, and whatever remains, however improbable, must be the truth." The fact that the latter is much better known, and attributed to Doyle, is a clear debasement of his words, and a minor disturbance in the reality of anyone who believes the wrong version. The famous quote is, and is not, by Sherlock Holmes.

It takes a lot of work to separate signal from noise, which makes Cynics dangerous. Now what about that construction? That will have to wait until after the rabbit soup.

Next: Part Fifteen

Thursday, June 05, 2014

A Cynical Argument for the Liberal Arts, Part Thirteen

Previously: Part Zero ... Part Twelve

Here we continue the discussion of Cynicism in higher education, using the charge "debase the coin of the realm" as the tool for analysis. In a previous installment I said that any statement of fact was a violent act. Allow me to pick up that idea here, since much of college consists of listening to statements of fact.

We constantly observe the world through our senses of internal and external affairs, and some of these we encode into language, as in "the cat just knocked over the vase." This entails data compression, since we don't have time to describe everything about the cat and the vase or the irrelevant features of the situation. But it's such a convenience that we may forget that it's just language, just a crude approximation of what was actually observed in that moment. If you'll allow me a neologism, I'd like to refer to compressed data as 'da'. So we pass da back and forth, decompress it internally, ignore the inevitable errors, and this becomes a high value "coin of the realm." If you tell me "the bridge is washed out ahead, but if you turn at the gas station you can get to town that way," it's useful da. However, passing falsehoods along debases this coinage, and lying is something we tell kids not to do.

A statement like "seasons are due to the fact that the Earth's axis of rotation is tilted relative to its orbital plane" requires work to decompress. Much work in college is just to build up an ontology that lets a student associate new ideas in useful ways, sometimes by translating them into pictures. A movie that illustrates the principle in the sentence will be more effective than the sentence itself, because moving pictures can directly simulate the motion of the Earth and sun, and a viewer can create his or her own da from it. I assume that much of the time students don't know what we professors are talking about. It's just so much da-da, and that represents a failure to teach or learn or both.

Statements of fact that are inaccessible for technical reasons may not be threatening, simply because they amount to a foreign language and are easily dismissed. On the other hand, much of what students learn in the humanities will challenge the way they think about the world. Religious fundamentalism running up against modern biology is an obvious example. But the critical theory that accompanies much of the humanities is more insidious, and can undermine the whole project of reasoning and knowing with a challenge from relativism. A student who internalizes this may not know what to believe by the end of it. There is something good in this idea that there is not always a single correct answer to a question, and that the process of reasoning is more valuable than any one product of it. It seems that the opposite is generally trained into students in the typical K-12 curriculum, where advancement may depend on a standardized test.

To explicate the process of deconstructing truth, allow me to re-use the example of Diogenes tossing the plucked chicken at the feet of Socrates, after the latter declared man to be a featherless biped. How does this trick work?

The Cynical method of attack in this case comprises an action that demonstrates a contradiction between the real world and the way we are told to perceive it. A modern example is found in "The Serial Killer Has Second Thoughts: The Confessions of Thomas Quick," which I will take at face value for the purposes of this argument (feel free to debase that value). A mentally confused man confesses to brutal crimes, and becomes a celebrated serial killer. Then it turns out that he couldn't have committed the crimes he confessed to. Many people have been convicted of crimes that they didn't commit. But it seems that he willingly participated in this fiction, which makes him a fowl-flinging Cynic. The coin of the realm debased here is the operation of the justice system and media (in Sweden in this case) and its accurate representation of reality. Like most public Cynical acts, it comes at a high price.

In order to construct such an epistemological attack on a system, one needs to find categories that are incompatible and, by acting, produce examples that occupy both categories. For example, the conception of worker-as-machine, which is an employer's point of view, conflicts with worker-as-human, which is the employees' point of view. A worker strike is an incompatible categorization: workers who are not working, and therefore a Cynical chicken toss. Once you see the trick, other examples become obvious: art that is not art (Duchamp), slave-as-beast versus slave-as-human, animal-as-meat versus animal-as-living-creature, Earth-as-resource versus Earth-as-home.

Some of the major discoveries of the world are paradoxical category-defiers. The discoverers of the calculus used infinitesimals, which are quantities 'infinitely small but not zero.' The only thing infinitely small can mean is zero, so this is contradictory, and yet it is exactly the idea needed to unlock differential calculus. Or the Copenhagen interpretation, in which reality acts like probability waves collapsing into real particles. Or Einstein's category-breaking conceptions of space and time (how can time operate at different rates, when time determines what a rate is?). Or the Dirac Delta, which is essentially a box that has zero width, is infinitely tall, and contains one unit of area! The Liar Paradox "this statement is false" in the hands of Russell and Turing and Gödel caused a rethinking of the fundamentals of mathematics and set logical limits on what we can know. These we should probably consign to 'cynical' rather than 'Cynical' status, since they are arguments rather than physical acts, but this may be splitting hairs.

More common than public abuse of our feathered friends is the secret use of contradicting categories for personal gain. When these become public, they may serve as inadvertent Cynical examples. For example, Slate's "Here’s the Awful 146-Word “Essay” That Earned an A- for a UNC Jock". Quote:
The University of North Carolina–Chapel Hill has already been embroiled in a scandal for allowing its athletes to enroll in fake courses for easy credit. 
A university's motivation for recruiting a top athlete is different from its motivation for recruiting a top student, yet NCAA rules require that athletes also be successful students (there is no requirement the other way around). When forced to coincide for purposes of classification, the overlap "student-athletes" seems to often include examples that fit the latter but not the former category. Actually producing these examples is a Cynical act, even though it's not intended to be public (quite the opposite). It is a straightforward way to debase the coin of the realm (the meaning of college grades). Unfortunately for these Cynics, sometimes people find out and make it public.

It seems that private Cynicism for gain, as opposed to public Cynicism, needs a name. I will refer to it as crypto-Cynicism, meaning hidden. The name seems appropriate, since this is a lot of what spies do, like assuming the identity of someone else, forging documents, getting people to trust them who shouldn't, or hiding a real message within an apparent one.

The examples show that Cynicism (as I have interpreted it) remains a powerful force for change, and that it doesn't come with ethical instructions. Colleges that teach students the deepest forms of subversion, like deconstruction, relativism, critical theory, and so on (these overlap) hand over to young minds solvents that can turn the world--and their own minds--to goo. Since becoming secular, colleges shy away from the construction part, which is unfortunate. We could be more intentional about the existentialist project that should follow the deconstruction--finding a personal meaning to construct out of the goo. But aside from that, there is a more troubling question.

Perhaps what employers want is really two things: (1) a class of graduates who are smart enough to pick up new tasks of varying complexity and fill the technical and social demands of being a machine-part in an organization, and (2) a smaller cadre of crypto-Cynics who are willing to break rules in order to get ahead. If so, then the unsettling conclusion is that a two-class system is exactly what's needed. The first is trained to follow rules and keep their heads down, mastering jobs that real machines will eventually take. The second provides motivation and vision uncluttered by the norms of society, its laws, science, or even a common understanding of the world, to serve as leadership. The spectrum of this second group would range from normal humans to psychopaths and mystics, who succeed or fail depending on their environment. The contrast between these two types (1) and (2) can perhaps be seen in stories like this one from the Huffington Post: "For-Profit College Enrolls, 'Exploits' Student Who Reads at Third-grade Level" in these two quotes:
A librarian at a southern California campus of Everest College abruptly resigned last week, deeply upset that the for-profit school had admitted into its criminal justice program a 37-year-old man who appears to read at a third-grade level.  
versus
Everest is owned by for-profit giant Corinthian Colleges, which is facing a lawsuit for fraud by the attorney general of California and is under investigation by 17 other state attorneys general and four federal agencies. [A senior administrator] at Corinthian told me today that the campus believed it was appropriate to take a chance on admitting the student.
There's a natural fix to this particular problem: require that the leadership of the company must come from its own graduates.

[Part Fourteen]

Monday, May 26, 2014

2014 AIR Forum: Correlation, Prediction, and Causation

I've put my slides for my AIR presentation on dropbox. You can access them here. The text of my remarks is included as comments under the slides.

Saturday, May 24, 2014

A Cynical Argument for the Liberal Arts, Part Twelve

Previously: Part Zero ... Part Eleven

Last time I compared organizations to biological organisms competing against Nature and against the rest of the ecology for survival. The battlefield is physical and virtual. Arm & Hammer's factories producing baking soda with less energy cost is good for the company. Convincing consumers that they need to buy a new box to put in the fridge every month is gold. Members of an organization are valuable to it in ways parallel to these two dimensions. Engineers that can improve efficiencies or design new products are valuable. So are accountants that can make profits tax-free. At the top of the organization, the role is entirely virtual. Generals push around symbols on a map while privates sweat in foxholes. A janitor who shows genius-level proficiency with a mop is not going to become CEO due to that skill. However, a CEO who doesn't understand how the physical world works--insofar as this affects the business--is probably not going to make very good decisions.

Higher education does a fantastic job of teaching students about physical reality (assuming said students want to learn about it). There's no substitute for experts in theory and practice, and the labs and equipment needed to engage physical reality in sophisticated ways. If you want to become an expert in what happens to molecules when you "ring" them with a sudden electromagnetic pulse, you can learn all about Fourier Transforms and whatnot, but you need a Nuclear Magnetic Resonance machine to actually do it.

The Enlightenment victory over shy Nature justifies the role of universities, but I think it also lends itself to the argument that education should be about physical stuff--learning how to stick needles in someone's arm or design a turbine blade OR low-complexity information-shuffling, like learning two-column accounting or how to integrate partial fractions. These are all safely science-y, easy to verify when accomplished, and straightforward to teach.

The victories and failures of The Enlightenment in the informational co-domain do not seem to be of as much interest in the public discourse on higher education. Ideally, this is where liberal arts education provides a benefit, but this message isn't being conveyed, and perhaps the institutions themselves haven't really internalized it.

With this lens in place, let's look at the signal-domain role of education as preparation for life in an organization. The latter might be a business, the military, a government bureau, or it might mean "to be a citizen," which can be restricted to a nationality or not. The Cynics invented "cosmopolitan," and we might agree that the highest calling of any educated person is to be of service to humanity as a whole (like Elon Musk, who brilliantly navigates both the physical and virtual landscape).

These respective roles are sometimes mutually exclusive. A citizen of a country may be at odds with a citizen of the world, and be the same person. Governments do bad things sometimes, and we might agree that the role of the citizen sometimes is to correct that in the name of some more abstract notion of what it means to be a citizen. It's the same with any organization.

Imagine this hypothetical advertisement from a college:
Our business school produces graduates that have the training to meet your most stringent demands in management, accounting, marketing, business law, international relations, and many other areas. In addition, they have been indoctrinated to be completely loyal to your organization, no matter how far you want to bend the law or even human decency--you can count on them to do the right thing!
This imaginary school is trying to guarantee that any cognitive dissonance in a new hire's mind between what the business wants done and any other role (e.g. citizen, human) will be resolved in favor of the business. I don't mean to demonize businesses with this example. In a real organization, including the military, loyalty is probably limited by intent. For example, a soldier swears to uphold the constitution, not to do what generals tell him/her to do, which allows a loophole for higher order goals (like preventing coups). The point is that it's important to an organization's survival to have a "signals" strategy in order to manage the virtual battlefield it competes on. And since most organizations still have humans in them, this means being intentional about the abilities and intentions of members or employees. The later Bush administration's Justice Department hires and fires did this rather crudely, and people noticed. Machiavelli talks about the idea in The Prince. Paraphrasing: when your enemies are physically beaten back is the best time to beat them at the information game too.

Separating signals from motivations is impossible because we only care about signals we care about. So any "coin of the realm" comes as a package including:

  • A signal (the coin as a denomination of exchange value, if taken literally)
  • Whatever the signal signifies in the physical world
  • A realm (the organization that relies on, and probably enforces, the signal)
  • Attitudes toward the signal and realm:
    • An organization's official stance (may or may not be the 'realm' that endorses the signal)
    • An organization's practical stance (e.g. as enforced)
    • The stance of individuals who come in contact with the signal, which may not be a single attitude.
If this seems complex, it is! Take the US dollar as a simple example of a signal (packaged in different informational form as currency, bonds, electronic accounting, etc.). It's clear that different nations have different intentional stances toward it, including outright debasement (North Korea). 


As a more complex example, consider an anecdote from Nadezhda Mandelstam's Hope Against Hope, where she describes spending the night at a friend's house during The Terror:
They were on the seventh story, so you couldn't hear cars stopping outside, but if ever we heard the elevator coming up at night, we all four of us raced to the door and listened. "Thank God," we would say, "it's downstairs" or "it's gone past." 
In the years of the terror, there was not a home in the country where people did not sit trembling at night, their ears straining to catch the murmur of passing cars or the sound of the elevator.

Stalin was sometimes presented with lists of names, beside which he would--or more likely would not--place a check mark to spare the individual. This informational signal filtered its way through the corridors of the NKVD and eventually manifested as a knock on the door at night for the unfortunate people who were identified. The Terror originated in the signal domain (to affect behavior by getting to the source of it), with physical effects (people killed or sent to the gulag). There are many rich complexities, such as the definition of "Kulak," and the show trials (see Darkness at Noon). I have written more about this in "Nominal Reality and Subversion of Intelligence."

Compared to engineering problems, where Nature may be cruel but not fickle, understanding the role of individuals in toxic situations is a hard problem. The Great Terror was not the work of Stalin alone, and it's easy enough to demonize the NKVD agents, but since they were presumably human beings too, it makes more sense to try to understand the signal/motivation balance that made them behave as they did. What were the signals and debasements thereof? What competing realms? What intentional stances toward these?

A liberal arts curriculum is bound to include more of this kind of wrestling with hard problems. I think it's mostly done on paper, as thought exercises, but this is better than nothing. There are ethical limits on what sorts of practice we can engage in (a 'lab experience' in the Great Terror sounds pretty dicey), but there may be a homeopathic "Small Fright" that can be experienced by undergraduates without damaging them, and that would let them try hands-on Cynicism.

----

The 'Cynic of the day' award goes to NPR's "'Mischievous Responders' Confound Research On Teens," which gives an amusing account of epistemological struggle.

The runner-up is the BBC's "Should we all be a bit psychopathic at work?", which asks how far we should prune back our internal signals for getting along with others.

Next: Part Thirteen

Thursday, May 22, 2014

A Cynical Argument for the Liberal Arts, Part Eleven

See also parts: [Zero] [One] [Two] [Three] [Four] [Five] [Six] [Seven] [Eight] [Nine] [Ten]

The line of thought so far is that:

1. Cynicism is powerful and can be beneficial or detrimental to the individual, to society, and to employers,
2. Cynicism is better learned or cultivated at liberal arts colleges than in professional programs; in particular, through diluted exposure, graduates are more likely to be responsible critics and users of the philosophy.
3. That this is not obvious to employers, despite what they say on surveys, and
4. That the employability deficit can be overcome by liberal arts colleges themselves.

We start by showing that organizations breed Cynicism. Because I have cast classical Cynicism as a solvent for epistemology, allow me to describe organizations accordingly. The Darwinian understanding of the production and memory of novelty is fundamentally about information and its relationship to the world, which is an ideal tool for us here. In biology, informational signals are transmitted through time by genetic and epi-genetic states that get translated into phenotypes (the bodies of plants and animals and their behavior), which then compete with each other for survival and reproduction. The result is a competitive truth-finding exercise that randomly explores the natural world (including the ecology itself) for relative advantages. Evolution only proceeds non-randomly where these truths are discoverable. For example, it may be in the long-term interests of bacteria to figure out how to travel to other planets, but this ability may not lie within the discoverable landscape of genetic traits.

Modern organizations are similar to biological entities. They encode information into processes, procedures, paperwork, job definitions, and so on, which I'll refer to as an ontology. The ontology loosely represents the way the organization "understands" the world. Of course, it's not really intelligent the way people are--it's more like a machine, which is the metaphor I began with. The machine has a certain amount of randomness in its behavior, but it will probably have well-defined ways of perceiving the world and encoding those perceptions into the bureaucratic language of its ontology. For example, a team of accountants that produce an audit report create an official understanding of the organization's monetary value, cash flow, and so on. This information can be transmuted into reality too; for example, a company with solid financial statements can get a loan to build a new factory. There's nothing in the ontology that requires morality (Google's "don't be evil" aside), which lies within individuals, not the organization per se.

Just like in biological ecologies, most organizations compete for limited resources. This is truth-finding when advantages are discoverable, which implies that they can be perceived by the organization. Those with limited ability to understand the world will be at a disadvantage. You can watch this play out in real time at a basketball game. Motivations are clearly understood through the rules of the game, and the ways of knowing success are deliberately clear--the basket even has a net hanging from it so that it's obvious when points are scored (compare this to rating figure skating). This creates a competition between the two teams for truth-finding, which in this case means finding more effective ways of playing the game (better strategy, tactics, training, players, etc.). This would not be the case if points were scored entirely at random. From this point of view, the fans turn out to see the evolution of team ontologies. These unfolding histories are the subject of counterfactual conjectures ("what the coach should have done was..."), which the fans are unlikely to think of as metaphysics, but it fits the mold.

Cynicism is life and death to organizations. If an organization's conception of reality is sufficiently undermined, say by an enemy general using deception, it may make bad decisions. It can also easily fool itself. I wrote a series of articles about this for the Institute for Ethics & Emerging Technologies, which you can find here, here, and here, so I'll pass over this point, called "wireheading" in the computer science literature.

But Cynical attacks on the reality of others are so effective that they can be a means of keeping an organization alive too. This morning a colleague walked in and, in our conversation, volunteered the following story. I have no way to verify it, but it illustrates the point.
"Joe" gets a degree from an online college, and is delighted when an school-arranged internship turns into a real job upon graduation. He is happy at the job, but is fired after six months and a day for unspecified reasons. A new graduate is hired in his place. He discovers that the CEO of the company is also on the board of the college he graduated from, and hypothesizes that  the company is used to inflate gainful employment percentages for the college.
In this tale, the college is debasing what "gainful employment" means to the Department of Education. The next story should also be treated as apocryphal. It illustrates how tangled these signals get, and how Cynicism naturally emerges. The story was told to me by a historian friend who said it originated with someone in the State Department.
As the story goes, the leadership of the Cold War USSR needed good information about the size of their economy. But they couldn't trust their underlings because Cynicism was a survival trait: tell the boss what he wants to hear. Instead of accepting the bloated, over-optimistic estimates of their own people, they relied on the CIA to tell them the truth. On the other side of the world, the CIA had indeed calculated what they thought the size of the USSR's economy was, but the number was so small that they thought no one in Washington would believe them. So they artificially doubled the number. Therefore the Kremlin used an estimate of their own economy that was about twice as big as it should have been.
These stories, true or not, illustrate the kinds of games that are played within and between organizations. When they have happy endings, sometimes we call them "disruptive technologies" or "competitive advantage." On the other hand, sometimes we call them Enron and Bernie Madoff and mortgage-backed securities.

Bureaucrats are usually thought of as boring, but nothing could be further from the truth. They handle, with their copy-fluid-stained fingers, the neurology of the organization. The reality by which it lives and dies is contained in those forms and procedures for acting on forms, and the relationship between the content and actual reality is constantly being subverted. People (gasp) lie on paperwork to get what they want. A sufficient break with reality leaves the organization in a state like psychosis. Or like the dodo bird--choose your metaphor.

For a vivid development of a psychotic break with reality, read Michael Lewis's The Big Short, where he describes how the ratings agencies were 'gamed' to bless crummy investments with the official stamp of worth. These ratings are almost literally "coins of the realm," since they limit the behavior of institutional investors. A quote from page 98 of Lewis's book:
The big Wall Street firms [...] had the same goal as any manufacturing business: to pay as little as possible for raw material (home loans) and charge as much as possible for their end product (mortgage bonds). The price of the end product was driven by the ratings assigned to it by the models used by Moody's and S&P. The inner workings of these models were, officially, a secret: Moody's and S&P claimed they were impossible to game. But everyone on Wall Street knew that the people who ran the models were ripe for exploitation.
This is epistemological warfare, and you want the most capable Cynics on your side.

Update: The image below is taken from the SEC's 2008 report "Summary Report of Issues Identified in the Commission Staff's Examinations of Select Credit Rating Agencies", and has been reformatted around this single bullet point.
Next: Part Twelve

Wednesday, May 21, 2014

A Cynical Argument for the Liberal Arts, Part Ten

See also parts: [Zero] [One] [Two] [Three] [Four] [Five] [Six] [Seven] [Eight] [Nine]

In contrasting liberal arts education and 'jobs training', I've compared the latter to the construction of robots. This is fair for some kinds of jobs--like assembling parts to make a consumer product--where it's clear that automation is well advanced, but what about those "high-paying" jobs that college is supposed to prepare people for? According to AAC&U's LEAP initiative, which included surveying employers [source], the liberal arts come out looking pretty good. According to the report:

  • "Ability to innovate" is overwhelmingly important
  •  "Capacity to think critically, communicate clearly, and solve complex problems, [which] is more important than [a candidate’s] undergraduate major.”
  • "Ethical judgment and integrity; intercultural skills; and the capacity for continued new learning."
  • "When read a description of a 21st-century liberal education*, a large majority of employers recognize its importance; 74 percent would recommend this kind of education to a young person they know as the best way to prepare for success in today’s global economy."
We should note, however, that:
The mission of the Association of American Colleges and Universities (AAC&U) is to make liberal education and inclusive excellence the foundation for institutional purpose and educational practice in higher education.
Also, what people say on surveys is not necessarily indicative of how they act. I went looking for contrary opinions, and found "What's a Liberal Arts Education Good For?" at Huffpost.com. This article reinforces the survey with a philosophical argument, but some of the comments that follow are from unhappy liberal arts graduates. Here are some edited samples, emphasis added:
This is the same sort of garbage that got me where I am today, the poorhouse.
A liberal arts education is a hideous waste of time for nearly all those who get one. It prepares the graduate for absolutely nothing. If you emerge from 4 years of college with a degree and no one is recruiting you for a job, you just wasted 4 years of life, a lot of money and a whole lot of effort. --newsreader64

Liberal arts do not translate to making any money so that had better not be a factor in the choice. It is for rich people. --escobar
Recent personal events have led me to a rather different conclusion. I have a BA from a small liberal arts college, and an MA in a mushy semi-science (anthropology). [...] Now, without a professional degree, I can't even get an interview for positions which I could do with ease. I suspect this has a lot to do with the sheer volume of job-seekers on the market and the handy shortcut that a professional degree offers the HR person tasked with reading hundreds of resumes. So, despite my fervent belief in liberal arts, I am contemplating a return to school to get a law degree. --kpod

That comment gives the Cynic a kernel to chew on, and this final one serves up more fodder:
As a newly minted grad with my Masters in History, fortunate enough to be teaching at a community college this semester, I am a big booster for Liberal Arts. I spent the first 25 years of my life pursuing a very successful career in a fortune 500 company and always wondered what it was about engineers and MBA's that left me feeling that some aspect of their education was lacking. After returning to school and starting with an associates degree in Liberal Arts the answer is now very clear. On the whole most of them had had the creative skills driven out of them by empirical doctrine and a value system of conformity. Give them a project or a goal and they were fine, immoral to a large degree when it came to people management but perfectly capable of meeting their objectives. --Paulo1
Although these samples are not guaranteed to be representative, two points are worth drawing out:

  • Some liberal arts degrees may not signal value to employers because of their apparent mismatch to job descriptions, and therefore these candidates for jobs are automatically screened out.
  • Professional (non-liberal arts) training may lend itself to conformity (or attract those kinds of people) and in the context of the job, immorality.
The first of these is just signaling, and as such is amenable to cynical or Cynical attack--something liberal arts colleges ought to be good at. The second point is an argument that education in the humanities produces graduates with more humanity, and gets back to the 'employee as robot' metaphor.

All of this boils down to the argument that liberal arts education can create valuable outcomes (those in the survey at the top), but that these are not easily marketed to employers. It's as if employers say they really want to eat healthy food, but belly up to the fast food counter in practice. Next time I'll walk deeper into the weeds, toward the issue I find more substantive and more interesting: what place does a Cynic have in a bureaucracy?

Next: These Go to Eleven

Tuesday, May 20, 2014

A Cynical Argument for the Liberal Arts, Part Nine

See also: [Part Zero] [Part One] [Part Two] [Part Three] [Part Four] [Part Five] [Part Six] [Part Seven] [Part Eight]

The awareness we have of the world is mediated through signals from sensory organs and the meaning we make of these. As a practical matter, the information we receive has to be compressed in order to make sense of it. For example, we receive more than a million bytes per second through vision alone, and formulating cause/effect hypotheses about the world without compression would be practically impossible. Right now there is a fork lying on the table to my right, but the tines are hidden by a bag of dried fruit. That sentence comprises a hundred or so bytes of information, yet it communicates many possible ways to visualize the scene (decompression)--picking one makes it concrete enough to build a narrative from. This is only possible because of very high data compression.
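To make that ratio concrete, here is a toy sketch in Python. The snapshot dimensions are my own assumption for illustration, not a measurement, and the point is only the order of magnitude:

    import zlib

    # The fork-and-dried-fruit sentence from above: about a hundred bytes of text.
    sentence = ("Right now there is a fork lying on the table to my right, "
                "but the tines are hidden by a bag of dried fruit.")
    raw = sentence.encode("utf-8")

    # zlib squeezes out statistical redundancy, but short strings barely
    # compress; the drastic compression already happened in choosing the words.
    zipped = zlib.compress(raw)
    print(len(raw), "bytes of raw text")
    print(len(zipped), "bytes after zlib")

    # Compare with the scene itself: one uncompressed 640x480 RGB snapshot
    # (an arbitrary, assumed size) is nearly a megabyte.
    pixel_bytes = 640 * 480 * 3
    print(pixel_bytes, "bytes of pixels")
    print(round(pixel_bytes / len(raw)), "to 1, scene to sentence")

The lossless zip step is a sideshow; the ruthless, lossy compression--throwing away nearly every pixel-level detail and keeping the gist--happened when the scene was turned into a sentence.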

We also have simplified internal signals. We can be "hungry for red beans and rice," but when our stomach grumbles, it just signals a generic need to be indulged. Pain, too, comes in flavors, but an itch on the back is pretty similar to an itch on the leg--the most important information (an itch, a burn, a bug crawling up your neck) is signaled efficiently. By contrast, imagine being presented with a full account, at the cellular level, of all the relevant activity and having to sort through it all for meaning.

Perhaps one of the fundamental attributes of being human is the ability to recognize perceptual signals at this meta level (as abstractions, in other words), where they can be manipulated. New ones can be created, for example by slipping small magnets under the skin to directly feel electrical/magnetic flux, or by developing a taste for Scotch whisky. More familiar is the interdiction of signals, as with pain medication. A more fanciful idea is described in NYmag's "Is It Possible to Create an Anti-Love Drug?".

Two heirs to classical Cynicism, the Stoics and Epicureans, addressed internal signals. For example, ideas about the nature of grief and what to do about it are described in "How (And Maybe Why) To Grieve Like an Ancient Philosopher". The signals-based viewpoint also leads directly to the idea that death is not something to be feared, because it is simply an absence of signals. Contrast this to religions that recommend optimizing actions in life so as to produce the attractive signals in the "afterlife."

We can think of internal signals as "coins of the realm" and proceed to debase them. Drug addiction is one way to do that, but meditation, counseling, and meta-cognition can also subvert our out-of-the-box internal signals. Traditional liberal arts curricula explore this idea from many angles, even if it's not usually packaged that way. For example, the tension between intuition and rational thought (signals of what's real) is a topic in psychology (e.g. see Daniel Kahneman's book Thinking, Fast and Slow). Add social, political, ethical, and biological signals: these are all explored from innumerable angles in the sciences and humanities. These perspectives--if taken seriously--can create in the learner a sophisticated meta-cognition that can be practically applied as an existentialist project. It goes like this: all signals are abstract by definition, which means there is a fundamental arbitrariness to them from the point of view of the receiver. Given the possibility of preferring some signals over others, we can imagine a project of internal engineering to attenuate or amplify signals according to our most demanding desires.

This is a caustic process, and fully as dangerous as any Cynical enterprise. If one strips away too much, tossing aside all social and moral guides, for example, one could become a sociopath (this resembles Marquis de Sade's Cynical project, as described in The Cynic Enlightenment, starting on page 106). Or strip all the signals away and you get nihilism or suicide. But, more positively, the ongoing process of constructing a personal ontology can produce a freedom of mind that was modeled by Diogenes.

Liberal arts curricula expose internal signals and ways of attacking them, through relativism, post-modern thought, critical theory, and simply the exposure to many ways of thinking, historical decisions, and thought experiments. And so on. As with the academy in general, the approach is mostly theory and exposition rather than active mind-engineering. There is undoubtedly more colleges could do to enable self-subversion, but it would also be dangerous. I think there is some middle ground where we could operate in sandbox mode, so that students could gain some experience, and some experiences like this are already available. For example, an assignment to sleep on the street for a couple of nights, or to practice asceticism in some form. My daughter's high school history teacher runs a weeks-long project that consists of secretly identifying students as 'communist' or 'capitalist' and prohibiting one side from communicating with the other. Students don't know which side they are on, and the teacher has spies everywhere--he shows them photos and social media screenshots of their interactions, and deducts points accordingly. This is Cynical in that it undermines normal discourse--designed to loosely model The Terror, I'm sure. The benefits to students potentially include reflection on the active management of feelings of unfairness or even fear. Anyone who can't see the applicability to a work environment isn't trying.

Beyond dramatic life-changes, internal freedom to attenuate and amplify signals has the potential to produce better workers too. How many of our new graduates are going to fall into their dream jobs right away? How many workplaces are unfair to employees or have abusive bosses or mean co-workers, or arbitrary rules or demeaning requirements? What, exactly, in "jobs training" is supposed to prepare a young mind for these assaults? Wouldn't it be better if they'd read and internalized The Prince? Wouldn't it be better if they knew about Foucault and the evolution of ontology and power, and how signals are ultimately arbitrary and malleable, and constantly being subverted by those who can do so to further their own ends?

Well, no. That's probably not what the employer wants. Foxconn's replacement of humans with robots apparently involves collaboration with Google to design an appropriate operating system. This is, in effect, an attempt to specify in code what a perfect employee is. You can bet there won't be a subroutine named for Machiavelli or Diogenes. (Update: apparently Google's self-driving cars have never gotten a traffic citation.)

Next time: signals and subversion at work, or "Diogenes as assistant to the regional manager."

[Go to Part Ten]