In "
On Design," I gave three versions of designing for an outcome: soft, forward, and inverse, in increasing degree of difficulty. The question "how difficult is inverse design?" is of utmost importance when we consider complex systems. As a real example, consider how the government of the United States is "designed." By this I mean, the way laws and policies are created and enforced. Because of conflicting goals of different constituents and the inherent difficulty of the project, there is no complete empirical language to describe a state of affairs, let alone do a forward simulation to see the status of the nation in, say, three years. And even if we
did have such a language, we would be limited to simulation and prediction of only the "easiest" parameters. And even these would subject to the whims of entropy, a subject I'll take up later.
I submit that individual governments, as well as companies, universities, and militaries, use soft design with bits and pieces of forward and inverse design thrown in (for example, trying to forecast near-term economic conditions to help determine monetary policy).
Government in the general sense has been "designed" by being tossed into the blender of fate and tested by real events. The following video of the history of Europe shows what I mean.
Natural selection would seem to be at work here, weeding out the worst designs. But it's not Darwinian, because countries change rapidly over time along with their populations and the minds of their leaders. One spectacularly bad idea (like invading Russia, apparently) can bring down a whole nation. So what we are left with is a very temporary list of "least bad designs." Of course, many other factors are important, such as geography, natural resources, and so on. Even so, the people who live there still have to make use of such advantages. If Switzerland abandoned its natural mountain fortress and invaded Russia, it likely wouldn't end well.
Darwinian evolution is different from this national evolution. In the former, good solutions can be remembered and reused through genes or any other information passed from generation to generation. Diversity is created through recombination, mutation, population isolation, and so on. Darwinian evolution comes with an empirical language that we partly understand. To make a metaphor of it, "programs" are written in phenotypes and these "are computed by" the laws of physics and chemistry using the design and environment as "inputs." The fact that scientists can discover this language and use it to make predictions should be appreciated for the miracle that it is: we are witnesses to a dynamic but understandable problem-solving machine of enormous scope that has worked spectacularly well at producing viable designs
with only an empirical language. Evolution does not use predictive techniques (that is, anticipating that a critter will need wings and therefore building them--for a dramatic example of this, see this video). But there do exist creatures who do use forward and inverse design to plan their day. If you throw a ball at a target, you're predicting. If you go to the fridge to get food, you're using the inverse technique: starting with the outcome (get food) and working back to the solution. But this still isn't good enough.
Here's the rub: forward design isn't enough to guarantee any more than short term outcomes, and our ability to do inverse thinking is very limited. Let me pose a problem:
What present actions will lead to [insert subject] existing in a healthy state 100 years from now? 1000 years from now? 10,000 years from now?
The question is posed as an inverse design problem, starting with the goal and asking what needs to be done to achieve that goal. If we had a good language with which to describe the states, we could at least imagine an evolutionary approach, shown in the diagram below, with the red dot being the desired goal.
We're asking "where do we need to be NOW in order to end up where we want to be AFTER?" The forward design approach is to simulate lots of "befores," see where they end up "after," and choose the best solution we can find. If we are lucky, we can use a Darwinian approach, combining partially successful solutions or tweaking "near misses" to home in on something better. This depends on the sort of problem we're trying to solve, and specifically on whether or not it is continuous in the right way. Sometimes being close isn't any good--those are the "a miss is as good as a mile" problems. All of this highlights the importance of the empirical language, which must have rules precise and reliable enough to allow this kind of prediction and analysis. Clearly, in the case of governments, companies, universities, or even our own selves, this is not possible.
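Where a trustworthy forward simulator does exist, the "simulate lots of befores" strategy is easy to sketch. Here is a minimal toy version in Python; the simulate function, the goal point (the red dot), and the mutation sizes are all hypothetical stand-ins for whatever a real problem would supply.

```python
import random

GOAL = (3.0, -1.0)  # the desired "after" state: the red dot

def simulate(before):
    # Stand-in forward model: maps a starting state to an ending state.
    # A real model (economic, institutional, physical) would be far messier.
    x, y = before
    return (x + 0.5 * y, y - 0.25 * x)

def error(before):
    # How far does this starting point leave us from the goal?
    ax, ay = simulate(before)
    return (ax - GOAL[0]) ** 2 + (ay - GOAL[1]) ** 2

# Step 1: brute force -- simulate many random "befores" and keep the best.
candidates = [(random.uniform(-10, 10), random.uniform(-10, 10))
              for _ in range(1000)]
best = min(candidates, key=error)

# Step 2: Darwinian tweaking -- mutate the near miss and keep improvements.
for _ in range(5000):
    mutant = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
    if error(mutant) < error(best):
        best = mutant

print("best starting state:", best, "ends up at:", simulate(best))
```

Note that step 2 only helps because this toy problem is continuous in the right way: small changes to the starting point produce small changes in the outcome. For "a miss is as good as a mile" problems, the tweaking loop buys you nothing.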
With only forward design techniques and lacking a good empirical language, we can still solve the problem with massive brute force: by actually creating a host of alternatives and seeing what happens in real time. A computer game company could do this, for example. Rather than spending a lot of money to find the bugs in its game, it could just begin selling it, knowing that the game will be run on many different systems and will display many kinds of problems. With this data in hand, it can begin to debug. This seems to be a real strategy. This sort of solution obviously won't work for a government, although in a democracy we have a non-parallel version: routinely swapping out one set of leaders for another to see what works best (in theory at least).
Paying it Backward. What would it look like if we had all the tools to solve the inverse problem? We could pose it precisely, run forward simulations like the one pictured above,
and we would have a huge advantage, shown in the graphic below.
Again, the red dot is where we've decided we want to be after a while. The inverse solver shows us all of our possible starting places in the now. There are multiple ones if we haven't completely specified the eventual outcome, which tells us what opportunities we have now to optimize things other than the one we were thinking about when we posed the problem.
Example: An artillery officer is given the task of attacking a distant building. The locations of his guns and the target are fixed. Using ballistics equations, we can work backwards to show all the possible solutions: e.g., a high arcing shell like a mortar's, or a flat trajectory. This choice will determine how much powder is used.
The inverse solver illuminates "free" parameters and allows us to customize our solution.
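To see what an inverse solver looks like when we do have the equations, here is a minimal sketch of the artillery example, assuming idealized drag-free ballistics and a fixed, hypothetical muzzle speed. Inverting the range equation R = v^2 * sin(2*theta) / g generally yields two firing angles that hit the same target: the flat trajectory and the high mortar-like arc. That pair is exactly the kind of "free" choice the officer is left with.

```python
import math

g = 9.81                # gravity, m/s^2
v = 250.0               # muzzle speed, m/s (hypothetical powder charge)
target_range = 3000.0   # distance to the building, m (hypothetical)

# Invert R = v^2 * sin(2*theta) / g  =>  theta = 0.5 * asin(g * R / v^2)
s = g * target_range / v**2
if s > 1:
    print("Target is out of range at this muzzle speed.")
else:
    theta_low = 0.5 * math.asin(s)         # flat trajectory
    theta_high = math.pi / 2 - theta_low   # high, mortar-like arc
    for name, theta in [("flat", theta_low), ("arcing", theta_high)]:
        tof = 2 * v * math.sin(theta) / g  # time of flight
        print(f"{name}: launch angle {math.degrees(theta):.1f} degrees, "
              f"time of flight {tof:.1f} s")
```

Vary the powder charge (the muzzle speed v) and the whole family of solutions shifts, which is the officer's other free parameter.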
Conclusions. For the real-world messy problems we face in complex human organizations, it's fair to say that we are not very good at long-term planning. In some cases it may be impossible because the forward simulators simply don't exist--there are no reliable cause-and-effect mechanisms. Trying to predict the stock market might fall into that category. In other areas, short-term goals are given preference over long-term goals. This is reasonable for at least two reasons. First, the further out a prediction is, the more likely it is to be wrong. Second, as individual humans, our ability to affect events is confined to a narrow window (e.g., a term for a legislator), and most of us want positive feedback now, not 1000 years from now.
As a very real example of our collective difficulty with long-term planning, consider the issue of human-caused climate change. With the terminology given in these two posts, it's easy to dissect the arguments:
- Empirical language: The basic facts about temperature change and CO2 levels are accepted by the scientific community, but still debated as a political matter.
- Forward design: Cause and effect is challenged in the political discourse, and computer simulations are therefore called into question. Unaccounted-for causes are conjured to explain away data that is (to some extent) agreed upon.
- Inverse design: Prescriptions about what to do now to affect the future climate are attacked as being too detrimental to the present, and/or useless.
The question of what happens to the climate is scientific--this is the arena where we are best at design. If we can't understand and act on such threats intelligently, it's very hard to argue that we have any collective long-term planning ability. Note that I'm not neutral on this particular question. The evidence is overwhelming that the risk to our descendants is very high. But go read the experts at RealClimate.org.
Higher Education. This isn't a climate change blog; what does this analysis have to do with your day to day job in the academy? Everything. From the oracle at Delphi: "Know Thyself." What functional aspects of your institution are understood? How many are understood well enough to make predictions? How many are understood well enough to make inverse predictions?
We work backwards all the time. Suppose budgetary concerns have pushed up freshman enrollment targets, so the admissions office is tasked with bringing in 1000 new students for the fall. This is posed as an inverse problem, and with the standard empirical language of admissions we can build basic predictors. This is the "admissions funnel": prospects -> applicants -> accepted -> enrolled (the simplified version). We usually have historical data on the conversion rates between these stages. If the universe is kind, we can use Darwinian methods--keep what works, assume that what worked last time will work again, and tweak to see if we can make it better. If there's a significant discontinuity (suppose state grants suddenly dried up, or we have a new competitor), the old solutions may not work anymore.
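As a minimal sketch of that backward calculation: start with the target enrollment and divide back up the funnel using historical conversion rates. The rates below are hypothetical stand-ins for whatever the data actually shows.

```python
import math

# Historical conversion rates between funnel stages (assumed values)
rates = {
    "prospect -> applicant": 0.15,
    "applicant -> accepted": 0.60,
    "accepted -> enrolled": 0.35,
}

target_enrolled = 1000

# Inverse step: walk the funnel from the bottom up, dividing by each rate.
needed = float(target_enrolled)
plan = []
for stage, rate in reversed(list(rates.items())):
    needed = needed / rate
    plan.append((stage.split(" -> ")[0], math.ceil(needed)))

for stage, count in reversed(plan):
    print(f"{stage:>9}: about {count:,}")
print(f" enrolled: {target_enrolled:,}")
```

With these made-up rates, hitting 1000 enrolled means cultivating roughly 32,000 prospects. The Darwinian caveat is built right in: the moment any conversion rate shifts--a new competitor, state grants drying up--the whole backward calculation shifts with it.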
We can and should work hard to create an empirical language and use it to simulate our futures, trying to end up in a good one. This isn't enough.
Short-term optimization sometimes leads to long-term optimization, but not as a rule. Think of it as a maze you're trying to escape. If at every junction you choose the path that takes you closer to the opposite wall of the maze, you may make smart moves in the short term only to discover there's no exit there: a long-term failure.
Even though we can't solve the inverse problem entirely, we may find that we have enough empirical vocabulary to make some important decisions. What do we want the demographics of the student body to look like in 10 years? Answering this question about the future puts constraints on how we operate now, even if they are fuzzy and inexact. Forget about solving the problem exactly--that's impossible--and think about what constraints are imposed in the big picture. Do we want a strong research program, a large endowment, or a national reputation for X? More idealistically, what do we want to be able to say about our alumni in a decade or two? How much does their success matter, and what sort of success are we talking about? Work backwards. I'll finish with an example.
Example: Student Success. If we focus on a goal for our ultimate accomplishment: the education of students, what does this reveal? How exactly do we want our students to benefit from their education? Some possibilities:
- Happiness with life
- Being good citizens
- Being successful financially
- Being loyal to their alma mater
- Getting a job right out of school
The overwhelming narrative in the public discourse is that we want students to be "globally competitive" and "get well-paying jobs." But that's a means to some end. What is the ultimate aim? On a national scale, the answer might be better national security and a stronger economy. For an institution, the answer might be that we want our products strongly identified with our brand, or that we want them to donate lots of money in the annual campaign. I think it's important to start there and ask "why do we care?" You could follow that up with lots of activities, like telling the students themselves why you care, if that's appropriate.
Suppose that we care because we want the institution to be able to rely on an international body of alumni who will contribute back in money, connections, expertise, and other intangibles. This will enable the university to grow by presenting global options that may not be apparent now, but also establish an exponentially-growing revenue stream through an expanding endowment as successful alumni give large gifts back to the institution. Working backwards, we might identify several tracks to success:
- The state department track--prepare students for high-level international government positions
- The military track--ditto for the military
- The global entrepreneur track--help them achieve independence
- The big corporate track--give students the skills to compete and succeed within vast multi-nationals
- The wildcard track--for those students who don't fit the mold, who are intelligent and creative but don't want to be entrepreneurs or work in a cubicle. This could include scientists, philosophers, and artists of all stripes.
If we keep working backwards, even without exact solutions, we can make some good guesses as to the curriculum each track needs, and the type of faculty mentors we need. This neatly sidesteps the drift toward vocational education that the public narrative implies, and gives the institution a raison d'être. There's nothing wrong with telling students "we want you to succeed so that you'll help us succeed." This sort of pseudo-altruism is what keeps the population going, after all. Thanks, mom and dad.
Think Backwards. Short-term forward planning is like beer: it's obviously a good idea at the time, but watch out for the hangover.