22 September 2014

Within reason

Why we’re not as irrational as the nudgers want us to think

Humanity’s achievements and its self-perception are today at curious odds. We can put autonomous robots on Mars and genetically engineer malarial mosquitoes to be sterile. Yet the news lately from popular psychology, neuroscience, economics, and other fields is that we are not as rational as we like to assume. We are prey to a dismaying variety of hard-wired errors, and prefer winning to being right. At best, so the story goes, our faculty of reason is at constant war with an irrational darkness within. At worst, we should abandon the attempt to be rational altogether.

Yet the modern thesis of severely compromised rationality is more open to challenge and reinterpretation than many of its followers accept. And its eager adoption by today’s governments threatens social consequences that many might find undesirable. A culture that accepts on faith the idea that its citizens are not reliably competent reasoners will treat those citizens differently than a culture that respects their reflective autonomy. Which kind of culture do we want to be?

The present climate of distrust in our reasoning capacity finds much of its rationale in the field of behavioural economics, particularly the work by Daniel Kahneman and Amos Tversky described in Kahneman’s bestselling Thinking, Fast and Slow. There, Kahneman divides the mind into two allegorical systems, the intuitive “System 1”, which often gives wrong answers, and the reflective reasoning of “System 2”. “The attentive System 2 is who we think we are,” he writes; but on his view it is the intuitive, biased, “irrational” System 1 that is in charge most of the time — indeed, it “is also the origin of most of what we do right.”

Other versions of the message are expressed in more strongly negative terms. You Are Not So Smart is the title of a bestselling popular book on cognitive bias. According to a widely reported study by researchers Hugo Mercier and Dan Sperber, reason evolved not to find “truth” but merely to win arguments. And in The Righteous Mind, the psychologist Jonathan Haidt calls the idea that reason is “our most noble attribute” a mere “delusion”. The worship of reason, he adds, “is an example of faith in something that does not exist.”

Your brain, runs the prevailing wisdom, is mainly a tangled, damp and contingently cobbled-together knot of cognitive biases and fear. It’s a scientized version of original sin, whose political implications are troubling. If reasoning isn’t going to work well for you, you might as well abandon the attempt. Have you ever been told to stop “overthinking” something? Perhaps someone else will benefit if you give up.

For most of recorded human thought, it has been taken as evidently true that rationality is what separates us from the rest of our fellow organisms. Aristotle called humans “the rational animal”, and Plato argued that hatred of reason (misology) sprang from the same source as hatred of humankind. It seemed obvious to Spinoza that, just as “a dog is a barking animal”, so “man [is] a rational animal”. Jonathan Swift famously varied the old formula. In a 1725 letter to Pope, he wrote:

I have got materials towards a treatise, proving the falsity of that definition animal rationale, and to show it would only be rationis capax. Upon this great foundation of Misanthropy […] the whole building of my Travels is erected.

His Travels were, of course, Gulliver’s Travels. To take that book together with the letter implies an ambivalent view of rationality indeed. On the one hand, man is only “capable of rationality” (because he is also capable of evil and imbecility). On the other hand, what would perfectly rational beings look like? They would look like the Houyhnhnms of the novel’s last section, intelligent horses who lack all passions and seek to exterminate lesser beings. Gulliver loves them, but it is far less obvious that Swift meant the reader to love them too.

Thinkers had meanwhile long differed about the nature and limits of rationality. Immanuel Kant argued against “rationalists”, such as Leibniz, who held that pure reasoning could attain insight into the nature of reality, while Hegel insisted that individual reasoners cannot escape their particular historical context, and Hume observed that reason alone cannot motivate action. But until recently it was still largely assumed that rationality, whatever its character and limits, was a fundamentally definitional aspect of humankind. (Even if some, like Hume, thought some animals also had a limited capacity for reason.) To decry the operation of reason inasmuch as it led to certain horrific outcomes was, to that extent, to take a pessimistic view of humanity itself. Hence the despairing apotheosis of Romantic anti-rationalist thinking in later 20th-century ideas that the Enlightenment led straight to the Gulag and the Holocaust.

Today, however, we are told we can abandon with a light heart the notion that rationality is central to human identity. But does the evidence show that we must?

Modern scepticism about rationality is motivated in large part by years of experiments on cognitive bias. People are prone to apparently irrational phenomena such as the anchoring effect (being made to think of some arbitrary number will affect your snap response to an unrelated question) or the availability error (when you judge a question according to the examples that come most easily to mind, rather than a wide sample of evidence). There has been some controversy over the correct statistical interpretations of some studies, and some experiments ostensibly demonstrating “priming” effects, in particular, have notoriously proven difficult to replicate. More generally, however, the extent to which such findings can show we are acting irrationally often depends on what we agree should count as “rational” in the first place.

During the development of game theory and decision theory in the mid-20th century, a “rational” person in economic terms became defined as a lone individual all of whose decisions were calculated to maximize self-interest, and one whose preferences were (logically or mathematically) consistent in combination and over time. It turns out that people are not in fact “rational” in this homo economicus way, the elegant demonstration of which fact was the subject of Daniel Kahneman’s own research (with Amos Tversky) over the following decades. Given choices between complex bets, for example, people often prefer what is mathematically inferior; and potential losses loom more heavily in our minds than equal potential gains. The thorny question is whether these widespread departures from the economic definition of “rationality” should be taken to show that we are irrational, or whether they merely show that the economic definition of rationality is defective.
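
To see the arithmetic at stake, consider a coin-toss bet of the kind this literature discusses (the exact figures here are illustrative rather than drawn from any particular study): tails you lose $100, heads you win $150. The expected value is positive,

\[
\mathbb{E}[\text{bet}] = \tfrac{1}{2}(+\$150) + \tfrac{1}{2}(-\$100) = +\$25,
\]

yet most people refuse such a bet, because the prospect of losing $100 looms larger than that of winning the bigger sum. On the economic definition this refusal is “irrational”; to the refuser it may look like simple prudence.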

If we adopt a wider sense of “rational”, some of our apparent cognitive hiccups don’t seem so silly after all. Take Kahneman and Tversky’s famous “Linda problem”. Imagine you are told the following about Linda:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Now, which of these statements is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

A majority of people say that 2 is more probable: that Linda is a bank teller and an active feminist. Well, as Kahneman points out, 2 cannot be more probable in the statistical sense, since there are many more bank tellers (the feminist ones plus all the rest) than there are feminist bank tellers. People who answer 2, he says, think it’s more probable that Linda belongs to a smaller population than to a larger, more inclusive one. Mathematically, this is just wrong. So Kahneman’s story is that we are primed by the irrelevant information about Linda’s personality to commit what he calls the “conjunction fallacy”, and so we make an “irrational” judgment.
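
The mathematics Kahneman appeals to here is the conjunction rule of probability: the chance of two things both being true can never exceed the chance of either one alone. Writing T for “Linda is a bank teller” and F for “Linda is active in the feminist movement”,

\[
\Pr(T \wedge F) = \Pr(T)\,\Pr(F \mid T) \le \Pr(T),
\]

since \(\Pr(F \mid T)\) can be at most 1. On the strict statistical reading, then, answer 2 can never beat answer 1.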

But this does not take into account some important nuances. Consider what the philosopher Paul Grice would have called the “conversational implicature” of the puzzle as posed. According to Grice’s “maxim of relevance”, people will naturally assume that the information about Linda’s personality is being given to them because it is relevant. That leads them to infer a definition of “probability” that is different from the strict mathematical one, because giving the mathematical answer would render the personality sketch pointless. (After all, we could reasonably wonder, why did they tell me this?) Thus, respondents who give the “wrong” answer are interpreting “probability” as something more akin to narrative plausibility. (Tellingly, psychologists Ralph Hertwig and Gert Gigerenzer reported in 1999 that when people are given the same puzzle but asked to guess about relative “frequencies” instead of what is more “probable”, they give the mathematically correct answer much more often.) One might add that, if we are talking plausibility, then the notion that Linda is a bank teller and an active feminist fits the whole story better. All the available information is now consistent. Arguably, therefore, it is a perfectly rational inference.

There are many other good reasons to give “wrong” answers to questions that seem to elicit cognitive bias. The cognitive psychologist Jonathan St B.T. Evans was one of the first to propose a “dual-process” picture of reasoning in the 1980s, but he resists talk of “System 1” and “System 2” as though they are entirely discrete, and argues against the automatic inference from bias to irrationality. In a 2005 survey, he offers examples such as the following. Experimental subjects are asked:

If she meets her friend, she will go to a play.
She meets her friend.
What follows?

As one might expect, the vast majority of people (96% in one study) give the correct logical inference: she goes to the play. But look what happens when an extra conditional statement is added:

If she meets her friend, she will go to a play.
If she has enough money, she will go to a play.
She meets her friend.
What follows?

Now, only 38% of respondents said that she goes to the play. In strictly logical terms, the other 62% were wrong. “In standard logic,” Evans explains, “an argument that follows from some premises must still follow if you add new information.” But the people who didn’t conclude that she goes to the play were not necessarily being irrational. Evans diagnoses their thought process thus: “The extra conditional statement introduces doubt about the truth of the first. People start to think that, even though she wants to go to the play with her friend, she might not be able to afford it.” And so, it is quite reasonable to be unsure whether she goes to the play.
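
The logical principle Evans invokes is the monotonicity of classical logic: enlarging the set of premises can never invalidate a conclusion that already followed from them. Schematically, if premises \(\Gamma\) entail a conclusion \(\varphi\), then so does \(\Gamma\) with anything else added:

\[
\Gamma \vdash \varphi \quad\Longrightarrow\quad \Gamma \cup \{\psi\} \vdash \varphi.
\]

Everyday reasoning, by contrast, is non-monotonic: new information, such as the possibility that she cannot afford a ticket, can legitimately defeat a conclusion that was provisionally drawn.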

In general, Evans concludes that, in many such cases of deductive reasoning, a “strictly logical” answer will be less “adaptive to everyday needs” for most people. “A related finding,” he continues, “is that, even though people may be told to assume the premises of arguments are true, they are reluctant to draw conclusions if they personally do not believe the premises. In real life, of course, it makes perfect sense to base your reasoning only on information that you believe to be true.” In any contest between what “makes perfect sense” in normal life and what is defined as “rational” by economists or logicians, you might think it rational, according to a more generous meaning of that term, to prefer the former. Evans concludes: “It is far from clear that such biases should be regarded as evidence of irrationality.”

A wider definition of “rationality”, moreover, will often have the happy effect of showing that those who disagree with us are not stupid. In an article entitled “Making climate-science communication evidence-based”, for example, Dan M. Kahan, a professor of law and psychology, argues that people who reject the established facts about global warming and instead adopt the opinions of their peer group are also being perfectly rational in a certain light:

Nothing any ordinary member of the public personally believes about […] global warming will affect the risk that climate change poses to her, or to anyone or anything she cares about. […] However, if she forms the wrong position on climate change relative to the one [shared by] people with whom she has a close affinity — and on whose high regard and support she depends in myriad ways in her daily life — she could suffer extremely unpleasant consequences, from shunning to the loss of employment. Because the cost to her of making a mistake on the science is zero and the cost of being out of synch with her peers potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group.

Of course, when such decisions, each of them “individually rational” as Kahan stresses, are combined into a group belief, one may judge that the group as a whole is being irrational in rejecting robust scientific evidence. This is, perhaps, an intellectual version of the tragedy of the commons. There, too, each individual acts “rationally” according to self-interest (getting the most they can out of the shared resource), but the aggregate behaviour (for instance, overgrazing and thus exhausting a piece of land) seems irrational. Perhaps it ought not to be surprising that, if individuals can act rationally or irrationally, so too can groups. Not all crowds are wise.
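
A toy calculation makes the structure of the commons vivid; every number in the sketch below is invented purely for illustration, and the model (a pasture whose per-cow value falls as it gets more crowded) is the textbook one rather than anything from the studies cited here.

# A toy tragedy-of-the-commons calculation in Python;
# all figures are invented for illustration.

def cow_value(total_cows: int) -> float:
    """Per-cow value falls as the shared pasture gets more crowded."""
    return max(0.0, 100.0 - 5.0 * total_cows)

N = 10  # ten herders, one cow each to begin with

# One herder weighs adding a second cow while the others hold back:
before = cow_value(N)         # her single cow is worth 50
after = 2 * cow_value(N + 1)  # her two cows would be worth 2 x 45 = 90
print(f"individual gain: {after - before:+.0f}")  # +40, so she adds it

# But every herder faces the same incentive. If all ten add a cow:
herd_before = N * cow_value(N)          # 10 x 50 = 500
herd_after = 2 * N * cow_value(2 * N)   # 20 x 0 = 0: pasture exhausted
print(f"collective change: {herd_after - herd_before:+.0f}")  # -500

Each herder’s marginal sum is impeccable; it is only in aggregate that the “rationality” evaporates.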

Nonetheless, there is surely empathetic value in the determination to discover what is rational about an individual’s holding an apparently false belief. Kahan’s argument about the woman who does not believe in global warming, indeed, is a surprising and persuasive example of the general principle that, if we want to understand others, we can always ask what it is that is making their behaviour rational from their point of view. If, on the other hand, we just assume they are irrational, no further conversation can take place.

That, in a nutshell, is the problem of the practical application of behavioural economics to modern governance, in the form of nudge politics. Kahan argues against what he calls the “public irrationality thesis”: the idea, influenced by bias research, that ordinary citizens will, most of the time, act irrationally. He thinks this thesis is ungrounded, but the liberal-paternalist architects of nudge policy simply assume it, in what they claim are our own best interests.

The idea went mainstream with Nudge, a 2008 book by the law professor Cass Sunstein and the economist Richard Thaler. Official policy, they suggest, should deliberately bypass the reflective or reasoning process in the citizenry. This is done by designing a “choice architecture” in which the alternatives are precisely targeted to citizens’ cognitive weaknesses so that they will “automatically” make the desired decision. So, for example, a school cafeteria should put healthy meals at eye level (following the strategy of supermarkets), while relegating “junk” foods to harder-to-reach places. This targets laziness of immediate perception and action. And to get more organ donors, the state should automatically enrol everyone as a potential donor, so that you have to “opt out” if you really don’t want to be one, rather than needing to opt in and register in the first place. This targets status quo bias. Nudge was so successful, seemingly offering the magic key to controlling obstreperous populations intent on eating burgers and riding motorcycles, that Sunstein was plucked from Harvard to become Barack Obama’s regulation czar, and the UK government set up a “Behavioural Insights Team” (informally known as the “nudge unit”) in the Cabinet Office, later part-privatized. Similar approaches have been tried in France, Brazil, Australia, and New Zealand.

The mental tendencies targeted by nudges, such as status quo bias, cannot be recuperated as rational-in-a-wider-sense as easily as those involved in reasoning puzzles can. They might well be adaptive as rules of thumb for fast decision-making, but they will not always result in sensible decisions — unless the choice architect, having unilaterally decided what should count as the “rational” decision in a given context, sets up the environment in the right way. The choice architect is thus a kind of benevolent god designing a garden maze that leads sinners to the right exit. Nudge politics, in this way, may be seen as a technocratic concretization of the tendency that the philosopher Alasdair MacIntyre noted already in 1988, in Whose Justice? Which Rationality?:

[I]n a society in which preferences […] are assigned the place which they have in a liberal order, power lies with those who are able to determine what the alternatives are to be between which choices will be available. The consumer, the voter, and the individual in general are accorded the right of expressing their preferences for one or more out of the alternatives which they are offered, but the range of possible alternatives is controlled by an elite, and how they are presented is also so controlled. The ruling elites within liberalism are thus bound to value highly competence in the persuasive presentation of alternatives, that is, in the cosmetic arts.

Nudging is far from being a dystopian tool of state mind control. After all, we remain free to make the “wrong” choices. The more fundamental problem, however, is that nudging bypasses political discussion. There is no public consultation about choice architecture. (Is it always irrational to eat fatty food? Is it irrational to refuse to donate one’s organs?) The attempt to bypass our reasoning selves with “nudge” politics creates a problem of consent, a short-circuiting of democracy. Why bother having a political argument if you can make (most) people do what you want anyway?

In nudging, instead, the pre-existing values of market liberalism are reinforced and reconstructed at the level of the nudged individual, as John McMahon argues in his recent paper “Behavioral economics as neoliberalism”:

Behavioral economics should be understood as a political economic apparatus of neoliberal governmentality that has the objective of using the state to manage and regulate individuals, interests, and populations – by attempting to correct their deviations from rational, self-interested, utility-maximizing cognition and behavior – such that they more effectively and efficiently conform to market logics and processes […] behavioral economics is in many ways an attempt to produce a more rational homo economicus, one more well-suited to be an entrepreneur of self on the market.

Further objections arise as nudging techniques are increasingly allied with the surveillance capabilities of personal technology — for instance, smart cards that offer discounts on local taxes if citizens use them to go to the gym regularly. This might make it easier to blame individuals for their own poor health, or to increase their insurance premiums, because of their demonstrable and recorded bad behaviour. (“For what else could possibly explain their health problems but their personal failings?” the critic Evgeny Morozov asks sardonically. “It’s certainly not the power of food companies or class-based differences or various political and economic injustices.”) If refusing a nudge carries a financial or other penalty, how free does the nudged choice remain?

It is possible, however, that too great a faith in nudging is itself irrational. A 2011 report on the subject by the House of Lords Science and Technology Select Committee went off-message by concluding that “soft” approaches such as nudging were not sufficient on their own to tackle major social problems such as obesity or transport behaviour. Moreover, since nudging depends on citizens ordinarily following their automatic biases, its efficacy will be under threat if we are actually able to overcome our biases on a regular basis.

Our capacity to do this is a matter of debate. How good can we become at “debiasing”? Many researchers agree that, if we remember to remind ourselves that a certain kind of bias might be triggered by the present problem, we can make sure we employ rational processes. Daniel Kahneman considers this difficult to achieve reliably, but some of his colleagues in the field are more optimistically meliorist. One is the psychologist Keith E. Stanovich, who in Rationality and the Reflective Mind (2011) prefers a tripartite picture of mental systems: the “autonomous” (prey to biases), the “algorithmic”, and the “reflective”. In this way he distinguishes between intelligence narrowly conceived (whatever is measured by IQ tests or SAT scores), which is the business of the “algorithmic” mind, and accurate reasoning, which he ascribes to the “reflective” mind. And the good news, on his account, is that it is indeed possible and desirable to teach “rational thinking mindware and procedures”.

To do so would surely be an admirable exercise of public reason. But it would presumably be very disappointing to the nudgers. Nudging depends on our cognitive biases being reliably exploitable — thus, on citizens not receiving a Stanovichian programme of upgrades to their rational mindware. Nudge politics, then, is not only predicated upon a thesis that we will most of the time make irrational choices; to continue to be viable, it must be opposed to any increase in our rationality. In this sense it is at odds with public reason itself.

Yet it is public reason that offers the most effective counterweight to the scepticism about our capacity for rationality that underlies nudge theory in the first place. What ought to give the pessimists in behavioural economics and other fields more hope, after all, is what they collectively prove so well: that the flaws of any one individual can be corrected when reasoning is part of a conversation. (Thus, the literature on cognitive biases and the allegedly limited rationality of individuals is itself conducted according to the highest standards of public rationality.) Reasoning as a deliberative social process has not yet discovered any hard limits to the intellectual capacity of our species — hence, among other things, those robots on Mars.

Indeed, even as he calls the “worship” of reason a “delusion”, Jonathan Haidt celebrates humans’ ability to reason together. “If you put individuals together in the right way,” he writes, “such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system.” Combining reasoning individuals “in the right way” is important, so as to avoid the irrational effects introduced by phenomena such as group polarization or informational cascades. Yet we are all familiar with various examples of the right way to combine individuals into public bodies capable of high-level reasoning: scientific societies, universities — even, sometimes, parliaments.

The public irrationality thesis figures us, homo economicus style, as atomized individuals, yet reasoning itself is a social institution even when practised by solitary thinkers. (To count as a competent reasoner, I must abide by publicly agreed standards of what counts as a valid inference, and so on.) Indeed, reasoning is the social institution whose reliability underwrites all the other kinds of civil and political institutions of civilized life.

And so there is less reason than many think to doubt humans’ ability to be reasonable. The dissenting critiques of the cognitive-bias literature argue that people are not, in fact, as individually irrational as the present cultural climate assumes. And proponents of debiasing argue that we can each become more rational with practice. But even if we each acted as irrationally as often as the most pessimistic picture implies, that would be no cause to abandon the idea that humans are a fundamentally rational species. And it would be insufficient motivation to flatten democratic deliberation into the weighted engineering of consumer choices, as nudge politics seeks to do. Public reason is nothing short of our best hope for survival. Even a reasoned argument to the effect that human rationality is fatally compromised is itself an exercise in rationality. Albeit rather a perverse one, and — we may suppose — ultimately self-defeating.