I’ve been reading more lately, partially for the obvious reasons. Mostly, I’ve been catching up on books everyone else already read.
One such book is Daniel Kahneman’s “Thinking, Fast and Slow”. With all the talk lately about cognitive biases, Kahneman’s account of his research on decision-making was quite familiar ground. The book turned out to be more interesting as a window into the culture of psychology research. While I had a working picture from psychologist friends in grad school, “Thinking, Fast and Slow” covered the other side, the perspective of a successful professor promoting his field.
Most of this wasn’t too surprising, but one passage struck me:
Several economists and psychologists have proposed models of decision making that are based on the emotions of regret and disappointment. It is fair to say that these models have had less influence than prospect theory, and the reason is instructive. The emotions of regret and disappointment are real, and decision makers surely anticipate these emotions when making their choices. The problem is that regret theories make few striking predictions that would distinguish them from prospect theory, which has the advantage of being simpler. The complexity of prospect theory was more acceptable in the competition with expected utility theory because it did predict observations that expected utility theory could not explain.
Richer and more realistic assumptions do not suffice to make a theory successful. Scientists use theories as a bag of working tools, and they will not take on the burden of a heavier bag unless the new tools are very useful. Prospect theory was accepted by many scholars not because it is “true” but because the concepts that it added to utility theory, notably the reference point and loss aversion, were worth the trouble; they yielded new predictions that turned out to be true. We were lucky.
Thinking, Fast and Slow, page 288
Kahneman is contrasting three theories of decision making here: the old proposal that people try to maximize their expected utility (roughly, the benefit they expect to get in the future), his more complicated “prospect theory”, which takes into account not only what benefits people get but also their attachment to what they already have, and other, still more complicated models based on regret. His theory ended up more popular than both the older theory and the newer regret-based models.
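For concreteness, here is a rough sketch of the contrast, using the standard textbook forms rather than anything quoted from the book (the symbols below are the usual notation, not Kahneman’s). Expected utility theory scores a gamble with outcomes $x_i$ and probabilities $p_i$ using a utility function $u$:

$$\mathrm{EU} = \sum_i p_i \, u(x_i).$$

Prospect theory instead measures each outcome as a gain or loss relative to a reference point $r$, reweights the probabilities with a function $\pi$, and makes losses loom larger than equal-sized gains (loss aversion, $\lambda > 1$, with diminishing sensitivity $0 < \alpha < 1$):

$$V = \sum_i \pi(p_i)\, v(x_i - r), \qquad v(z) = \begin{cases} z^{\alpha} & z \ge 0, \\ -\lambda\,(-z)^{\alpha} & z < 0. \end{cases}$$

Regret-based models, roughly speaking, go further still, comparing each outcome with what the alternative choice would have delivered, which is part of what makes them heavier tools to carry around.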
Why did his theory win out? Apparently, not because it was the true one: as he says, people almost certainly do feel regret, and make decisions based on it. No, his theory won because it was more useful. It made new, surprising predictions, while being simpler and easier to use than the regret-based models.
This, a theory defeating another without being “more true”, might bug you. By itself, it doesn’t bug me. That’s because, as a physicist, I’m used to the idea that models should not just be true, but useful. If we want to test our theories against reality, we have a large number of “levels” of description to choose from. We can “zoom in” to quarks and gluons, or “zoom out” to look at atoms, or molecules, or polymers. We have to decide how much detail to include, and we have real pragmatic reasons for doing so: some details are just too small to measure!
It’s not clear Kahneman’s community was doing this, though. That is, it doesn’t seem like he’s saying that regret and disappointment are just “too small to be measured”. Instead, he’s saying that they don’t seem to predict much differently from prospect theory, and prospect theory is simpler to use.
Ok, we do that in physics too. We like working with simpler theories, when we have a good excuse. We’re just careful about it. When we can, we derive our simpler theories from more complicated ones, carving out complexity and estimating how much of a difference it would have made. Do this carefully, and we can treat black holes as if they were subatomic particles. When we can’t, we have what we call “phenomenological” models, models built up from observation and not from an underlying theory. We never take such models as the last word, though: a phenomenological model is always viewed as temporary, something to bridge a gap while we try to derive it from more basic physics.
Kahneman doesn’t seem to view prospect theory as temporary. It doesn’t sound like anyone is trying to derive it from regret theory, or to make regret theory easier to use, or to prove it always agrees with regret theory. Maybe they are, and Kahneman simply doesn’t think much of their efforts. Either way, it doesn’t sound like a major goal of the field.
That’s the part that bothered me. In physics, we can’t always hope to derive things from a more fundamental theory: some theories are as fundamental as we know. Psychology isn’t like that: any behavior people display has to be caused by what’s going on in their heads. What Kahneman seems to be saying here is that regret theory may well be closer to what’s going on in people’s heads, but he doesn’t care: it isn’t as useful.
And at that point, I have to ask: useful for what?
As a psychologist, isn’t your goal ultimately to answer that question? To find out “what’s going on in people’s heads”? Isn’t every model you build, every theory you propose, dedicated to that question?
And if not, what exactly is it “useful” for?
For technology? It’s true, “Thinking, Fast and Slow” describes several groups Kahneman advised, most memorably the IDF. Is the advantage of prospect theory, then, its “usefulness”, that it leads to better advice for the IDF?
I don’t think that’s what Kahneman means, though. When he says “useful”, he doesn’t mean “useful for advice”. He means it’s good for giving researchers ideas, good for getting people talking. He means “useful for designing experiments”. He means “useful for writing papers”.
And this is when things start to sound worryingly familiar. Because if I’m accusing Kahneman’s community of giving up on finding the fundamental truth, just doing whatever they can to write more papers…well, that’s not an uncommon accusation in physics either. If the people who spend their lives describing cognitive biases are really getting distracted like that, what chance does, say, string theory have?
I don’t know how seriously to take any of this. But it’s lurking there, in the back of my mind, that nasty, vicious, essential question: what are all of our models for?
Bonus quote, for the commenters to have fun with:
I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.
Thinking, Fast and Slow, page 264