Here’s a math problem:
Medical researchers have developed a new cream for treating skin rashes. New treatments often work but sometimes make rashes worse. Even when treatments don’t work, skin rashes sometimes get better and sometimes get worse on their own. As a result, it is necessary to test any new treatment in an experiment to see whether it makes the skin condition of those who use it better or worse than if they had not used it.
Researchers have conducted an experiment on patients with skin rashes. In the experiment, one group of patients used the new cream for two weeks, and a second group did not use the new cream.
In each group, the number of people whose skin condition got better and the number whose condition got worse are recorded in the table below. Because patients do not always complete studies, the total number of patients in the two groups is not the same, but this does not prevent assessment of the results.
Please indicate whether the experiment shows that using the new cream is likely to make the skin condition better or worse.
| Result | Rash Got Better | Rash Got Worse |
|---|---|---|
| Patients who DID use the new skin cream | 223 | 75 |
| Patients who did NOT use the new skin cream | 107 | 21 |
What result does the study support?
[] People who used the skin cream were more likely to get better than those who didn’t.
[] People who used the skin cream were more likely to get worse than those who didn’t.
The solution to the problem
Wait a mo’. Stop reading now if you’re keen to solve it yourself.
Okay? We’re all set?
Okay then.
The solution to the problem is to convert the numbers into percentages before comparing them. 75% of the people who used the new skin cream got better (223 of 298), while 84% of those who didn’t use the skin cream got better (107 of 128). So the answer is that cream users were more likely to get worse.
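If you want to double-check the arithmetic rather than trust my eyeballing, a few lines of Python will do it (the numbers are the ones from the table above):

```python
# Recovery rates from the (fake) experiment's table.
better_with_cream, worse_with_cream = 223, 75
better_without, worse_without = 107, 21

rate_with = better_with_cream / (better_with_cream + worse_with_cream)
rate_without = better_without / (better_without + worse_without)

print(f"Got better WITH the cream:    {rate_with:.0%}")    # 75%
print(f"Got better WITHOUT the cream: {rate_without:.0%}")  # 84%
```

The raw counts point one way (223 is bigger than 107), which is exactly the trap; the rates point the other.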
I’m pretty sure the average “Alas” reader would be able to solve that math problem correctly. But what if the nouns were changed? Apparently, we’d do terribly. Chris Mooney reports:
The study, by Yale law professor Dan Kahan and his colleagues, has an ingenious design. At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.” [...]
So how did people fare on the handgun version of the problem? They performed quite differently than on the skin cream version, and strong political patterns emerged in the results—especially among people who are good at mathematical reasoning. Most strikingly, highly numerate liberal Democrats did almost perfectly when the right answer was that the concealed weapons ban does indeed work to decrease crime (version C of the experiment)—an outcome that favors their pro-gun-control predilections. But they did much worse when the correct answer was that crime increases in cities that enact the ban (version D of the experiment).
The opposite was true for highly numerate conservative Republicans: They did just great when the right answer was that the ban didn’t work (version D), but poorly when the right answer was that it did (version C). [...]
For study author Kahan, these results are a fairly strong refutation of what is called the “deficit model” in the field of science and technology studies—the idea that if people just had more knowledge, or more reasoning ability, then they would be better able to come to consensus with scientists and experts on issues like climate change, evolution, the safety of vaccines, and pretty much anything else involving science or data (for instance, whether concealed weapons bans work). Kahan’s data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy.
A graph from Kevin Drum:
Looking at that graph, it does seem that a substantial minority – I’d eyeball it as 30 to 40 percent? – of highly numerate partisans were able to do the math correctly when the correct answer cut against their own biases. If I’m correct about that, then that’s a thirty to forty percent reason for hope.
This sort of thing makes me feel terribly bleak about the point of even arguing about politics, especially when I think about issues like climate change. It’s not that people never change their minds; it’s that most of us don’t change our minds in response to facts or logic.
This is another reason I find same-sex marriage fascinating: It’s an issue on which large numbers of Americans have changed their minds over a pretty short period of time. What makes SSM so different? Is there any way that the success of SSM can be applied to issues like climate change?
Related posts:
- Avant-Garde Theater in Iran – Art as Politics, The Politics of Art
- Men's Rights Activists Can't Do Math
- Obama's Nowruz Message to Iran: The Poetry of the Politics and the Politics in the Poetry