Fake news and neurobabble: how do we critically assess what we read?

With unprecedented access to news and knowledge, how do we make judgements about what we read? Neuroscience news is a case in point

In an era of fake news and alternative facts, it seems as if our collective ability to critically assess information is starting to falter. We have unprecedented access to news and knowledge on a daily basis, but how do we make judgements about whether to accept what we read? There’s still a lot of work to do in this area, but an influential psychology experiment from 2008 might provide a good starting point – at least when it comes to thinking about how neuroscience is presented in the news.

Try it yourself

The study in question was conducted by Weisberg and colleagues at Yale University. First, groups of people with varying levels of neuroscience expertise were given a short description of a well-known psychological phenomenon – here’s an example taken from the paper itself:

Researchers created a list of facts that about 50% of people knew. Subjects in this experiment read the list of facts and had to say which ones they knew. They then had to judge what percentage of other people would know those facts. Researchers found that the subjects responded differently about other people’s knowledge of a fact when the subjects themselves knew that fact. If the subjects did know a fact, they said that an inaccurately large percentage of others would know it, too. For example, if a subject already knew that Hartford was the capital of Connecticut, that subject might say that 80% of people would know this, even though the correct answer is 50%. The researchers call this finding “the curse of knowledge.”

Next, the participants were given one of four different explanations for the above phenomenon, and asked to rate how satisfying they found that explanation. Here are two of them – which are you most satisfied with?

  1. The researchers claim that this “curse” happens because subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge onto others.
  2. Brain scans indicate that this “curse” happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.

The ‘seductive allure’ of neuroscience

The four explanations presented in the study were good and bad versions of the same explanation, each given either with or without irrelevant neuroscience information. In the above two examples, the first is a good explanation without neuroscience – it provides additional useful information about the phenomenon. The second is a bad explanation with added (but irrelevant) neurononsense. Rather than adding any useful information, it simply provides a circular redescription of the phenomenon, and the vague reference to ‘frontal lobe circuitry’ doesn’t add any genuine detail.

Nevertheless, Weisberg and colleagues showed that the non-expert groups were more likely to rate the bad explanations as satisfying when they were padded with redundant neuroscience content. In other words, people seem to suspend their critical faculties when presented with explanations that sound neuroscientific, and are therefore more likely to accept them as correct.

Since the original experiments, a number of studies have replicated and extended these findings, suggesting that people aren’t particularly good at detecting poor, circular redescriptions in science generally, but that the effect is strongest for neuroscience. Moreover, one study suggested that undergraduate students, at least, explicitly use the presence of neuroscience information as a marker of what counts as a good explanation. The information doesn’t even have to be detailed – simply referring to brains has an effect.

These are important findings, because they suggest that people systematically vary in their ability to apply basic critical thinking, simply as a result of the presence of irrelevant science-speak. We don’t yet have a complete handle on the consequences of this, especially beyond neuroscience. But the findings from these studies may also suggest an avenue for improving matters. Rather than advocating for more education in basic neuroscience (or science more generally), perhaps the solution is to provide training in critical thinking and debate skills, to better equip people to identify and confront neurobollocks.