Blog

An Astronaut’s Guide to Mental Models

There isn’t a harsher environment for a human being to live in than outer space. Chris Hadfield shares some of the thinking tools he acquired as an astronaut to make high-stakes decisions, be innovative in the face of failure, and stay cool under pressure.

***

How do you survive in space? Turns out that mental models are really useful. In his book An Astronaut’s Guide to Life on Earth, Chris Hadfield gives an in-depth look into the learning and knowledge required for a successful space mission. Hadfield was, among other roles with NASA, the first Canadian commander of the International Space Station. He doesn’t call out mental models specifically, but the thinking he describes demonstrates a ton of them, from circle of competence to margin of safety. His lessons are both counter-intuitive and useful far beyond space missions. Here are some of them:

  • “An astronaut is someone who’s able to make good decisions quickly, with incomplete information, when the consequences really matter. I didn’t miraculously become one either, after just eight days in space. But I did get in touch with the fact that I didn’t even know what I didn’t know.” (circle of competence)
  • “Over time, I learned how to anticipate problems in order to prevent them, and how to respond effectively in critical situations.” (second order thinking)
  • “Success is feeling good about the work you do throughout the long, unheralded journey that may or may not wind up at the launch pad. You can’t view training solely as a stepping stone to something loftier. It’s got to be an end in itself.” (velocity)
  • “A lot of our training is like this: we learn how to do things that contribute in a very small way to a much larger mission but do absolutely nothing for our own career prospects.” (cooperation)
  • “If you’re not sure what to be alarmed about, everything is alarming.” (probabilistic thinking)
  • “Truly being ready means understanding what could go wrong – and having a plan to deal with it.” (margin of safety)
  • “A sim [simulation] is an opportunity to practice but frequently it’s also a wake-up call: we really don’t know exactly what we’re doing and we’d better figure it out before we’re facing this situation in space.” (back-up systems)
  • “In any field, it’s a plus if you view criticism as potentially helpful advice rather than as a personal attack.” (inversion)
  • “At NASA, we’re not just expected to respond positively to criticism, but to go one step further and draw attention to our own missteps and miscalculations. It’s not easy for hyper-competitive people to talk openly about screw-ups that make them look foolish or incompetent. Management has to create a climate where owning up to mistakes is permissible and colleagues have to agree, collectively, to cut each other some slack.” (friction and viscosity)
  • “If you’re only thinking about yourself, you can’t see the whole picture.” (relativity)
  • “Over the years I’ve learned that investing in other people’s success doesn’t just make them more likely to enjoy working with me. It also improves my own chances of survival and success.” (reciprocity)
  • “It’s obvious that you have to plan for a major life event like a launch. You can’t just wing it. What’s less obvious, perhaps, is that it makes sense to come up with an equally detailed plan for how to adapt afterward.” (adaptation and the red queen effect)
  • “Our expertise is the result of the training provided by thousands of experts around the world, and the support provided by thousands of technicians in five different space agencies.” (scale)
  • “The best way to contribute to a brand-new environment is not by trying to prove what a wonderful addition you are. It’s by trying to have a neutral impact, to observe and learn from those who are already there, and to pitch in with grunt work wherever possible.” (ecosystem)
  • “When you’re the least experienced person in the room, it’s not the time to show off. You don’t yet know what you don’t know – and regardless of your abilities, your experience and your level of authority, there will definitely be something you don’t know.” (circle of competence)
  • “Ultimately, leadership is not about glorious crowning acts. It’s about keeping your team focused on a goal and motivated to do their best to achieve it.” (hierarchical instincts)
  • “If you start thinking that only your biggest and shiniest moments count, you’re setting yourself up to feel like a failure most of the time.” (map is not the territory)

There is so much to learn from this book. Thinking in terms of mental models can help you see the underlying logic and structure in books on a wide range of topics. It can also help you pick up lessons to apply to your life from unexpected places. The analogy between a space walk and a business negotiation you’re going into tomorrow might not be obvious. But by using mental models you can see the fundamental wisdom underlying both.

The Illusory Truth Effect: Why We Believe Fake News, Conspiracy Theories and Propaganda

When a “fact” tastes good and is repeated enough, we tend to believe it, no matter how false it may be. Understanding the illusory truth effect can keep us from being bamboozled.

***

A recent Verge article looked at some of the unsavory aspects of working as one of Facebook’s content moderators—the people who spend their days cleaning up the social network’s most toxic content. One strange detail stands out. The moderators The Verge spoke to reported that they and their coworkers often found themselves believing fringe, often hatemongering conspiracy theories they would have dismissed under normal circumstances. Others described experiencing paranoid thoughts and intense fears for their safety.

An overnight switch from skepticism to fervent belief in conspiracy theories is not unique to content moderators. In a Nieman Lab article, Laura Hazard Owen explains that researchers who study the spread of disinformation online can find themselves struggling to be sure of their own beliefs and needing to make an active effort to counteract what they see. Some of the most fervent, passionate conspiracy theorists admit that they first fell into the rabbit hole when they tried to debunk the beliefs they now hold. There’s an explanation for why this happens: the illusory truth effect.

The illusory truth effect

Facts do not cease to exist because they are ignored.

Aldous Huxley

Not everything we believe is true. We may act like it is and it may be uncomfortable to think otherwise, but it’s inevitable that we all hold a substantial number of beliefs that aren’t objectively true. It’s not about opinions or different perspectives. We can pick up false beliefs for the simple reason that we’ve heard them a lot.

If I say that the moon is made of cheese, no one reading this is going to believe that, no matter how many times I repeat it. That statement is too ludicrous. But what about something a little more plausible? What if I said that moon rock has the same density as cheddar cheese? And what if I wasn’t the only one saying it? What if you’d also seen a tweet touting this amazing factoid, perhaps also heard it from a friend at some point, and read it in a blog post?

Unless you’re a geologist, a lunar fanatic, or otherwise in possession of an unusually good radar for moon rock-related misinformation, there is a not insignificant chance you would end up believing a made-up fact like that, without thinking to verify it. You might repeat it to others or share it online. This is how the illusory truth effect works: we all have a tendency to believe something is true after being exposed to it multiple times. The more times we’ve heard something, the truer it seems. The effect is so powerful that repetition can persuade us to believe information we once knew to be false. Ever thought a product was stupid but somehow ended up buying it on a regular basis? Or thought a new manager was okay, but now find yourself participating in gossip about her?

The illusory truth effect is the reason why advertising works and why propaganda is one of the most powerful tools for controlling how people think. It’s why politicians’ speeches can be bizarre and why multiple-choice tests can cause students problems later on. It’s why fake news spreads and retractions of misinformation don’t work. In this post, we’re going to look at how the illusory truth effect works, how it shapes our perception of the world, and how we can avoid it.

The discovery of the illusory truth effect

Rather than love, than money, than fame, give me truth.

Henry David Thoreau

The illusory truth effect was first described in a 1977 paper entitled “Frequency and the Conference of Referential Validity,” by Lynn Hasher and David Goldstein of Temple University and Thomas Toppino of Villanova University. In the study, the researchers presented a group of students with 60 statements and asked them to rate how certain they were that each was either true or false. The statements came from a range of subjects and were all intended to be not too obscure, but unlikely to be familiar to study participants. Each statement was objective—it could be verified as either correct or incorrect and was not a matter of opinion. For example, “the largest museum in the world is the Louvre in Paris” was true.

Students rated their certainty three times, with two weeks between evaluations. Some of the statements were repeated each time, while others were not. With each repetition, students grew more certain about the statements they labeled as true. It seemed that they were using familiarity as a gauge for how confident they were of their beliefs.

An important detail is that the researchers did not repeat the first and last 10 items on each list. They felt students would be most likely to remember these and be able to research them before the next round of the study. While the study was not conclusive evidence of the existence of the illusory truth effect, subsequent research has confirmed its findings.
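
To make the design concrete, here is a toy simulation in Python. Every number in it (the base rating, the size of the familiarity boost, the noise) is invented for illustration and none of it comes from the original study; it merely sketches the pattern the researchers observed: repeated statements drift upward in rated certainty across sessions, while unrepeated ones stay put.

```python
# Toy simulation of the 1977 design (all numbers invented, not original data).
# Statements are rated on a certainty scale in three sessions, two weeks apart.
# Repeated statements gain a small "processing fluency" boost per exposure.
import random

random.seed(1)
BASE = 4.2    # hypothetical mean rating for an unfamiliar statement (1-7 scale)
BOOST = 0.35  # hypothetical per-exposure bump from familiarity

for session in range(1, 4):
    repeated = BASE + BOOST * (session - 1) + random.gauss(0, 0.05)
    unrepeated = BASE + random.gauss(0, 0.05)  # seen for the first time
    print(f"Session {session}: repeated {repeated:.2f} vs. new {unrepeated:.2f}")
```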

Why the illusory truth effect happens

The sad truth is the truth is sad.

Lemony Snicket

Why does repetition of a fact make us more likely to believe it, and to be more certain of that belief? As with other cognitive shortcuts, the typical explanation is that it’s a way our brains save energy. Thinking is hard work—remember that the human brain uses up about 20% of an individual’s energy, despite accounting for just 2% of their body weight.

The illusory truth effect comes down to processing fluency. When a thought is easier to process, it requires our brains to use less energy, which leads us to prefer it. The students in Hasher’s original study recognized the repeated statements, even if not consciously. That means that processing them was easier for their brains.

Processing fluency seems to have a wide impact on our perception of truthfulness. Rolf Reber and Norbert Schwarz, in their article “Effects of Perceptual Fluency on Judgments of Truth,” found that statements presented in an easy-to-read color are judged as more likely to be true than ones presented in a less legible way. In their article “Birds of a Feather Flock Conjointly (?): Rhyme as Reason in Aphorisms,” Matthew S. McGlone and Jessica Tofighbakhsh found that aphorisms that rhyme (like “what sobriety conceals, alcohol reveals”), even if someone hasn’t heard them before, seem more accurate than non-rhyming versions. Once again, they’re easier to process.

Fake news

“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken.”

— Carl Sagan

The illusory truth effect is one factor in why fabricated news stories sometimes gain traction and have a wide impact. When this happens, our knee-jerk reaction can be to assume that anyone who believes fake news must be unusually gullible or outright stupid. Evan Davis writes in Post-Truth, “Never before has there been a stronger sense that fellow citizens have been duped and that we are all suffering the consequences of their intellectual vulnerability.” As Davis goes on to write, this assumption isn’t helpful for anyone. We can’t begin to understand why people believe seemingly ludicrous news stories until we consider some of the psychological reasons why this might happen.

Fake news falls under the umbrella of “information pollution,” which also includes news items that misrepresent information, take it out of context, parody it, fail to check facts or do background research, or take claims from unreliable sources at face value. Some of this news gets published on otherwise credible, well-respected news sites due to simple oversight. Some goes on parody sites that never purport to tell the truth, yet are occasionally mistaken for serious reporting. Some shows up on sites that replicate the look and feel of credible sources, using similar web design and web addresses. And some fake news comes from sites dedicated entirely to spreading misinformation, without any pretense of being anything else.

A lot of information pollution falls somewhere in between the extremes that tend to get the most attention. It’s the result of people being overworked or in a hurry and unable to do the due diligence that reliable journalism requires. It’s what happens when we hastily tweet something or mention it in a blog post and don’t realize it’s not quite true. It extends to miscited quotes, doctored photographs, fiction books masquerading as memoirs, or misleading statistics.

The signal-to-noise ratio is so skewed that we have a hard time figuring out what to pay attention to and what to ignore. No one has time to verify everything they read online. No one. (And no, offline media certainly isn’t perfect either.) Our information processing capabilities are not infinite, and the more we consume, the harder it becomes to assess its value.

Moreover, we’re often far outside our circle of competence, reading about topics in which we lack the expertise to assess accuracy in any meaningful way. This drip-drip of information pollution is not harmless. Like air pollution, it builds up over time, and the more we’re exposed to it, the more likely we are to end up picking up false beliefs, which are then hard to shift. For instance, a lot of people believe that crime, especially the violent kind, is trending upward year after year—in a 2016 study by Pew Research, 57% of Americans believed crime had worsened since 2008, despite violent crime having actually fallen by nearly a fifth during that time. This false belief may stem from the fact that violent crime receives a disproportionate amount of media coverage, giving it wide and repeated exposure.

When people are asked to rate the apparent truthfulness of news stories, they score ones they have read multiple times as more truthful than those they haven’t. Danielle C. Polage, in her article “Making Up History: False Memories of Fake News Stories,” explains that a false story someone has been exposed to more than once can seem more credible than a true one they’re seeing for the first time. In experimental settings, people also misattribute their previous exposure to stories, believing they read a news item from another source when they actually saw it in an earlier part of the study. Even when people know the story is part of the experiment, they sometimes think they’ve also read it elsewhere. The repetition is all that matters.

Given enough exposure to contradictory information, there is almost no knowledge that we won’t question.

Propaganda

If a lie is only printed often enough, it becomes a quasi-truth, and if such a truth is repeated often enough, it becomes an article of belief, a dogma, and men will die for it.

Isa Blagden

Propaganda and fake news are similar. By relying on repetition, disseminators of propaganda can change people’s beliefs and values.

Propaganda has a lot in common with advertising, except instead of selling a product or service, it’s about convincing people of the validity of a particular cause. Propaganda isn’t necessarily malicious; sometimes the cause is improved public health or boosting patriotism to encourage military enrollment. But often propaganda is used to undermine political processes to further narrow, radical, and aggressive agendas.

During World War II, the graphic designer Abram Games served as the official war artist for the British government. Games’s work is iconic and era-defining for its punchy, brightly colored visual style. His army recruitment posters would often feature a single figure rendered in a proud, strong, admirable pose with a few words of text. They conveyed to anyone who saw them the sorts of positive qualities they would supposedly gain through military service. Whether this was true or not was another matter. Through repeated exposure to the posters, Games instilled the image the army wanted to create in the minds of viewers, affecting their beliefs and behaviors.

Today, propaganda is more likely to be a matter of quantity over quality. It’s not about a few artistic posters. It’s about saturating the intellectual landscape with content that supports a group’s agenda. With so many demands on our attention, old techniques are too weak.

Researchers Christopher Paul and Miriam Matthews at the RAND Corporation refer to the method of bombarding people with fabricated information as the “firehose of falsehood” model. While their report focuses on modern Russian propaganda, the techniques it describes are not confined to Russia. These techniques make use of the illusory truth effect, alongside other cognitive shortcuts. Firehose propaganda has four distinct features:

  • High-volume and multi-channel
  • Rapid, continuous and repetitive
  • Makes no commitment to objective reality
  • Makes no commitment to consistency

Firehose propaganda is predicated on exposing people to the same messages as frequently as possible. It involves a large volume of content, repeated again and again across numerous channels: news sites, videos, radio, social media, television and so on. These days, as the report describes, this can also include internet users who are paid to repeatedly post in forums, chat rooms, comment sections and on social media disputing legitimate information and spreading misinformation. It is the sheer volume that succeeds in obliterating the truth. Research into the illusory truth effect suggests that we are further persuaded by information heard from multiple sources, hence the efficacy of funneling propaganda through a range of channels.

Seeing as repetition leads to belief in many cases, firehose propaganda doesn’t need to pay attention to the truth or even to be consistent. A source doesn’t need to be credible for us to end up believing its messages. Fact-checking is of little help because it adds further repetition, yet we feel compelled to respond to obviously untrue propagandistic material rather than ignore it.

Firehose propaganda does more than spread fake news. It nudges us towards feelings like paranoia, mistrust, suspicion, and contempt for expertise. All of this makes future propaganda more effective. Unlike those espousing the truth, propagandists can move fast because they’re making up some or all of what they claim, meaning they gain a foothold in our minds first. First impressions are powerful. Familiarity breeds trust.

How to combat the illusory truth effect

So how can we protect ourselves from believing false news and being manipulated by propaganda due to the illusory truth effect? The best route is to be far more selective. The information we consume is like the food we eat. If it’s junk, our thinking will reflect that.

We don’t need to spend as much time reading the news as most of us do. As with many other things in life, more can be less. The vast majority of the news we read is just information pollution. It doesn’t do us any good.

One of the best solutions is to quit the news. This frees up time and energy to engage with timeless wisdom that will improve your life. Try it for a couple of weeks. And if you aren’t convinced, read a few days’ worth of newspapers from 1978. You’ll see how much the news doesn’t really matter at all.

If you can’t quit the news habit, stick to reliable, well-known news sources that have a reputation to uphold. Steer clear of dubious sources whenever you can—even if you treat them as entertainment, you might still end up absorbing what they say. Research unfamiliar sources before trusting them. Be cautious of sites that are funded entirely by advertising (or that pay their journalists based on views), and seek to support reader-funded news sources you get value from if possible. Prioritize sites that treat their journalists well and don’t expect them to churn out dozens of thoughtless articles per day. Don’t rely on news from social media posts that lack sources or that come from people speaking outside their circle of competence.

Avoid treating the news as entertainment to passively consume on the bus or while waiting in line. Be mindful about it—if you want to inform yourself on a topic, set aside designated time to learn about it from multiple trustworthy sources. Don’t assume breaking news is better, as it can take some time for the full details of a story to come out and people may be quick to fill in the gaps with misinformation. Accept that you can’t be informed about everything and most of it isn’t important. Pay attention to when news items make you feel outrage or other strong emotions, because this may be a sign of manipulation. Be aware that correcting false information can further fuel the illusory truth effect by adding to the repetition.

We can’t stop the illusory truth effect from existing. But we can recognize that it is a reality and seek to prevent ourselves from succumbing to it in the first place.

Conclusion

Our memories are imperfect. We are easily led astray by the illusory truth effect, which can direct what we believe and even change our understanding of the past. It’s not about intelligence—this happens to all of us. This effect is too powerful for us to override it simply by learning the truth. Cognitively, there is no distinction between a genuine memory and a false one. Our brains are designed to save energy and it’s crucial we accept that.

We can’t just pull back and think the illusory truth effect only applies to other people. It applies to everyone. We’re all responsible for our own beliefs. We can’t pin the blame on the media or social media algorithms or whatever else. When we put effort into thinking about and questioning the information we’re exposed to, we’re less vulnerable to the illusory truth effect. Knowing about the effect is the best way to identify when it’s distorting our worldview. Before we use information as the basis for important decisions, it’s a good idea to verify whether it’s true, or whether it’s just something we’ve heard a lot.

Truth is a precarious thing, not because it doesn’t objectively exist, but because the incentives to warp it can be so strong. It’s up to each of us to seek it out.

The Positive Side of Shame

Recently, shame has gotten a bad rap. It’s been branded as toxic and destructive. But shame can be used as a tool to effect positive change.

***

A computer science PhD candidate uncovers significant privacy-violating security flaws in large companies, then shares them with the media to attract negative coverage. Google begins marking unencrypted websites as unsafe, showing a red cross in the URL bar. A nine-year-old girl posts pictures of her school’s abysmal lunches on a blog, leading the local council to step in.

What do these stories have in common? They’re all examples of shame serving as a tool to encourage structural change.

Shame, like all emotions, exists because it conferred a meaningful survival advantage for our ancestors. It is a universal experience. The body language associated with shame — inverted shoulders, averted eyes, pursed lips, bowed head, and so on — occurs across cultures. Even blind people exhibit the same body language, indicating it is innate, not learned. We would not waste our time and energy on shame if it wasn’t necessary for survival.

Shame enforces social norms. For our ancestors, the ability to maintain social cohesion was a matter of life or death. Take the almost ubiquitous social rule that stealing is wrong. If a person is caught stealing, they are likely to feel some degree of shame. While this behavior may not threaten anyone’s survival today, in the past it could have been a sign that a group’s ability to cooperate was in jeopardy. Living in small groups in a harsh environment meant full cooperation was essential.

Through the lens of evolutionary biology, shame evolved to encourage adherence to beneficial social norms. This is backed up by the fact that shame is more prevalent in collectivist societies where people spend little to no time alone than it is in individualistic societies where people live more isolated lives.

Jennifer Jacquet argues in Is Shame Necessary?: New Uses For An Old Tool that we’re not quite through with shame yet. In fact, if we adapt it for the current era, it can help us to solve some of the most pressing problems we face. Shame gives the weak greater power. The difference is that we must shift shame from individuals to institutions, organizations, and powerful individuals. Jacquet states that her book “explores the origins and future of shame. It aims to examine how shaming—exposing a transgressor to public disapproval—a tool many of us find discomforting, might be retrofitted to serve us in new ways.”

Guilt vs. shame

Jacquet begins the book with the story of Sam LaBudde, a young man who in the 1980s became determined to target practices in the tuna-fishing industry that were leading to the deaths of dolphins. Tuna is often caught with purse seines, large nets that close around a shoal of fish. Seeing as dolphins tend to swim alongside tuna, they are easily caught in the nets. There, they either die or suffer serious injuries.

LaBudde got a job on a tuna-fishing boat and covertly filmed dolphins dying from their injuries. For months, he hid his true intentions from the crew, spending each day both dreading and hoping for the death of a dolphin. The footage went the 1980s equivalent of viral, showing up in the media all over the world and attracting the attention of major tuna companies.

Still a child at the time, Jacquet was horrified to learn of the consequences of the tuna her family ate. She recalls it as one of her first experiences of shame related to consumption habits. Jacquet persuaded her family to boycott canned tuna altogether. So many others did the same that companies launched the “dolphin-safe” label, which ostensibly indicated compliance with guidelines intended to reduce dolphin deaths. Jacquet returned to eating tuna and thought no more of it.

The campaign to end dolphin deaths in the tuna-fishing industry was futile, however, because it was built upon guilt rather than shame. Jacquet writes, “Guilt is a feeling whose audience and instigator is oneself, and its discomfort leads to self-regulation.” Hearing about dolphin deaths made consumers feel guilty about their fish-buying habits, which conflicted with their ethical values. Those who felt guilty could deal with it by purchasing supposedly dolphin-safe tuna—provided they had the means to potentially pay more and the time to research their choices. A better approach might have been for the videos to focus on tuna companies, giving the names of the largest offenders and calling for specific change in their policies.

But individuals changing their consumption habits did not stop dolphins from dying. Consumer guilt failed to bring about a structural change in the industry. This, Jacquet later realized, was part of a wider shift in environmental action. She explains that it became more about consumers’ choices:

As the focus shifted from supply to demand, shame on the part of corporations began to be overshadowed by guilt on the part of consumers—as the vehicle for solving social and environmental problems. Certification became more and more popular and its rise quietly suggested that responsibility should fall more to the individual consumer rather than to political society. . . . The goal became not to reform entire industries but to alleviate the consciences of a certain sector of consumers.

Shaming, as Jacquet defines it, is about the threat of exposure, whereas guilt is personal. Shame is about the possibility of an audience. Imagine someone were to send a print-out of your internet search history from the last month to your best friend, mother-in-law, partner, or boss. You might not have experienced any guilt making the searches, but even the idea of them being exposed is likely shame-inducing.

Switching the focus of the environmental movement from shame to guilt was, at best, a distraction. It put the responsibility on individuals, even though small actions like turning off the lights count for little. Guilt is a more private emotion, one that arises regardless of exposure. It’s what you feel when you’re not happy about something you did, whereas shame is what you feel when someone finds out. Jacquet writes, “A 2013 research paper showed that just ninety corporations (some of them state-owned) are responsible for nearly two-thirds of historic carbon dioxide and methane emissions; this reminds us that we don’t all share the blame for greenhouse gas emissions.” Guilt doesn’t work because it doesn’t change the system. Taking this into account, Jacquet believes it is time for us to bring back shame, “a tool that can work more quickly and at larger scales.”

The seven habits of effective shaming

So, if you want to use shame as a force for good, as an individual or as part of a group, how can you do so in an effective manner? Jacquet offers seven pointers.

First, “The audience responsible for the shaming should be concerned with the transgression.” It should be something that impacts them so they are incentivized to use shaming to change it. If it has no effect on their lives, they will have little reason to shame. The audience must be the victim. For instance, smoking rates are shrinking in many countries. Part of this may relate to the tendency of non-smokers to shame smokers. The more the former group grows, the greater their power to shame. This works because second-hand smoke impacts their health too, as do indirect tolls like strain on healthcare resources and having to care for ill family members. As Jacquet says, “Shaming must remain relevant to the audience’s norms and moral framework.”

Second, “There should be a big gap between the desired and actual behavior.” The smaller the gap, the less effective the shaming will be. A mugger stealing a handbag from an elderly lady is one thing. A fraudster defrauding thousands of retirees out of their savings is quite another. We are predisposed to fairness in general and become quite riled up when unfairness is significant. In particular, Jacquet observes, we take greater offense when the transgression is the fault of a small group, such as a handful of corporations being responsible for the majority of greenhouse gas emissions. It’s also a matter of contrast. Jacquet cites her own research, which finds that “the degree of ‘bad’ relative to the group matters when it comes to bad apples.” The greater the contrast between the behavior of those being shamed and the rest of the group, the stronger the annoyance will be. For instance, the worse a corporation’s pollution, the more people will shame it.

Third, “Formal punishment should be missing.” Shaming is most effective when it is the sole possible avenue for punishment and the transgression would otherwise go ignored. This ignites our sense of fury at injustice. Jacquet points out that the reason shaming works so well in international politics is that it is often a replacement for formal methods of punishment. If a nation commits major human rights abuses, it is difficult for another nation to use the law to punish them, as they likely have different laws. But revealing and drawing attention to the abuses may shame the nation into stopping, as they do not want to look bad to the rest of the world. When shame is the sole tool we have, we use it best.

Fourth, “The transgressor should be sensitive to the source of shaming.” The shamee must consider themselves subject to the same social norms as the shamer. Shaming an organic grocery chain for stocking unethically produced meat would be far more effective than shaming a fast-food chain for the same thing. If the transgressor sees themselves as subject to different norms, they are unlikely to be concerned.

Fifth, “The audience should trust the source of the shaming.” The shaming must come from a respectable, trustworthy, non-hypocritical source. If it does not, its impact is likely to be minimal. A news outlet that only shames one side of the political spectrum on a cross-spectrum issue isn’t going to have much impact.

Sixth, “Shaming should be directed where possible benefits are greatest.” We all have a limited amount of attention and interest in shaming. It should only be applied where it can have the greatest possible benefits and used sparingly, on the most serious transgressions. Otherwise, people will become desensitized, and the shaming will be ineffective. Wherever possible, we should target shaming at institutions, not individuals. Effective shaming focuses on the powerful, not the weak.

Seventh, “Shaming should be scrupulously implemented.” Shaming needs to be carried out consistently. The threat can be more useful than the act itself, which is why shaming may need to be implemented on a regular basis. For instance, an annual report on the companies guilty of the most pollution is more meaningful than a one-off report. Companies know to anticipate it and preemptively change their behavior. Jacquet explains that “shame’s performance is optimized when people reform their behavior in response to its threat and remain part of the group. . . . Ideally, shaming creates some friction but ultimately heals without leaving a scar.”

To summarize, Jacquet writes: “When shame works without destroying anyone’s life, when it leads to reform and reintegration rather than fight or flight, or, even better, when it acts as a deterrent against bad behavior, shaming is performing optimally.”

***

Due to our negative experiences with shame on a personal level, we may be averse to viewing it in the light Jacquet describes: as an important and powerful tool. But “shaming, like any tool, is on its own amoral and can be used to any end, good or evil.” The way we use it is what matters.

According to Jacquet, we should not use shame to target transgressions that have minimal impact or are the fault of individuals with little power. We should use it when the outcome will be a broader benefit for society and when formal means of punishment have been exhausted. It’s important that the shaming be proportional and done intentionally, not as a means of vengeance.

Is Shame Necessary? is a thought-provoking read and a reminder of the power we have as individuals to contribute to meaningful change in the world. One way is to rethink how we view shame.

The Inner Game: Why Trying Too Hard Can Be Counterproductive

The standard way of learning is far from being the fastest or most enjoyable. It’s slow, makes us second-guess ourselves, and interferes with our natural learning process. Here we explore a better way to learn and enjoy the process.

***

It’s the final moment before an important endeavor—a speech, a performance, a presentation, an interview, a date, or perhaps a sports match. Up until now, you’ve felt good and confident about your abilities. But suddenly, something shifts. You feel a wave of self-doubt. You start questioning how well you prepared. The urge to run away and sabotage the whole thing starts bubbling to the surface.

As hard as you try to overcome your inexplicable insecurity, something tells you that you’ve already lost. And indeed, things don’t go well. You choke up, forget what you meant to say, long to just walk out, or make silly mistakes. None of this comes as a surprise—you knew beforehand that something had gone wrong in your mind. You just don’t know why.

Conversely, perhaps you’ve been in a situation where you knew you’d succeeded before you even began. You felt confident and in control. Your mind could focus with ease, impervious to self-doubt or distraction. Obstacles melted away, and abilities you never knew you possessed materialized.

This phenomenon—winning or losing something in your mind before you win or lose it in reality—is what tennis player and coach W. Timothy Gallwey first called “the Inner Game” in his book The Inner Game of Tennis. Gallwey wrote the book in the 1970s when people viewed sport as a purely physical matter. Athletes focused on their muscles, not their mindsets. Today, we know that psychology is in fact of the utmost importance.

Gallwey recognized that physical ability was not the full picture in any sport. In tennis, success is largely psychological, because there are really two games going on: the Inner Game and the Outer Game. If a player doesn’t pay attention to how they play the Inner Game—against their insecurities, their wandering mind, their self-doubt and uncertainty—they will never be as good as they have the potential to be. The Inner Game is fought against your own self-defeating tendencies, not against your actual opponent. Gallwey writes in the introduction:

Every game is composed of two parts, an outer game, and an inner game. . . . It is the thesis of this book that neither mastery nor satisfaction can be found in the playing of any game without giving some attention to the relatively neglected skills of the inner game. This is the game that takes place in the mind of the player, and it is played against such obstacles as lapses in concentration, nervousness, self-doubt, and self-condemnation. In short, it is played to overcome all habits of mind which inhibit excellence in performance. . . . Victories in the inner game may provide no additions to the trophy case, but they bring valuable rewards which are more permanent and which can contribute significantly to one’s success, off the court as well as on.

Ostensibly, The Inner Game of Tennis is a book about tennis. But dig beneath the surface, and it teems with techniques and insights we can apply to any challenge. The book is really about overcoming the internal obstacles we create that prevent us from succeeding. You don’t need to be interested in tennis or even know anything about it to benefit from this book.

One of the most important insights Gallwey shares is that a major cause of losing the Inner Game is trying too hard and interfering with our own natural learning capabilities. Let’s take a look at how we can win the Inner Game in our own lives by recognizing the importance of not forcing things.

The Two Sides of You

Gallwey was not a psychologist. But his experience as both a tennis player and a coach for other players gave him a deep understanding of how human psychology influences playing. The tennis court was his laboratory. As is evident throughout The Inner Game of Tennis, he studied himself, his students, and opponents with care. He experimented and tested out theories until he uncovered the best teaching techniques.

When we’re learning something new, we often talk to ourselves internally. We give ourselves instructions. When Gallwey noticed this in his students, he wondered who was talking to whom. From his observations, he drew his key insight: the idea of Self 1 and Self 2.

Self 1 is the conscious self. Self 2 is the subconscious. The two are always in dialogue.

If both selves can communicate in harmony, the game will go well. More often, this isn’t what happens. Self 1 gets judgmental and critical, trying to instruct Self 2 in what to do. The trick is to quiet Self 1 and let Self 2 follow the natural learning process we are all born competent at; this is the process that enables us to learn as small children. This capacity is within us—we just need to avoid impeding it. As Gallwey explains:

Now we are ready for the first major postulate of the Inner Game: within each player the kind of relationship that exists between Self 1 and Self 2 is the prime factor in determining one’s ability to translate his knowledge of technique into effective action. In other words, the key to better tennis—or better anything—lies in improving the relationship between the conscious teller, Self 1, and the natural capabilities of Self 2.

Self 1 tries to instruct Self 2 using words. But Self 2 responds best to images and internalizing the physical experience of carrying out the desired action.

In short, if we let ourselves lose touch with our ability to feel our actions, by relying too heavily on instructions, we can seriously compromise our access to our natural learning processes and our potential to perform.

Stop Trying so Hard

Gallwey writes that “great music and art are said to arise from the quiet depths of the unconscious, and true expressions of love are said to come from a source which lies beneath words and thoughts. So it is with the greatest efforts in sports; they come when the mind is as still as a glass lake.”

What’s the most common piece of advice you’re likely to receive for getting better at something? Try harder. Work harder. Put more effort in. Pay more attention to what you’re doing. Do more.

Yet what do we experience when we are performing at our best? The exact opposite. Everything becomes effortless. We act without thinking or even giving ourselves time to think. We stop judging our actions as good or bad and observe them as they are. Colloquially, we call this being in the zone. In psychology, it’s known as “flow” or a “peak experience.”

Compare this to the typical tennis lesson. As Gallwey describes it, the teacher wants the student to feel that the cost of the lesson was worthwhile. So they give detailed, continuous feedback. Every time they spot the slightest flaw, they highlight it. The result is that the student does indeed feel the lesson fee is justifiable. They’re now aware of dozens of errors they need to fix—so they book more classes.

In his early days as a tennis coach, Gallwey took this approach. Over time, he saw that when he stepped back and gave his students less feedback, not more, they improved faster. Players would correct obvious mistakes without any guidance. On some deeper level, they knew the correct way to play tennis. They just needed to overcome the habits of the mind getting in the way. Whatever impeded them was not a lack of information. Gallwey writes:

I was beginning to learn what all good pros and students of tennis must learn: that images are better than words, showing better than telling, too much instruction worse than none, and that trying too hard often produces negative results.

There are numerous instances outside of sports in which trying too hard can backfire. Consider a manager who feels the need to constantly micromanage their employees and direct every detail of their work, not allowing any autonomy or flexibility. As a result, the employees lose interest in taking initiative or directing their own work. Instead of getting the perfect work they want, the manager receives lackluster efforts.

Or consider a parent who wants their child to do well at school, so they control their studying schedule, limit their non-academic activities, and offer enticing rewards for good grades. It may work in the short term, but in the long run, the child doesn’t learn to motivate themselves or develop an intrinsic desire to study. Once their parent is no longer breathing down their neck, they don’t know how to learn.

Positive Thinking Backfires

Not only are we often advised to try harder to improve our skills, we’re also encouraged to think positively. According to Gallwey, when it comes to winning the Inner Game, this is the wrong approach altogether.

To quiet Self 1, we need to stop attaching judgments to our performance, either positive or negative. Thinking of, say, a tennis serve as “good” or “bad” shuts down Self 2’s intuitive sense of what to do. Gallwey noticed that “judgment results in tightness and tightness interferes with the fluidity required for accurate and quick movement. Relaxation produces smooth strokes and results from accepting your strokes as they are, even if erratic.”

In order to let Self 2’s sense of the correct action take over, we need to learn to see our actions as they are. We must focus on what is happening, not what is right or wrong. Once we can see clearly, we can tap into our inbuilt learning process, as Gallwey explains:

But to see things as they are, we must take off our judgmental glasses, whether they’re dark or rose-tinted. This action unlocks a process of natural development, which is as surprising as it is beautiful. . . . The first step is to see your strokes as they are. They must be perceived clearly. This can be done only when personal judgment is absent. As soon as a stroke is seen clearly and accepted as it is, a natural and speedy process of change begins.

It’s hard to let go of judgments when we can’t or won’t trust ourselves. Gallwey noticed early on that negative assessments—telling his students what they had done wrong—didn’t seem to help them. He tried only making positive assessments—telling them what they were doing well. Eventually, Gallwey recognized that attaching any sort of judgment to how his students played tennis was detrimental.

Positive and negative evaluations are two sides of the same coin. To say something is good is to imply that its opposite is bad. When Self 2 hears praise from Self 1, it picks up on the underlying criticism.

Clearly, positive and negative evaluations are relative to each other. It is impossible to judge one event as positive without seeing other events as not positive or negative. There is no way to stop just the negative side of the judgmental process.

The trick may be to get out of the binary of good or bad completely by doing more showing and asking questions like “Why did the ball go that way?” or “What are you doing differently now than you did last time?” Sometimes, getting people to articulate how they are doing by observing their own performance removes the judgments and focuses on developmental possibilities. When we have the right image in mind, we move toward it naturally. Value judgments get in the way of that process.

The Inner Game Way of Learning

We’re all constantly learning and picking up new skills. But few of us pay much attention to how we learn and whether we’re doing it in the best possible way. Often, what we think of as “learning” primarily involves berating ourselves for our failures and mistakes, arguing with ourselves, and not using the most effective techniques. In short, we try to brute-force ourselves into adopting a capability. Gallwey describes the standard way of learning as follows:

Step 1: Criticize or judge past behavior.

Step 2: Tell yourself to change, instructing with word commands repeatedly.

Step 3: Try hard; make yourself do it right.

Step 4: Critical judgment about results leading to Self 1 vicious cycle.

The standard way of learning is far from being the fastest or most enjoyable. It’s slow, it makes us feel awful about ourselves, and it interferes with our natural learning process. Instead, Gallwey advocates following the Inner Game way of learning.

First, we must observe our existing behavior without attaching any judgment to it. We must see what is, not what we think it should be. Once we are aware of what we are doing, we can move on to the next step: picturing the desired outcome. Gallwey advocates images over outright commands because he believes visualizing actions is the best way to engage Self 2’s natural learning capabilities. The next step is to trust Self 2 and “let it happen!” Once we have the right image in mind, Self 2 can take over—provided we do not interfere by trying too hard to force our actions. The final step is to continue “nonjudgmental, calm observation of the results” in order to repeat the cycle and keep learning. It takes nonjudgmental observation to unlearn bad habits.

Conclusion

Towards the end of the book, Gallwey writes:

Clearly, almost every human activity involves both the outer and inner games. There are always external obstacles between us and our external goals, whether we are seeking wealth, education, reputation, friendship, peace on earth or simply something to eat for dinner. And the inner obstacles are always there; the very mind we use in obtaining our external goals is easily distracted by its tendency to worry, regret, or generally muddle the situation, thereby causing needless difficulties within.

Whatever we’re trying to achieve, it would serve us well to pay more attention to the internal, not just the external. If we can overcome the instinct to get in our own way and be more comfortable trusting in our innate abilities, the results may well be surprising.

Good Science, Bad Science, Pseudoscience: How to Tell the Difference

In a digital world that clamors for clicks, news is sensationalized and “facts” change all the time. Here’s how to discern what is trustworthy and what is hogwash.

***

Most of us are never taught how to evaluate science or how to parse the good from the bad unless we’ve studied it formally. Yet science dictates every area of our lives and is vital for helping us understand how the world works. Appraising research for yourself can take considerable time and effort, however. Often, it can be enough to consult an expert or read a trustworthy source.

But some decisions require us to understand the underlying science. There is no way around it. Many of us hear about scientific developments from news articles and blog posts. Some sources put the work into presenting useful information. Others manipulate or misinterpret results to get more clicks. So we need the thinking tools necessary to know what to listen to and what to ignore. When it comes to important decisions, like knowing what individual action to take to minimize your contribution to climate change or whether to believe the friend who cautions against vaccinating your kids, being able to assess the evidence is vital.

Much of the growing (and concerning) mistrust of scientific authority is based on a misunderstanding of how it works and a lack of awareness of how to evaluate its quality. Science is not some big immovable mass. It is not infallible. It does not pretend to be able to explain everything or to know everything. Furthermore, there is no such thing as “alternative” science. Science does involve mistakes. But we have yet to find a system of inquiry capable of achieving what it does: move us closer and closer to truths that improve our lives and understanding of the universe.

“Rather than love, than money, than fame, give me truth.”

— Henry David Thoreau

There is a difference between bad science and pseudoscience. Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases. Often, it’s produced with the best of intentions, just by researchers who are responding to skewed incentives.

Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove. Pseudoscience focuses on finding evidence that confirms its claims, disregarding disconfirmation. Practitioners invent narratives to preemptively dismiss any actual science contradicting their views. Pseudoscience may also adopt the appearance of actual science to look more persuasive.

While the tools and pointers in this post are geared towards identifying bad science, they will also help with easily spotting pseudoscience.

Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis. It takes many repetitions of applying this method to build reasonable support for a hypothesis.

In order for a hypothesis to count as such, there must be evidence that, if collected, would disprove it.

In this post, we’ll talk you through two examples of bad science to point out some of the common red flags. Then we’ll look at some of the hallmarks of good science you can use to sort the signal from the noise. We’ll focus on the type of research you’re likely to encounter on a regular basis, including medicine and psychology, rather than areas less likely to be relevant to your everyday life.

[Note: we will use the terms “research” and “science” and “researcher” and “scientist” interchangeably here.]

Power Posing

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” ―Isaac Asimov

First, here’s an example of flawed science from psychology: power posing. A 2010 study by Dana Carney, Andy J. Yap, and Amy Cuddy entitled “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance” claimed “open, expansive” poses caused participants to experience elevated testosterone levels, reduced cortisol levels, and greater risk tolerance. These are all excellent things in a high-pressure situation, like a job interview. The abstract concluded that “a person can, via a simple two-minute pose, embody power and instantly become more powerful.” The idea took off. It spawned hundreds of articles, videos, and tweets espousing the benefits of including a two-minute power pose in your day.

Yet at least eleven follow-up studies, many led by Joseph Cesario of Michigan State University, including “‘Power Poses’ Don’t Work, Eleven New Studies Suggest,” failed to replicate the results. None found that power posing has a measurable impact on people’s performance in tasks or on their physiology. While subjects did report a subjective feeling of increased powerfulness, their performance did not differ from that of subjects who did not strike a power pose.

Carney, one of the researchers behind the original study, has since changed her mind about the effect, stating that she no longer believes its results. Unfortunately, this isn’t always how researchers respond when confronted with evidence discrediting their prior work. We all know how uncomfortable changing our minds is.

The notion of power posing is exactly the kind of nugget that spreads fast online. It’s simple, free, promises dramatic benefits with minimal effort, and is intuitive. We all know posture is important. It has a catchy, memorable name. Yet examining the details of the original study reveals a whole parade of red flags. The study had 42 participants. That might be reasonable for a preliminary or pilot study, but it is in no way sufficient to “prove” anything. The study was not blinded. Feedback from participants was self-reported, which is notorious for being biased and inaccurate.
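
To get an intuitive sense of just how small 42 participants is, here is a back-of-the-envelope power calculation in Python using statsmodels. The assumed effect size (Cohen’s d = 0.5, a conventional “medium” effect) is our own illustrative choice, not a number from the study:

```python
# Rough power check for a two-group comparison with 21 subjects per group
# (42 total, as in the power-posing study). The effect size is an assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Probability of detecting a true medium-sized effect with n = 21 per group:
power = analysis.power(effect_size=0.5, nobs1=21, alpha=0.05, ratio=1.0)
print(f"Power with 21 per group: {power:.2f}")  # roughly 0.36

# Per-group sample size needed to reach the conventional 80% power:
needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Per-group n for 80% power: {needed:.0f}")  # roughly 64
```

In other words, even if a medium-sized effect were real, a study this size would miss it most of the time, and the positive results it does find are more likely to be flukes.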

There is also a clear correlation/causation issue. Powerful, dominant animals tend to use expansive body language that exaggerates their size. Humans often do the same. But that doesn’t mean it’s the pose making them powerful. Being powerful could make them pose that way.

A TED Talk in which Amy Cuddy, the study’s co-author, claimed power posing could “significantly change the way your life unfolds” is one of the most popular to date, with tens of millions of views. The presentation of the science in the talk is also suspect. Cuddy makes strong claims with a single, small study as justification. She portrays power posing as a panacea. Likewise, the original study’s claim that a power pose makes someone “instantly become more powerful” is suspiciously strong.

This is one of many examples of psychological studies about small behavioral tweaks that have not stood up to scrutiny. We’re not singling out the power pose study as being unusually flawed or in any way fraudulent. The researchers had clear good intentions and a sincere belief in their work. It’s a strong example of why we should go straight to the source if we want to understand research. Coverage elsewhere is unlikely to even mention methodological details or acknowledge any shortcomings. It would ruin the story. We even covered power posing on Farnam Street in 2016—we’re all susceptible to taking “scientific” results seriously without checking the validity of the underlying science.

It is a good idea to be skeptical of research promising anything too dramatic or extreme with minimal effort, especially without substantial evidence. If it seems too good to be true, it most likely is.

Green Coffee Beans

“An expert is a person who has made all the mistakes that can be made in a very narrow field.” ―Niels Bohr

The world of weight-loss science is one where bad science is rampant. We all know, deep down, that we cannot circumvent the need for healthy eating and exercise. Yet the search for a magic bullet, offering results without effort or risk, continues. Let’s take a look at one study that is a masterclass in bad science.

Entitled “Randomized, Double-Blind, Placebo-Controlled, Linear Dose, Crossover Study to Evaluate the Efficacy and Safety of a Green Coffee Bean Extract in Overweight Subjects,” it was published in 2012 in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. On the face of it, and to the untrained eye, the study may appear legitimate, but it is rife with serious problems, as Scott Gavura explained in the article “Dr. Oz and Green Coffee Beans – More Weight Loss Pseudoscience” in the publication Science-Based Medicine. The original paper was later retracted by its authors. The Federal Trade Commission (FTC), which described the study as “botched,” ordered the supplement manufacturer that funded it to pay a $3.5 million fine for using it in marketing materials.

The Food and Drug Administration (FDA) recommends that weight-loss studies include at least 3,000 participants receiving the active medication and at least 1,500 receiving a placebo, all for a minimum period of 12 months. This study used a mere 16 subjects, with no clear selection criteria or explanation. None of the researchers involved had medical experience or had published related research. They did not disclose the conflict of interest inherent in the funding source. The paper did not describe efforts to avoid confounding factors and was vague, even inconsistent, about whether subjects changed their diet and exercise. The study was not double-blinded, despite claiming to be. It has not been replicated.

The FTC reported that the study’s lead investigator “repeatedly altered the weights and other key measurements of the subjects, changed the length of the trial, and misstated which subjects were taking the placebo or GCA during the trial.” A meta-analysis by Rachel Buchanan and Robert D. Beckett, “Green Coffee for Pharmacological Weight Loss” published in the Journal of Evidence-Based Complementary & Alternative Medicine, failed to find evidence for green coffee beans being safe or effective; all the available studies had serious methodological flaws, and most did not comply with FDA guidelines.

Signs of Good Science

“That which can be asserted without evidence can be dismissed without evidence.” ―Christopher Hitchens

We’ve inverted the problem and considered some of the signs of bad science. Now let’s look at some of the indicators a study is likely to be trustworthy. Unfortunately, there is no single sign a piece of research is good science. None of the signs mentioned here are, alone, in any way conclusive. There are caveats and exceptions to all. These are simply factors to evaluate.

It’s Published by a Reputable Journal

“The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations.” —Karl Popper

A journal, any journal, publishing a study says little about its quality. Some will publish any research they receive in return for a fee. A few so-called “vanity publishers” claim to have a peer-review process, yet they typically have a short gap between receiving a paper and publishing it. We’re talking days or weeks, not the expected months or years. Many predatory publishers do not even make any attempt to verify quality.

No journal is perfect. Even the most respected journals make mistakes and publish low-quality work sometimes. However, anything that is not published research or based on published research in a journal is not worth consideration. Not as science. A blog post saying green smoothies cured someone’s eczema is not comparable to a published study. The barrier is too low. If someone cared enough about using a hypothesis or “finding” to improve the world and educate others, they would make the effort to get it published. The system may be imperfect, but reputable researchers will generally make the effort to play within it to get their work noticed and respected.

It’s Peer Reviewed

Peer review is a standard process in academic publishing. It’s intended as an objective means of assessing the quality and accuracy of new research. Uninvolved researchers with relevant experience evaluate papers before publication. They consider factors like how well a paper builds upon pre-existing research and whether its results are statistically significant. Peer review should be double-blinded: the researchers don’t know who is reviewing their work, and the reviewers don’t know who the researchers are.

Publishers only perform a cursory “desk check” before moving on to peer review. This checks for major errors, nothing more. Publishers cannot have the expertise necessary to vet the quality of every paper they handle; hence the need for external experts. The number of reviewers and the strictness of the process depend on the journal. Reviewers either declare a paper unpublishable or suggest improvements. It is rare for them to suggest publishing without modifications.

Sometimes several rounds of modifications prove necessary. It can take years for a paper to see the light of day, which is no doubt frustrating for the researcher, but the process helps ensure published work has fewer mistakes and weak areas.

Pseudoscientific practitioners will often claim they cannot get their work published because peer reviewers suppress anything contradicting prevailing doctrines. Good researchers know having their work challenged and argued against is positive. It makes them stronger. They don’t shy away from it.

Peer review is not a perfect system. Seeing as it involves humans, there is always room for bias and manipulation. In a small field, it may be easy for a reviewer to get past the double-blinding. However, as it stands, peer review seems to be the best available system. In isolation, it’s not a guarantee that research is perfect, but it’s one factor to consider.

The Researchers Have Relevant Experience and Qualifications

One of the red flags in the green coffee bean study was that the researchers involved had no medical background or experience publishing obesity-related research.

While outsiders can sometimes make important advances, researchers should have relevant qualifications and a history of working in that field. It is too difficult to make scientific advancements without the necessary background knowledge and expertise. If someone cares enough about advancing a given field, they will study it. If it’s important, verify their backgrounds.

It’s Part of a Larger Body of Work

“Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.” ―Jules Verne

We all like to stand behind the maverick. But we should be cautious of doing so when it comes to evaluating the quality of science. On the whole, science does not progress in great leaps. It moves along millimeter by millimeter, gaining evidence in increments. Even if a piece of research is presented as groundbreaking, it has years of work behind it.

Researchers do not work in isolation. Good science is rarely, if ever, the result of one person or even one organization. It comes from a monumental collective effort. So when evaluating research, it is important to see if other studies point to similar results and if it is an established field of work. For this reason, meta-analyses, which analyze the combined results of many studies on the same topic, are often far more useful to the public than individual studies. Scientists are humans and they all make mistakes. Looking at a collective body of work helps smooth out any problems. Individual studies are valuable in that they further the field as a whole, allowing for the creation of meta-studies.

Science is about evidence, not reputation. Sometimes well-respected researchers, for whatever reason, produce bad science. Sometimes outsiders produce amazing science. What matters is the evidence they have to support it. While an established researcher may have an easier time getting support for their work, the overall community accepts work on merit. When we look to examples of unknowns who made extraordinary discoveries out of the blue, they always had extraordinary evidence for it.

Questioning the existing body of research is not inherently bad science or pseudoscience. Doing so without a remarkable amount of evidence is.

It Doesn’t Promise a Panacea or Miraculous Cure

Studies that promise anything a bit too amazing can be suspect. This is more common in media reporting of science or in research used for advertising.

In medicine, a panacea is something that can supposedly solve all, or many, health problems. These claims are rarely substantiated by anything even resembling evidence. The more outlandish the claim, the less likely it is to be true. Occam’s razor teaches us that the simplest explanation with the fewest inherent assumptions is most likely to be true. This is a useful heuristic for evaluating potential magic bullets.

It Avoids or at Least Discloses Potential Conflicts of Interest

A conflict of interest is anything that incentivizes producing a particular result. It distorts the pursuit of truth. A government study into the health risks of recreational drug use will be biased towards finding evidence of harm. A study of the benefits of breakfast cereal funded by a cereal company will be biased towards finding plenty of benefits. Researchers do have to get funding from somewhere, so a funding source does not automatically make a study bad science. But research without conflicts of interest is more likely to be good science.

High-quality journals require researchers to disclose any potential conflicts of interest. But not all journals do. Media coverage of research may not mention this (another reason to go straight to the source). And people do sometimes lie. We don’t always know how unconscious biases influence us.

It Doesn’t Claim to Prove Anything Based on a Single Study

In the vast majority of cases, a single study is a starting point, not proof of anything. The results could be random chance, or the result of bias, or even outright fraud. Only once other researchers replicate the results can we consider a study persuasive. The more replications, the more reliable the results are. If attempts at replication fail, this can be a sign the original research was biased or incorrect.

A note on anecdotes: they’re not science. Anecdotes, especially from people close to us or those with a lot of letters behind their name, carry disproportionate clout. But hearing something from one person, no matter how persuasive, should not be enough to discredit published research.

Science is about evidence, not proof. And evidence can always be discredited.

It Uses a Reasonable, Representative Sample Size

A representative sample represents the wider population, not one segment of it. If it does not, then the results may only be relevant for people in that demographic, not everyone. Bad science will often also use very small sample sizes.

There is no set target for what makes a large enough sample size; it all depends on the nature of the research. In general, the larger, the better. The exception is in studies that may put subjects at risk, which use the smallest possible sample to achieve usable results.
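As a rough illustration of how researchers think about “large enough,” here is a minimal sketch using the power-analysis tools in the statsmodels Python library. The effect size and thresholds below are illustrative assumptions, not values drawn from any study discussed in this article.

```python
# Estimate how many participants per group a two-sample study would need
# to reliably detect an assumed effect. All inputs are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,  # assumed small-to-medium effect (Cohen's d)
    alpha=0.05,       # conventional significance threshold
    power=0.8,        # 80% chance of detecting the effect if it is real
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 176
```

Note that the smaller the true effect, the more participants a study needs; a 16-subject trial like the green coffee bean study could reliably detect only very large effects.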

In areas like nutrition and medicine, it’s also important for a study to last a long time. A study looking at the impact of a supplement on blood pressure over a week is far less useful than one over a decade. Long-term data smooths out fluctuations and offers a more comprehensive picture.

The Results Are Statistically Significant

Statistical significance refers to how unlikely it is that a study’s results arose from pure random chance. It is usually assessed with a p-value: the probability of seeing results at least as extreme as those observed if chance alone were at work. The threshold for statistical significance varies between fields. Check whether the reported results clear the field’s accepted threshold. If they don’t, the study is not worth paying attention to.
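To make the idea concrete, here is a toy illustration in Python using SciPy’s two-sample t-test. The data are simulated for demonstration; they do not come from any real study.

```python
# Simulate a control group and a treatment group, then test whether the
# observed difference between them is statistically significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=15, size=50)    # no effect
treatment = rng.normal(loc=107, scale=15, size=50)  # genuine small effect

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"p-value: {p_value:.3f}")
# A p-value below the field's threshold (often 0.05) suggests the difference
# is unlikely to be chance alone -- evidence, not proof.
```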

It Is Well Presented and Formatted

“When my information changes, I alter my conclusions. What do you do, sir?” ―John Maynard Keynes

As basic as it sounds, we can expect good science to be well presented and carefully formatted, without prominent typos or sloppy graphics.

It’s not that bad presentation makes something bad science. It’s more the case that researchers producing good science have an incentive to make it look good. As Michael J. I. Brown of Monash University explains in “How to Quickly Spot Dodgy Science,” this is far more than a matter of aesthetics. The way a paper looks can be a useful heuristic for assessing its quality. Researchers who are dedicated to producing good science can spend years on a study, fretting over its results and investing in gaining support from the scientific community. This means they are less likely to present work looking bad. Brown gives an example of looking at an astrophysics paper and seeing blurry graphs and misplaced image captions, then finding more serious methodological issues upon closer examination. In addition to other factors, sloppy formatting can sometimes be a red flag. At the minimum, a thorough peer-review process should eliminate glaring errors.

It Uses Control Groups and Double-Blinding

A control group serves as a point of comparison in a study. The control group should be people as similar as possible to the experimental group, except they’re not subject to whatever is being tested. The control group may also receive a placebo to see how the outcome compares.

Blinding refers to the practice of obscuring which group participants are in. For a single-blind experiment, the participants do not know if they are in the control or the experimental group. In a double-blind experiment, neither the participants nor the researchers know. This is the gold standard and is essential for trustworthy results in many types of research. If people know which group they are in, the results are not trustworthy. If researchers know, they may (unintentionally or not) nudge participants towards the outcomes they want or expect. So a double-blind study with a control group is far more likely to be good science than one without.
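For intuition, here is a toy sketch of how assignment might be blinded in practice. The participant names, group sizes, and labels are invented for illustration; real trials use far more rigorous randomization procedures.

```python
# Randomly assign participants to two groups identified only by coded labels,
# so neither participants nor researchers know who receives the treatment.
import random

random.seed(7)
participants = [f"subject_{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

half = len(participants) // 2
assignment = {p: ("A" if i < half else "B") for i, p in enumerate(participants)}

# Only an uninvolved third party holds the key mapping "A"/"B" to
# placebo/treatment; it stays sealed until the data are analyzed.
print(assignment)
```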

It Doesn’t Confuse Correlation and Causation

In the simplest terms, two things are correlated if they happen at the same time. Causation is when one thing causes another thing to happen. For example, one large-scale study entitled “Are Non-Smokers Smarter than Smokers?” found that people who smoke tobacco tend to have lower IQs than those who don’t. Does this mean smoking lowers your IQ? It might, but there is also a strong link between socio-economic status and smoking. People with low incomes are, on average, likely to have lower IQs than those with higher incomes due to factors like worse nutrition, less access to education, and sleep deprivation. According to a study by the Centers for Disease Control and Prevention entitled “Cigarette Smoking and Tobacco Use Among People of Low Socioeconomic Status,” people of low socio-economic status are also more likely to smoke and to do so from a young age. There might be a correlation between smoking and IQ, but that doesn’t mean causation.

Disentangling correlation and causation can be difficult, but good science will take this into account and may detail potential confounding factors or the efforts made to avoid them.
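A small simulation makes the trap easy to see. In this hypothetical sketch, a hidden confounder drives both variables, producing a strong correlation even though neither variable causes the other; all variables here are invented for illustration.

```python
# A lurking confounder (think socio-economic status) influences both x
# (say, smoking rate) and y (say, test scores). x never causes y, yet
# the two end up strongly correlated.
import numpy as np

rng = np.random.default_rng(0)
confounder = rng.normal(size=10_000)
x = confounder + rng.normal(scale=0.5, size=10_000)
y = confounder + rng.normal(scale=0.5, size=10_000)

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # about 0.8, zero causation
```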

Conclusion

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.” ―Claude Lévi-Strauss

The points raised in this article are all aimed at the linchpin of the scientific method—we cannot necessarily prove anything; we must consider the most likely outcome given the information we have. Bad science is generated by those who are willfully ignorant or are so focused on trying to “prove” their hypotheses that they fudge results and cherry-pick to shape their data to their biases. The problem with this approach is that it transforms what could be empirical and scientific into something subjective and ideological.

When we look to disprove what we know, we are able to approach the world with a more flexible way of thinking. If we are unable to defend what we know with reproducible evidence, we may need to reconsider our ideas and adjust our worldviews accordingly. Only then can we properly learn and begin to make forward steps. Through this lens, bad science and pseudoscience are simply the intellectual equivalent of treading water, or even sinking.

Article Summary

  • Most of us are never taught how to evaluate science or how to parse the good from the bad. Yet it is something that dictates every area of our lives.
  • Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases.
  • Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove.
  • Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis.
  • Science is about evidence, not proof. And evidence can always be discredited.
  • In science, if it seems too good to be true, it most likely is.

Signs of good science include:

  • It’s Published by a Reputable Journal
  • It’s Peer Reviewed
  • The Researchers Have Relevant Experience and Qualifications
  • It’s Part of a Larger Body of Work
  • It Doesn’t Promise a Panacea or Miraculous Cure
  • It Avoids or at Least Discloses Potential Conflicts of Interest
  • It Doesn’t Claim to Prove Anything Based on a Single Study
  • It Uses a Reasonable, Representative Sample Size
  • The Results Are Statistically Significant
  • It Is Well Presented and Formatted
  • It Uses Control Groups and Double-Blinding
  • It Doesn’t Confuse Correlation and Causation

Is Vulnerability a Choice?

Being vulnerable is not a choice. It’s a reality of living. What we do with that vulnerability can either open doors to deeper connection, or throw up walls that stifle growth and fulfillment.

***

Vulnerability: the quality or state of being exposed to the possibility of being attacked or harmed, either physically or emotionally.

Given the potential consequences, why would anyone ever choose to be vulnerable? Who wants to risk an emotional or physical attack?

At the basic biological level, it seems to make very little sense to be vulnerable. When we are, we can more easily get hurt. We can get physically maimed or killed by a predator. Emotional attacks can make us afraid of rejection. Since the vast majority of us want to avoid death and pass on our genes, avoiding vulnerability seems to make perfect sense. Be tough in order to increase your chances of a long life. Don’t give anyone the opportunity to hurt you.

However, humans usually want to do more than just survive. We focus on the quality of our lives as well. Yes, we want our lives to be long. But we also want them to be good.

Part of a good life is having good relationships. We are social creatures and live longer, healthier lives when we have people around us that we trust and love. We want to be around people who can make us laugh and help us through life’s inevitable hard times. Our lives are less stressful when we have people with whom we can relax and be authentic. Without genuine vulnerability, it’s impossible to build the types of relationships that can provide comfort and increase resilience. The risks of vulnerability may be high, but the rewards of positive, strong relationships are even higher.

The reality is, we are vulnerable in some way at all times. We are vulnerable to viruses and accidents, misunderstandings and the pain caused by our fears and anxieties. Vulnerability is a part of life for all of us. Having close relationships where we can be vulnerable is actually a way to reduce our overall weakness. As Dr. Sue Johnson said on The Knowledge Project, “We need connection with others like we need oxygen. We’re way too vulnerable without it.”

The only choice we really have when it comes to vulnerability is the choice to acknowledge it or not. There is no doubt it can be hard to be vulnerable, especially if we didn’t have positive experiences with it as children. But social connections sustain us, and meaningful social connections are hard to build and maintain without mutual vulnerability.

Some people constantly pretend they have no vulnerabilities. Those people are frustrating to be around. Why? Because everyone is vulnerable in some way, so we know that those who say they aren’t are lying. No one likes to spend time around people who can’t be honest. Furthermore, people who refuse to acknowledge their vulnerabilities (at least to themselves) don’t make great friends or partners because we can’t learn much from them to help us process our own vulnerabilities. Even if it’s hard to pinpoint, we sense something is missing in our interactions with them. They don’t trust us enough to risk hurt.

Someone who goes on about how everything in their life is okay can’t offer much insight into how to deal with things that are most definitely not okay. And someone who thinks they are infallible tends to blame others when things don’t work out. They can’t admit to being wrong, which is another drawback to having them as a friend.

In her TED talk on the subject, author Brené Brown says, “The more afraid we are, the more vulnerable we are, the more afraid we are.” We develop these lists of all the things we won’t do and all the ways in which we won’t engage with people in order to protect ourselves. Our vulnerabilities get registered as something that could be exploited to hurt us. So we put up big buffers of denial and anger because it seems that if we admit we are afraid of something, our whole lives are going to come crashing down as people rush in to take advantage of our weaknesses. Except that isn’t true.

When we allow ourselves to be vulnerable (most often to those we are closest to, but also occasionally to others when the situation would benefit from us putting ourselves out there), we can create amazing reciprocal interactions that empower all parties.

When we are able to say the following: “I don’t know,” “I made a mistake,” “I’m sorry for causing you pain,” “I’m scared,” “I cried last night,” or “I’m struggling with this,” we actually free up energy because we no longer have to put effort into maintaining our buffers and our illusions. When we open up and admit to our vulnerabilities, we give people the opportunity to safely admit to theirs as well. We might hear back: “I make mistakes all the time,” “I’m scared as well,” “I cry too,” or “I also struggle with that.” And in that shared space, we can let go of some of the fear and make room for a deeper connection. When we are vulnerable with someone who doesn’t judge us for it, we can grow stronger. We can become less affected by situations that normally cause us stress.

Most importantly, we strengthen our connection with the people we are sharing with.

Although someone may react by ridiculing you when you admit to a fear, a far more common reaction is respect for your bravery and a sigh of relief over a shared circumstance. Someone doesn’t have to share your particular fear to feel a connection. We’re all afraid of something, and by being honest about your fears, you have signaled that others can share their fears with you in return.

We have written before about the social media prism and how it distorts reality, leading most of us to believe we are the only ones whose lives suck sometimes. The endless posts about career successes and fabulous vacations are really a large-scale representation of the fear of vulnerability. Complex, varied lives become little more than a glittering highlight reel. We never get to see the outtakes.

But coming clean about the downs increases the value of sharing the ups. At the very least, it’s more relatable. We learn more through failure than we do through success. And since we can’t try everything, learning from others’ failures is exceptionally valuable. To just hear the story of the person who made it big and sold their company is not useful. To hear about their multiple failures, their trials, their stops and starts and all the times they doubted themselves—now that’s an insight worth sharing.

Being vulnerable starts with being honest with yourself. How can you get better if you can’t admit that you could be better? How are you going to be a better partner or friend if you can’t admit that sometimes you aren’t a great one? How will you learn from your mistakes if you don’t acknowledge making any?

When we share that vulnerability and find people we can be open with, we form valuable connections. After all, to really trust someone, we need to know if they are going to be there when we are vulnerable. As Dr. Sue Johnson explained on The Knowledge Project, “When you can be vulnerable for a moment, and that person tunes in and cares about your vulnerability, that’s the person to go with.” In this way, vulnerability can also serve as a litmus test for your close relationships. If you can’t be vulnerable with someone, why bother? What can you really get from a relationship in which you can never relax and be yourself?

When we have people with whom we can be vulnerable, we actually reduce our exposure to potential harm and improve the quality of our life. By putting ourselves out there and risking hurt, we often find that we create more meaningful interactions with the people in our lives. When we have people we can trust with our deepest vulnerabilities, we increase our ability to be resilient in the face of chance and change.