Google recently presented a white paper at a digital-security conference in Germany, in which the search giant detailed the steps it is taking across its various divisions—YouTube, Google News and Google Search—to fight misinformation and disinformation. The company said it is working hard in a number of areas, including using quality signals to help surface better content, which involves relying on human search curators to determine whether something is a high-quality result for a specific query. Google also noted that it has been adding more “context” for searches, including links to related information, as well as different ways of notifying users that certain results have been fact-checked by reliable organizations. And the company said it is trying to crack down on trolls and hackers who hijack accounts or pretend to be someone they are not.
These efforts are clearly worthwhile, given the influence and reach that Google’s products have. Facebook continues to get the bulk of the press (mostly bad) for its role in helping to weaponize misinformation networks during the 2016 election and elsewhere, but Google’s search and recommendation algorithms arguably have more impact—it’s just not as visible or as obvious as Facebook’s. The search company has taken full advantage of its somewhat lower profile whenever the topic comes up: During multiple congressional hearings into misinformation and the election, Google has pushed the idea that since it isn’t a social network, it doesn’t suffer from the same kinds of problems as Facebook, where social sharing makes misinformation go viral and algorithms exacerbate the problem.
But that’s only partly true. While Google Search may not be subject to those kinds of effects, YouTube is very much a sharing-based social network, and as a result very similar social dynamics drive misinformation on that platform—and very similar problems are caused by the recommendation algorithms that determine what content users see once they finish watching a video. In fact, a number of research and news reports have detailed just how easy it is to get sucked into a rabbit hole of conspiracy theories on YouTube, even after watching something innocuous.
According to one recent study, a significant number of users wound up skeptical that the earth is round after watching YouTube videos, even though most of them said they were not interested in such theories before they watched. YouTube said in January that it had made changes to its recommendation algorithm to stop conspiracy theories from being promoted. Part of the challenge with getting rid of this kind of content is hinted at in Google’s security report. As the company put it, “it can be extremely difficult (or even impossible) for humans or technology to determine the veracity of, or intent behind, a given piece of content, especially when it relates to current events.” Google goes on to say: “Reasonable people can have different perspectives on the right balance between risks of harm to good faith, free expression, and the imperative to tackle disinformation.”
The argument, then, is this: Not only is it hard for even the smartest algorithms (and richest companies) to figure out what is capital-T True and what is false, there’s also the question of how much speech Google and other platforms should effectively be censoring. If a bunch of freaks on the Internet want to think that the world is flat or that 9/11 was a hoax, so what? Should their juvenile videos be removed from the Internet completely?
That’s not the only thing that makes cracking down on misinformation difficult for a platform like YouTube, however. As a recent New York Times story noted, one of the implications of a crackdown is that the company might actually have to start censoring—or at least down-ranking—videos from YouTube stars like Shane Dawson, who has gained tens of millions of followers in part by posting videos questioning the moon landing and other accepted historical facts. Not only would that make YouTube distinctly unpopular with the heavy users who rely on it for income, it could also threaten the company’s revenue stream. How strongly will Google continue to push back against misinformation when its own livelihood is at stake? For that answer, check the next white paper…
Here’s more on Google, YouTube and misinformation:
- Brand exodus: A number of leading brands, including Walt Disney Co. and Nestle, have reportedly pulled their ads from YouTube after allegations that pedophiles are sending each other messages in the comments on videos featuring girls exercising or trying on clothes—messages that pinpoint the exact moment in the video when skin or body parts or other sexually suggestive material can be found. As The Verge notes, this isn’t the first time YouTube has faced an advertiser boycott over similar problems.
- Alt influence: In a recent piece for CJR, Zoë Beery wrote about how a loose network of alt-right YouTubers including Daily Wire editor Ben Shapiro and an account called Patriot Nurse create confusion around breaking news stories by circulating fabricated details or pushing the narrative to ridiculous extremes. Misinformation researchers like Becca Lewis of Data & Society call them the “alternative influence network.”
- Anti-vax: A report from BuzzFeed says that search phrases such as “should I vaccinate my kids” often pull up videos that contain dubious scientific claims or describe vaccinations as harmful. For example, a search from a freshly created account for information on vaccines brought up a recommended video called “Mom Researches Vaccines, Discovers Vaccination Horrors and Goes Vaccine Free.”
- Time spent: Guillaume Chaslot, a former Google engineer who helped create the recommendation algorithms used by YouTube and now researches their negative effects, told CJR that the search giant’s number one concern when he worked there was not the accuracy of videos but the amount of time that users spent clicking on them, an incentive that inevitably led the algorithm to recommend more controversial or shocking videos.
Other notable stories:
- Just two weeks before she was arrested by the Department of Justice in the Philippines and charged with “cyber libel,” journalist and Rappler founder Maria Ressa spoke with Recode writer and New York Times columnist Kara Swisher about what is happening in her country and how totalitarian regimes pose an ongoing danger to journalism.
- Sarah Schmalbach of the Lenfest Local Lab in Philadelphia writes about how she and her team used text messaging to help inform citizens in that city about election issues. Text messaging is more accessible than news apps for many people, she says, and so-called “open rates” are much higher than for other methods such as email.
- On the one-year anniversary of the death of Slovak investigative reporter Jan Kuciak, press-freedom advocacy groups including Reporters Without Borders, Index on Censorship, and the European Federation of Journalists sent an open letter to the authorities in that country asking why death threats against him were never investigated.
- During a recent discussion at Harvard with law professor Jonathan Zittrain, Facebook CEO Mark Zuckerberg said “we don’t want a society where there’s a camera in everyone’s living room,” at which point Zittrain reminded Zuckerberg that this is exactly what Facebook has done with its Portal video-calling device.
- Rutger Bregman, a Dutch historian who gained Internet fame by challenging rich attendees at the exclusive Davos summit, is going viral again for a video clip of an appearance on the Tucker Carlson show that never aired. After Bregman calls Carlson “a millionaire funded by billionaires,” the Fox host calls him a “moron” with a “tiny brain,” and then adds “go fuck yourself.”
- USA Today analyzed more than 3 million tweets and thousands of public Facebook posts following the incident involving Covington high-school students in Washington, DC and a Native American elder, and found that tweets from suspicious accounts helped trigger what became a “viral storm” of condemnation.
- Lawyers representing Nicholas Sandmann, the Covington student who confronted Native American elder Nathan Phillips at the demonstration in Washington, have filed a $250 million defamation lawsuit against The Washington Post, alleging that it “wrongfully targeted and bullied Nicholas because he was the white, Catholic student wearing a red ‘Make America Great Again’ souvenir cap.”
- Sam Thielman of the Tow Center at Columbia interviewed BuzzFeed national security reporter Jason Leopold for CJR. Leopold and a colleague wrote a controversial story alleging that former Trump lawyer Michael Cohen was told by Trump to lie to Congress about negotiations with the Russian government for a Trump tower in Moscow.
- Researchers for the Alliance for Securing Democracy, a bipartisan initiative backed by the German Marshall Fund, write about a seemingly innocuous online media company called Soapbox, whose output consists of catchy, slickly produced videos on political and cultural topics. In fact, the company was created by Russian state-backed media.
ICYMI: ‘Mansplaining’ and its offspring