
Why Tech Will Never Be Able to Predict the Next Mass Shooting

President Trump wants social media companies to develop the tools to catch red flags. But this isn’t Minority Report.


Following the horrific mass shootings in El Paso and Dayton over the weekend, President Trump has called on private enterprises, particularly social media companies, to develop new tools for surfacing “red flags” that could help identify violent shooters before they act. Trump says these tools could enable the government to act earlier to prevent mass casualties.


The problem? Given the current state of AI technology and social media platforms’ ongoing failure to protect human life, this simply isn’t a feasible idea. Here’s why.

Technically, this is a nearly impossible request.

Predictive algorithms are hard to build. Even simple algorithms need a lot of data to make predictions. From a social media perspective, there’s an abundance of input data, covering everything from the number of followers to the frequency of posts to the average length of those posts. The real problem is that a predictive algorithm also needs output data: known outcomes it can learn to associate with those inputs.

In short, the algorithm needs enough examples in which known inputs (the As and Bs) led to a known outcome (the C) to learn the relationship between them. And while there has been an unacceptable number of mass shootings, there have not been nearly enough of them to train a responsible, reliable predictive tool.

Predictive models for something like stock prices are hard enough to build. Those models at least have millions of input and output data points, so reasonable predictive tools are possible. But even they are largely unreliable, because a wide variety of factors go into stock performance.


The myriad factors that lead to violence are even harder to quantify. And given the enormous gap between the abundance of input data and the handful of outcomes available to learn from, there’s little chance of finding a statistical model that works accurately.
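To make that concrete, here is a minimal sketch, using entirely synthetic data and invented numbers, of what happens when a standard classifier is trained on a large population that contains only a handful of positive examples: the model can look almost perfectly “accurate” while learning nothing useful at all.

```python
# Minimal sketch with synthetic, made-up data: a population of accounts with
# plenty of input features but almost no positive outcomes to learn from.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users = 100_000      # hypothetical accounts being screened
n_positive = 20        # hypothetical known violent actors among them

X = rng.normal(size=(n_users, 10))            # invented behavioral features
y = np.zeros(n_users, dtype=int)
y[rng.choice(n_users, size=n_positive, replace=False)] = 1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model scores roughly 99.98% "accuracy" simply by predicting "not violent"
# for everyone; with positives this rare, the metric is meaningless.
print("accuracy:", model.score(X_test, y_test))
print("positives it actually caught:",
      int((model.predict(X_test)[y_test == 1] == 1).sum()),
      "of", int(y_test.sum()))
```

With so few positive examples, there’s also no way to validate such a model: any apparent signal is indistinguishable from noise.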

Current algorithms generally work best when they look at a fixed set of circumstances and make predictions from those variables. That’s why AI can be good at chess, or even the board game Go. In fact, this ability to work within the confines of a well-defined problem is what allows AI to resolve that problem so quickly.

But working within that set is also AI’s biggest limitation. In John Brockman’s Possible Minds: 25 Ways of Looking at AI, David Deutsch observes that “a mere AI is incapable of having any such ideas, because the capacity for considering them has been designed out of its constitution.” AI is not creative, and it has a hard time predicting the unknown. That’s a big problem when it comes to predicting violent acts.

Protesters march against gun violence in New York City after two mass shootings.
Go Nakamura / Getty Images

“There is no single personality profile that can reliably predict who will resort to gun violence,” said Arthur C. Evans, Jr., Ph.D., CEO of the American Psychological Association, in a press statement. And when it comes to mass casualty incidents specifically, the sample size is so limited that it’s hard to make any clear statement about who is likely to act violently, or when.

Mass casualty incidents are often unexpected. Think about Columbine, Parkland, Sandy Hook, El Paso, Dayton, or any of the hundreds of other tragic shootings in America in recent years. Because they were unprecedented, horrific acts of violence, a predictive model would likely have been unable to “understand” the potential for them. Violence without precedent is nearly impossible for humans or machines to predict.

AI, in its current state, can’t save us because it isn’t predicting the future. It’s merely sharing likely outcomes based on historical precedent.

Philosophically, social media companies should not be used in place of government tools.

Mass casualties and black swan events are the result of a wide array of social, economic, psychological, and physiological factors. One of those factors may indeed be social media.

In Zucked, Roger McNamee describes the growing instability of our democratic, economic, and political systems because of the role social media now plays in our society. P.W. Singer makes a largely similar argument in LikeWar: Facebook, Twitter, and the like were created with the simple goal of connecting people online, but they’ve done little to protect their users. As a result, the platforms have been weaponized and used to incite violence, spreading extreme and malicious views.

McNamee calls the social media environment that reinforces people’s extreme views “filter bubbles,” saying, “users trust what they find on social media. They trust it because it appears to originate from friends and, thanks to filter bubbles, conforms to each user’s preexisting beliefs. Each user has his or her own Truman Show, tailored to press emotional buttons including those associated with fear and anger.”

Social media companies thrive when users spend more time on their platforms. And people spend more time on these sites when the content in their filter bubbles reinforces their existing ideas. In fact, social media is often where radicalization occurs.

The social media giants say radicalization is not their intent, of course. But radicalization is a studied result of the algorithms that recommend articles and videos to users. Recommendation engines surface content similar to what a user has already consumed, which creates the perception that a wide body of content supports any given point of view.
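Here is a toy sketch of that feedback loop, using invented article “embeddings” rather than any real platform’s data or algorithm: each recommendation is simply the unread item most similar to what the user has already consumed, so the feed drifts toward an ever-narrower cluster of like-minded content.

```python
# Toy recommender with invented data; not any real platform's system.
import numpy as np

rng = np.random.default_rng(1)

# 1,000 hypothetical articles, each represented by a unit vector whose
# direction loosely stands in for its viewpoint.
catalog = rng.normal(size=(1000, 32))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def recommend(seen):
    """Return the unread article most similar (cosine) to the user's history."""
    profile = catalog[seen].mean(axis=0)
    profile /= np.linalg.norm(profile)
    scores = catalog @ profile
    scores[seen] = -np.inf            # never re-recommend what was already read
    return int(np.argmax(scores))

# Start from a single article and follow the recommendations.
feed = [0]
for _ in range(15):
    feed.append(recommend(feed))

# The recommended feed is far more self-similar than a random one:
# the user keeps seeing variations on what they already consumed.
random_feed = rng.choice(1000, size=len(feed), replace=False)
print("avg similarity, recommended feed:",
      (catalog[feed] @ catalog[feed].T).mean().round(3))
print("avg similarity, random feed:     ",
      (catalog[random_feed] @ catalog[random_feed].T).mean().round(3))
```

Real recommendation systems are vastly more sophisticated, but the underlying incentive is the same: show people more of whatever already holds their attention.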

The American flag flies at half staff over the U.S. Capitol in memory of those killed in the recent mass shootings in El Paso, Texas, and Dayton, Ohio, on August 5, 2019, in Washington, D.C. Two gunmen killed 29 people and wounded dozens of others in two separate shootings within a 24-hour period over the weekend.
Win McNamee / Getty Images

“I just kept falling deeper and deeper into this, and it appealed to me because it made me feel a sense of belonging,” said former alt-right radical Caleb Cain in a recent New York Times article on YouTube consumption. “I was brainwashed.”

Social media is not directly responsible for mass casualty incidents. But it has been shown to reinforce violent inclinations through gamified and filtered content. And while the platforms reliably offer a path toward extreme behavior, the companies behind them have made little effort to counteract it.

“We should be extremely wary of leaving it up to platforms to decide who is and is not a violent shooter,” says Eva Galperin, Director of Cybersecurity at the Electronic Frontier Foundation. “This is a potentially dangerous idea.”

Technology platforms have a singular interest in creating shareholder value. They aren’t arms of the government and thus shouldn’t be assumed to have the interests of non-shareholders at heart. And even if they wanted to, they don’t have the right tools in place to be able to understand, predict, and act on this information.

We should hold tech titans accountable for fixing their broken systems, but we shouldn’t trust them to act as extensions of our police forces. This is especially important given the hacks and election interference carried out through Facebook in the past few years.

If we put our trust in a non-governmental platform to assist in policing and it gets corrupted by a foreign actor, who is to blame, and who is to be trusted? To that point, how would we revoke that power, if warranted, once it has been handed over?

Frankly, this isn’t Minority Report.

“Putting all of one’s faith in an algorithm is a very bad idea, [as] they’re frequently inaccurate,” says Galperin, who cites recent tests by the ACLU and others in which AI-based facial recognition software misidentified members of Congress as criminals. “The idea that everything can be magically solved with an algorithm is one I’m deeply skeptical of.”

If we suspend disbelief for a second and imagine that AI could flag violent actors and that social media platforms could detect them and alert the government, we’re in real danger of Minority Report status. Accusing someone of a crime they might commit is dangerous. It’s easy to get wrong, and it’s often unprovable.
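Some back-of-the-envelope arithmetic, using invented numbers purely for illustration, shows how badly this goes wrong. Even a flagging system that were right 99 percent of the time, applied to a population in which actual attackers are vanishingly rare, would bury the handful of true hits under millions of false accusations.

```python
# Hypothetical figures for illustration only; none of these numbers come
# from real data.
population   = 250_000_000   # social media accounts screened
true_threats = 50            # actual would-be attackers among them
sensitivity  = 0.99          # share of real threats the system flags
specificity  = 0.99          # share of innocent users it correctly clears

true_positives  = true_threats * sensitivity
false_positives = (population - true_threats) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"correctly flagged:  {true_positives:,.0f}")
print(f"falsely flagged:    {false_positives:,.0f}")
print(f"chance a flagged account is a real threat: {precision:.4%}")
```

Under even these generous assumptions, a flagged account has roughly a 1-in-50,000 chance of belonging to an actual threat; the rest are innocent people flagged for a crime they will never commit.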

But we can hold social media platforms accountable for human-centric development.

Social media platforms have a role to play in curbing this behavior, but doing so requires first recognizing the role they play in society at large. The foundations of our society now rest, on some level, on our social media systems. Yet these are systems that don’t take user privacy, data, or preferences seriously. As users, we must demand more of social media companies and educate the public on how to be better consumers.

A growing number of Silicon Valley insiders and technology theorists believe that what’s needed is legislation that improves social media platforms and protects American citizens from harm. That legislation wouldn’t call for the creation of an all-seeing AI.

"We should be extremely wary of leaving it up to platforms to decide who is and is not a violent shooter."

Instead, it’s tools that educate the American consumer about fake news. It’s tools that let users decide whether they want their feeds delivered in filter bubbles or the old-fashioned way (a chronological feed of all posts). It’s education and help for those suffering from bullying, social media addiction, and depression caused by social media consumption.

There is no simple solution to stop mass shootings. Putting our trust in social media companies to develop an algorithm that can predict violence before it happens is simply unfeasible and potentially extremely dangerous. We often imbue technology, and the respective companies associated with technology, with too much power. We believe that they can save us. In truth, we're the only people who can save us.

We must accept the limits of technology and work with it to find solutions for a growing number of societal problems. Private companies must help address our biggest social issues, but society must be aware of how companies can help, what they can actually offer, and their motivations. Only then should they be endowed with power that would be hard to revoke.
