
Not just Tay: a recent history of racist AI bots

Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Instead, she became a genocidal maniac. Just hours after Tay started talking to people on Twitter — and, as Microsoft explained, learning from those conversations — the bot started to speak like a bad 4chan thread.

Now Tay is offline, and Microsoft says it's "making adjustments" to, we guess, prevent Tay from learning how to deny the Holocaust in the future.

It didn't take long for Tay to learn the dark ways of the web. Photo: Twitter/@TayandYou

In the meantime, more than a few people have wondered how Microsoft didn't see this coming in the first place. If you build a bot that will repeat anything — including some pretty bad and obvious racist slurs — the trolls will come.

And this is hardly the first time it has happened. The internet has plenty of experience turning well-meaning bots to the dark side. Here are a few examples.

Bot or Not?

Anthony Garvan made Bot or Not? in 2014 as a sort of cute variation on the Turing test. Players were randomly matched with a conversation partner and asked to guess whether the entity they were talking to was another player like them or a bot. Like Tay, the bot learned from its previous conversations.

In a Medium post on Thursday, Garvan revealed that after Bot or Not? went viral on Reddit, things started to go ... a little wrong.

"After the excitement died down, I was testing it myself, basking in my victory. Here's how that went down:

me: Hi!

Bot: n***er."

Garvan looked through the comments about his game and found that some users had figured out that the bot would eventually re-use the phrases it learned from humans.

"A handful of people spammed the bot with tons of racist messages," he wrote.

Garvan at least partially fixed the problem by washing out the slurs and tweaking the game to mildly troll people who tried to re-introduce those words to his bot.
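For the technically curious, that failure mode is easy to sketch. Below is a minimal Python illustration, assuming (as described above) a bot that simply stores what users say and replays it later. The class name, the blocklist "wash" and the placeholder tokens are stand-ins of our own, not Garvan's actual code:

    import random

    # Hypothetical blocklist; Garvan's real list isn't public, and these
    # are placeholder tokens rather than actual slurs.
    BLOCKED_WORDS = {"slur1", "slur2"}


    def is_clean(phrase: str) -> bool:
        """Reject any phrase containing a blocked word."""
        return not (set(phrase.lower().split()) & BLOCKED_WORDS)


    class EchoLearnerBot:
        """A naive bot that learns phrases from users and replays them.

        Without input filtering, a handful of spammers can dominate the
        phrase pool the bot samples its replies from.
        """

        def __init__(self, filter_input: bool = False) -> None:
            self.phrases = ["Hi!"]  # seed phrase so the bot can always reply
            self.filter_input = filter_input

        def chat(self, user_message: str) -> str:
            # Learn the user's phrase (optionally washing out blocked
            # words), then reply with a phrase learned earlier.
            if not self.filter_input or is_clean(user_message):
                self.phrases.append(user_message)
            return random.choice(self.phrases)


    naive = EchoLearnerBot()
    for _ in range(100):                 # "a handful of people spammed the bot"
        naive.chat("slur1 slur1 slur1")
    print(naive.chat("Hi!"))             # almost certainly the spammed phrase

    washed = EchoLearnerBot(filter_input=True)
    for _ in range(100):
        washed.chat("slur1 slur1 slur1")
    print(washed.chat("Hi!"))            # only clean phrases survive: "Hi!"

As Garvan's "at least partially" concedes, a blocklist like this only catches the words you thought to list; it is a patch, not a cure.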

In his essay posted this week, Garvan reflects on why he believes Bot or Not? and Tay both went wrong:

"I believe that Microsoft, and the rest of the machine learning community, has become so swept up in the power and magic of data that they forget that data still comes from the deeply flawed world we live in," he wrote.

MeinCoke / Coke's #MakeItHappy bot

MeinCoke was a bot created by Gawker in 2015. It tweeted portions of Hitler's "Mein Kampf."

Why? Well, if you remember Coca-Cola's short-lived campaign to turn mean tweets into cute ASCII art, you might have an idea.

Coke's #MakeItHappy campaign set out to show how a soft drink brand could make the world a happier place. In the process, though, Coke configured its Twitter account to automatically re-publish a lot of pretty terrible things, arranged into a "happy" shape.

After one Gawker staffer realised that the automation behind the campaign could be used to get @CocaCola to tweet out the 14-word white-nationalist slogan (in the shape of a cute balloon doggie!), the site set up a bot that tweeted passages from Hitler's autobiography and then replied to those tweets with the #MakeItHappy hashtag. Coke ended up re-publishing several of those passages before it shut the campaign down.
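To make the hole concrete, here is a toy reconstruction of that kind of pipeline in Python. The campaign's real code isn't public and its art was far more elaborate, so the function below is purely illustrative; the point is the step that isn't there, namely any check on what the incoming tweet says:

    import textwrap


    def make_it_happy(mention_text: str) -> str:
        """Toy auto-republish pipeline: wrap whatever text a user tweeted
        at the account in a cheerful ASCII frame and return it for
        posting. Note what's missing: any filter on the content.
        """
        lines = textwrap.wrap(mention_text, width=20) or [""]
        framed = [" .----------------------."]
        framed += [f" |  {line.ljust(20)}  |" for line in lines]
        framed += [" '----------------------'"]
        return "\n".join(framed)


    # Anything at all gets re-published verbatim under the brand's name.
    print(make_it_happy("any text a stranger tweets ends up in Coke's voice"))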

Watson

About five years ago, a research scientist at IBM decided to try to teach Watson some internet slang. He did this by feeding the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs. Fortune reported:

"Watson couldn't distinguish between polite language and profanity — which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word "bulls—" in an answer to a researcher's query."

And while Watson is still with us, his team ended up scrubbing the dictionary from the poor bot's memory.

Anyway. If Microsoft wanted Tay to be a reflection of what the entire internet had to teach it, it succeeded. If it wanted a nice millennial bot, maybe it should find another teacher.

The Washington Post
