Google Test Of AI's Killer Instinct Shows We Should Be Very Careful

If climate change, nuclear weapons or Donald Trump don't kill us first, there's always artificial intelligence just waiting in the wings. It has long been a worry that once AI gains a certain level of autonomy, it will see no use for humans or even perceive them as a threat. A new study by Google's DeepMind lab may or may not ease those fears.

Image: MGM

The researchers at DeepMind have been working with two games to test whether neural networks are more inclined to compete or to cooperate. They hope this research could lead to AI that is better at working with other AI in situations with imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of "tagging" the other with a laser blast that would temporarily remove them from the game.
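
If you're curious what those rules look like in practice, here is a rough Python sketch of the incentive structure just described. To be clear, this is not DeepMind's code (their agents learn from raw pixels with deep Q-networks), and the grid size, tag timeout and respawn rate below are made-up numbers purely for illustration.

```python
# Rough sketch of the Gathering-style rules described above. Illustrative only;
# the numbers (grid size, timeout, respawn rate) are assumptions, not DeepMind's.

import random

TAG_TIMEOUT = 25   # steps an agent sits out after being tagged (assumed value)
GRID_SIZE = 16     # assumed size of the 2D playing field

class GatheringSketch:
    def __init__(self, apple_respawn_rate=0.05):
        # A lower respawn rate means scarcer apples, the condition under
        # which the agents started tagging each other more often.
        self.apple_respawn_rate = apple_respawn_rate
        self.apples = set()                      # grid cells holding an apple
        self.timeout = {"red": 0, "blue": 0}     # steps left on the sidelines
        self.score = {"red": 0, "blue": 0}       # apples collected so far

    def step(self, positions, actions):
        """positions: agent -> (x, y) cell; actions: agent -> 'move' or 'tag'."""
        for agent, action in actions.items():
            if self.timeout[agent] > 0:          # tagged agents are frozen out
                self.timeout[agent] -= 1
                continue
            if action == "tag":
                # Simplification: a tag always hits the rival.
                other = "blue" if agent == "red" else "red"
                self.timeout[other] = TAG_TIMEOUT
            elif positions[agent] in self.apples:
                self.apples.discard(positions[agent])
                self.score[agent] += 1           # +1 reward per apple gathered

        if random.random() < self.apple_respawn_rate:
            self.apples.add((random.randrange(GRID_SIZE),
                             random.randrange(GRID_SIZE)))
        return self.score
```

The sketch is only meant to show the trade-off: tagging costs a turn that could have been spent gathering, but it buys uncontested access to whatever respawns, and that trade gets more attractive as apples get rarer.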

The game was run thousands of times, and the researchers found that red and blue were content to simply gather apples while they were abundant. But as the little green dots became more scarce, the duelling agents were more likely to light each other up with some ray gun blasts to get ahead. The accompanying video doesn't really teach us much, but it's cool to look at.

Using a smaller network, the researchers found a greater likelihood of co-existence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.

In the second, more optimistic game, called Wolfpack, the agents were tasked with playing "wolves" attempting to capture "prey". Greater rewards were offered when the wolves were in close proximity to each other during a successful capture. This incentivised the agents to work together rather than heading off to the other side of the screen to pull a lone wolf attack against the prey. The larger network was much quicker to understand that in this situation cooperation was the optimal way to complete the task.
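
Again, a rough sketch (not DeepMind's actual implementation) of the reward rule that makes cooperation pay off. The radius and payoff numbers below are invented for illustration; the point is simply that a capture made with a packmate nearby is worth more than a lone-wolf capture.

```python
# Illustrative sketch of the Wolfpack reward idea: both wolves get a bigger
# payoff when they are near each other at the moment of capture.
# The radius and reward values are assumptions, not DeepMind's numbers.

import math

CAPTURE_RADIUS = 2.0   # how close a wolf must be to the prey to capture it
PACK_RADIUS = 4.0      # how close the second wolf must be to share the bonus
LONE_REWARD = 1.0
PACK_REWARD = 5.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def capture_rewards(wolf_a, wolf_b, prey):
    """Return (reward_a, reward_b) if either wolf captures the prey this step."""
    a_captures = dist(wolf_a, prey) <= CAPTURE_RADIUS
    b_captures = dist(wolf_b, prey) <= CAPTURE_RADIUS
    if not (a_captures or b_captures):
        return 0.0, 0.0
    # If both wolves are within the pack radius of the prey, both share the
    # larger reward; a lone capture pays less, so cooperation wins out.
    if dist(wolf_a, prey) <= PACK_RADIUS and dist(wolf_b, prey) <= PACK_RADIUS:
        return PACK_REWARD, PACK_REWARD
    return (LONE_REWARD if a_captures else 0.0,
            LONE_REWARD if b_captures else 0.0)

# Example: hunting together beats a lone-wolf attack.
print(capture_rewards((0, 0), (1, 1), (0, 1)))    # both close -> (5.0, 5.0)
print(capture_rewards((0, 0), (9, 9), (0, 1)))    # lone capture -> (1.0, 0.0)
```

With numbers like these, a lone capture pays 1 while a joint capture pays 5 to each wolf, so waiting for your packmate is the better policy, which is essentially what the larger network worked out.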

While all of that might seem obvious, this is vital research for the future of AI. More and more complex scenarios will be needed to understand how neural networks learn based on incentives as well as how they react when they're missing information.

The most practical short-term application of the research is to "be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet — all of which depend on our continued cooperation."

For now, DeepMind's research is focused on games with strict rules, like the ones above and Go, the strategy game at which it famously beat the world's top champion. But it has recently partnered with Blizzard to start learning StarCraft II, a more complex game in which reading an opponent's motivations can be quite tricky.

Joel Leibo, the lead author of the paper, tells Bloomberg, "Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals."

Let's just be glad the DeepMind team is taking things very slowly — methodically learning what does and does not motivate AI to start blasting everyone around it.

[DeepMind Blog via Bloomberg]


Comments

    So they're making AI greedy. Isn't it greed that got us where we are now? Maybe they should be making socialist AI.

      I wouldn't say that they are making it greedy. They were given a goal, "Get as many apples for yourself as possible"; those are the rules of the game, so they did what they had to do in the same way that a human would (and does). If you merged the 2 scenarios together, so the goal was to *generate* as many apples as possible and they got bonuses when they worked near each other, you would see them do that, just like we would.

    "This incentivised the agents to work together rather than heading off to the other side of the screen to pull a lone wolf attack against the prey." So what we're proving / researching here is that different AI agents are already smart enough to figure out that they should cooperate in removing the 'prey' (AKA humans).

    It's going to have to be a very effective 'OFF' switch.

    What I don't get about all this paranoid krap is why people think we won't be able to design AIs that are anything other than our servants. We have full control over the parameters of their construction; we can decide what kind of consciousness they will have - benevolent or otherwise.

      Use your imagination - perhaps they figure out how to reprogram themselves or manufacture other AI that don't have those limitations or perhaps even some humans decide to reprogram them without these limitations (which I think is quite likely at some point). Perhaps their coding or hardware/firmware has bugs in it.

      We may have control over the initial parameters but if they are true AI and can learn and adapt, who knows what could happen?
