Elon Musk’s research group opens ‘AI gym’ to train robots not to destroy the human race

OpenAI, which will run the gym, was established in December to make sure that artificial intelligence is used to ‘advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return’

Elon Musk’s AI research group has opened a “gym” for robots, to ensure that they can be properly tested.

The new project is an attempt to bring together training for artificially intelligent machines, allowing them to be fairly compared with each other – and helping to avoid problematic results.

The gym has been launched as the first project from OpenAI, a research group funded by backers including Mr Musk and a range of other tech leaders. The group launched in December and aims to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.

The gym’s primary function is establishing benchmarks for artificially intelligent systems, so that they can be compared against one another. That is intended as a way of monitoring the progress of such systems – and hopefully ensuring that they don’t go wrong, or have the kinds of effects that Mr Musk and others have repeatedly warned about.

Programmers will be able to submit their AIs to the gym, which will run them through a range of tests and see how they perform. It is not simply one test – like the game Go, which has been used before – but a range of different trials designed to test the artificial intelligence more fully.

The problem with specific trials, like DeepMind’s success in Go, is that systems can be built specifically for any given test. They might then be good at only that one challenge – making it difficult to compare the capabilities of different systems.
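The Gym's actual interface lets an agent interact with an environment step by step and be scored on the reward it collects. The sketch below is a simplified stand-in, not the real Gym API: it uses a toy guess-the-number environment and a random agent, built only from Python's standard library, to illustrate how an average-reward benchmark score can be computed for any agent.

```python
import random

random.seed(7)  # reproducible scores for this illustration

# Toy stand-in for a Gym-style environment: guess a digit from 0-9.
# The real OpenAI Gym exposes a similar loop via env.reset() and env.step().
class ToyEnv:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.target = self.rng.randint(0, 9)
        return 0  # initial observation (uninformative here)

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        done = True  # one-shot episode
        return 0, reward, done

def evaluate(agent, env, episodes=1000):
    """Average reward over many episodes -- the benchmark score."""
    total = 0.0
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            obs, reward, done = env.step(agent(obs))
            total += reward
    return total / episodes

# A baseline agent that guesses uniformly at random.
random_agent = lambda obs: random.randint(0, 9)
score = evaluate(random_agent, ToyEnv())
print(round(score, 2))  # roughly 0.1 for a uniform random guesser
```

Because every agent is scored the same way on the same environments, scores like this one can be published and compared directly – the shared-benchmark idea the article describes.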

But in the OpenAI Gym, researchers will theoretically be able to share their scores and compare them with others. That will allow the research to be conducted in public and in comparable ways, hopefully avoiding the potential for AI to go wrong.
