Really, it's just a matter of time before Google's "DeepMind" artificial intelligence learns how to wreck us all.
New Scientist reports that a Google-owned AI has learned how to beat 49 Atari games just by looking at the screen and experimenting -- no instructions, no rulebook. So it's not, like, out there playing the newest Call of Duty with an army of 13-year-olds or anything, but still! It's impressive, in that scary "how long until Skynet?" sort of way.
What makes this more fascinating is that the AI got so good at some of the games that it beat top human scores -- all without ever being told the rules. Not only that, it organically discovered the kinds of advanced techniques that skilled human players use, like tunneling the ball behind the wall of bricks in Breakout.
Here's New Scientist, describing how it works:
The software isn't told the rules of the game -- instead it uses an algorithm called a deep neural network to examine the state of the game and figure out which actions produce the highest total score. ... Deep neural networks are often used for image recognition problems, but DeepMind combined theirs with another technique called reinforcement learning, which rewards the system for taking certain actions, just as a human player is rewarded with a higher score when playing a video game correctly.
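For a rough sense of how that reward-driven learning works, here's a toy sketch. Instead of Atari and a deep neural network, it uses a five-cell corridor "game" and a plain lookup table of scores (tabular Q-learning) -- but the core loop is the same idea: try actions, see what score comes back, and reinforce the actions that led to it. Everything here (the corridor, the numbers, the names) is our own illustration, not DeepMind's actual code.

```python
import random

# A toy stand-in for an Atari game: a 5-cell corridor where the "win"
# (reward +1) sits at the right end. This setup is purely illustrative.
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action; reward only arrives at the goal, like a score."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: the system's estimate of future score for each (state, action).
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Mostly pick the action with the best estimated score;
        # occasionally (or on a tie) explore a random one.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, ACTIONS[a])
        # The reinforcement-learning update: nudge the estimate toward
        # the reward just received plus the discounted best future value.
        best_next = max(Q[next_state])
        Q[state][a] += alpha * (reward + gamma * best_next - Q[state][a])
        state = next_state

# After training, the greedy policy heads straight for the goal:
# action 1 ("move right") in every cell before it.
policy = [0 if q[0] > q[1] else 1 for q in Q]
```

DeepMind's system swaps the lookup table for a deep neural network (so it can cope with raw screen pixels instead of five tidy states), but the reward-and-update loop above is the reinforcement-learning part New Scientist is describing.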
Similar tech has existed for a while now, but DeepMind's system can handle far more data than its predecessors. How these advances will be used remains to be seen, but people are already dreaming up potential scenarios.
You can watch how the AI improves over time in this video by NPG Press. Initially, it's not very good -- it keeps letting the ball slip past the paddle. But by the end, the AI is a total pro. It's incredible: