This New AI Plays Space Invaders, And May Put Game Testers Out Of A Job

An artificial agent that combines machine learning techniques with “biologically inspired” mechanisms can learn how to play 49 classic arcade video games when given only minimal background information.

This discovery paves the way to building artificial intelligence systems that excel at learning a variety of challenging tasks from scratch, including QA testing.

“Reinforcement learning” describes how artificial agents interact with an environment, selecting actions that maximise some notion of cumulative reward. However, applying reinforcement learning to complex, real-world-like situations has proved difficult, and its applicability has tended to be limited to domains in which useful features can be handcrafted.
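
The idea can be sketched with tabular Q-learning, the classical algorithm the DQN builds on. This is an illustrative toy (a five-state corridor of our own invention, not anything from the paper): the agent keeps a table of action values and nudges each entry toward the reward it saw plus the discounted value of the best next action. The DQN described below replaces this table with a deep neural network reading raw pixels.

```python
import random

random.seed(0)

# Toy 5-state corridor: the agent starts at state 0 and earns reward 1
# for reaching state 4. Actions: 0 = step left, 1 = step right.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

# Q[s][a] estimates the long-run reward of taking action a in state s.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "right" should be the preferred action in every
# non-terminal state.
policy = [max(range(N_ACTIONS), key=lambda i: Q[s][i])
          for s in range(N_STATES - 1)]
print(policy)
```

The table works only because this toy has five states; an Atari screen has far too many possible frames to enumerate, which is exactly the gap the DQN's neural network closes.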

Demis Hassabis, Vlad Mnih, Koray Kavukcuoglu, David Silver and colleagues developed a novel artificial agent called a deep Q-network (DQN), which combines reinforcement learning with a class of artificial neural network known as a deep neural network. They tested the DQN on 49 classic Atari 2600 games, including Space Invaders and Breakout.

The agent was given only pixel and score information for each game, but performed at a level comparable to that of a professional human games tester — achieving more than 75 per cent of the human score on more than half the games. The DQN also outperformed the best existing reinforcement learning agents on 43 games.

The games at which the DQN excelled were highly varied in nature, from side-scrolling shooters to boxing and 3D car-racing games, demonstrating that a single architecture can learn successful strategies in a range of different environments with only minimal prior knowledge.

The work highlights how state-of-the-art machine learning techniques can be combined with biologically inspired mechanisms to create agents that are capable of learning to master a diverse array of challenging tasks. And that human games testers should count themselves lucky they have a more diverse skill set. At the very least, we can all look forward to a reliable co-op buddy in the future.


  • Give the machines all the banal tasks like playing through a craptonne of times with every combination and permutation of options while the human testers can do all the creative play to try and break things. Win win 😛

  • Edit: I did misread and the QA aspect was more about learning the gameplay interactions and paths to run each time to test a piece of functionality. Or at least I’m assuming that’s the implication.

    Original comment:
    Umm, unless I’m misreading the article, that’s nothing to do with testing. Being able to play a game well is completely separate to being able to determine if a game is functioning according to design and meets a wide range of usability and subjective quality criteria.

  • As long as it doesn’t learn how little testers get paid, it’ll be fine and won’t Skynet out on us at all…
