Ever since John McCarthy and several collaborators coined the term “artificial intelligence” in 1955, games have served as both a training ground and a benchmark for AI research. At the same time, in many cultures around the world, the ability to play certain games such as chess or Go has long been considered one of the hallmarks of human intelligence. So when computer science researchers started thinking about building systems that mimic human behavior, games emerged as a natural “playground” environment.
Over the last decade, deep learning has driven a resurgence in AI research, and games have returned to the spotlight. Perhaps most significantly, in 2016 AlphaGo, an autonomous Go-playing agent built by DeepMind (an Alphabet subsidiary), defeated Lee Sedol, one of the world's strongest players of the traditional board game Go. Since then, the DeepMind team has built agents that challenge top competitors in a variety of other games, including StarCraft II.
As games have become a prominent arena for AI, Google Cloud and Unity decided to collaborate on a game-focused AI competition: the Obstacle Tower Challenge. Competitors create advanced AI agents in a game environment. Each agent is an AI program that takes as input the rendered image of the simulation, including obstacles, walls, and the main character's avatar. It then outputs the next action the character should take in order to solve a puzzle or advance to the next level. The Unity engine runs the logic and graphics for the environment, which operates very much like a video game.
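The image-in, action-out loop described above can be sketched as follows. This is a minimal illustration, not the competition's actual API: the real Obstacle Tower environment exposes a similar Gym-style interface, but the class names, observation shape, and action space below are illustrative assumptions, and the environment here is a self-contained stub rather than the Unity simulation.

```python
import numpy as np

class StubEnv:
    """Toy stand-in for a game environment: emits random RGB frames."""

    def __init__(self, height=84, width=84, n_actions=4):
        self.n_actions = n_actions
        self._shape = (height, width, 3)
        self._steps = 0

    def reset(self):
        """Start a new episode and return the first observation (an image)."""
        self._steps = 0
        return np.zeros(self._shape, dtype=np.uint8)

    def step(self, action):
        """Apply one action; return (observation, reward, done, info)."""
        assert 0 <= action < self.n_actions
        self._steps += 1
        obs = np.random.randint(0, 256, self._shape, dtype=np.uint8)
        reward = 1.0 if action == 0 else 0.0  # placeholder reward signal
        done = self._steps >= 10              # short fixed-length episode
        return obs, reward, done, {}

class RandomAgent:
    """Baseline agent: maps an image observation to a random action."""

    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.rng = np.random.default_rng(seed)

    def act(self, observation):
        # A trained agent would feed the pixels through a neural network;
        # here we simply sample an action uniformly.
        return int(self.rng.integers(self.n_actions))

def run_episode(env, agent):
    """Run one episode of the observation -> action loop."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = agent.act(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward

env = StubEnv()
agent = RandomAgent(env.n_actions)
print(run_episode(env, agent))
```

Contestants' models replace the `RandomAgent` with a learned policy, but the surrounding loop is essentially the same: observe pixels, choose an action, repeat until the episode ends.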
Unity launched the first iteration of the Obstacle Tower Challenge in February, and the reception from the AI research community has been very positive. The competition has received more than 2,000 entries from several hundred teams around the world, including both established research institutions and collegiate student teams. The 50 highest-scoring teams will receive an award sponsored by Google Cloud and advance to the second round.
Completing the first round was a significant milestone, since teams had to clear a fairly difficult hurdle: advancing past several levels of increasing difficulty. None of these levels were available to the researchers or their agents during training, so the agents had to learn complex behaviors and generalize them to previously unseen situations.
The contest's second round features a set of additional levels. These new three-dimensional environments incorporate brand-new puzzles and graphical elements that will force contestant research teams to develop more sophisticated machine learning models. The new obstacles may stymie many of the agents that passed the levels of the first round.
How Google Cloud can help
Developing complex game agents is a computationally demanding task, which is why we hope that the availability of Cloud credits will help participating teams. Google Cloud offers any developer around the world the same infrastructure that trained AlphaGo's world-class machine learning models. In particular, we recently announced the availability of Cloud TPU Pods; for more information, you can read this blog post.
All of us at Google Cloud AI would like to congratulate the first batch of successful contestants of the Unity AI challenge, and we wish them the best of luck as they enter the second phase. We are excited to learn from the winning strategies.