Startup OpenAI spent years developing an AI that could play the classic 5v5 game Dota 2.
In a remarkable breakthrough for artificial intelligence, a team of AIs has defeated the world champions of the competitive video game Dota 2. While this victory over humans isn’t the first for a game-playing AI, given the success of software in playing Go and poker, the Dota-playing AI had to master the art of teamwork, coordinating with both other AIs and human players.
When DeepMind’s Go-playing AI, AlphaGo, defeated the world Go champion, that victory was remarkable because of the sheer number of possible moves and combinations in the game. Go is so complex that even a supercomputer can’t calculate good moves by brute force. Instead, AlphaGo had to rely on intuition—or at least the machine learning equivalent—learning the game from scratch and then inventing moves humans never would have considered. But Dota 2 is a different kind of challenge for AI, which tends to struggle with concepts such as abstract reasoning and teamwork, qualities the game has in spades.
In Dota 2, ten players form two teams of five that fight to take objectives on the map. Neither team has full vision of everything going on at all times, so players must work together to be victorious. The same qualities that make Dota 2 a challenging game for many humans also make it an ideal testing ground for next-gen AI.
Startup OpenAI has been developing a Dota-playing AI for a few years now. This weekend, its team of AIs faced the ultimate test: a five-on-five, best-of-three series against OG, the reigning Dota 2 world champions. In an exhibition match in San Francisco on Saturday, OpenAI claimed victory with a clean sweep.
Although both games were close, the human players were eventually outmaneuvered. In the human players’ defense, however, OpenAI limited the complexity of the game somewhat, banning several strategies, including some that the members of OG like to use. In that sense, OG came into the match with a handicap.
Still, this competition isn’t really about who won and who lost. OpenAI’s goal is to build an artificial intelligence that can make judgment calls based on incomplete data and cooperate with a stranger—the kinds of things humans do all day but which are terribly difficult to teach a machine. Each of OpenAI’s five bots worked independently, so they might as well have been playing with perfect strangers.
In fact, in some of the matches they were. During another match at the exhibition, OpenAI mixed up the sides, pitting two teams, each made up of two humans and three AIs, against each other. OpenAI says even its own researchers were surprised at how well the bots were able to work with humans they’d never met.
OpenAI is done with Dota 2 for now, but the lessons learned could be used to design collaborative AI for all sorts of applications. The AIs the company builds will likely have to work alongside humans someday, and they’ll be well-equipped to do so.