AI defeated humans at StarCraft II. Here’s why it matters.



This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.

DeepMind, the artificial intelligence company best known for developing AlphaGo and AlphaZero, revealed on Thursday that it had created an AI that could play the famous real-time strategy game StarCraft II well enough to beat some of the best human players in the world. Called AlphaStar, the AI clinched a decisive victory against two professional players in a series of matches played at DeepMind’s headquarters.

Games have historically been a testbed for evaluating AI algorithms. Over the years, AI researchers have mastered board games such as checkers, chess and the ancient game of Go. More recently, video games have become an area of focus, and there has already been progress in several single- and multi-player titles such as Mario, Quake, Dota 2 and several Atari games. In this regard, the mastering of StarCraft II is a milestone for several reasons.

Here’s what we know about AlphaStar and why its achievement is important.

The challenges of teaching AI to play StarCraft II

In 2016, DeepMind’s AlphaGo AI beat the world champion in Go, a Chinese board game that scientists thought would be beyond the capacity of artificial intelligence for decades to come. In the following year, the company repurposed the same AI to learn chess and shogi (Japanese chess) with very little input from humans.

But all of those games have two common traits that limited their complexity: They are turn-based and give players perfect information. In chess and Go, each player waits for the other to finish before making their move. Also, every player can always see the entire board and every piece at all times.

In contrast, StarCraft II is a real-time strategy game, which means players must make decisions simultaneously. They also have imperfect information: their view of the game map is limited to the areas their units have previously discovered. Even after revealing unknown parts of the map, they can only see activity in areas where their units are present. For instance, a player won’t be able to see what’s going on in their opponent’s base unless their units are actively attacking it. It also means an enemy can sneak up on you through areas where you don’t have visibility.

StarCraft also provides a richer and more complex environment. Contrary to board games, where every square is treated equally, in games like StarCraft the movement and performance of units change based on factors such as terrain type and elevation. The map is also much larger than an 8×8 or 19×19 grid, which means brute-force methods of predicting every possible move and picking the best one are simply inefficient.

All these and other subtle elements have made StarCraft a huge challenge for AI algorithms. As opposed to video games such as Quake and Mario, where quick reflexes and precision play the key role in winning, StarCraft requires short- and long-term planning as well as well-timed actions.

How did DeepMind’s AI “learn” to play StarCraft II?

Researchers and developers at DeepMind watch on as AlphaStar clashes against one of the best StarCraft II players in the world

Older game-playing AIs use classical rule-based approaches, which means human engineers must manually code the rules of the game into the software. An example is Stockfish, an open-source chess-playing bot developed and enhanced by hundreds of developers over several years.

While manually embedding gameplay and tactical rules into AI might work for simpler games, it simply won’t work for a game as complex as StarCraft. Like many other contemporary AI applications, AlphaStar uses deep learning to learn to play StarCraft II. In deep learning, an AI develops its behavior by analyzing and comparing a large number of examples. In the case of AlphaStar, the AI was “trained” by providing it with data from numerous games.

According to DeepMind, AlphaStar uses a combination of “supervised learning” and “reinforcement learning.” Supervised learning is the type of deep learning where models train on samples prepared by human users. The DeepMind team initially provided AlphaStar with anonymized human game data released by Blizzard. By processing the data, the AI was able to imitate the basic micro- and macro-strategies StarCraft players employ. According to DeepMind, this initial training placed AlphaStar on par with StarCraft’s most difficult built-in AI.
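At its core, this supervised phase amounts to imitation: learn to predict the action a human took given a game observation. The following toy sketch illustrates the idea in Python. Everything here is illustrative (the observations, actions and the frequency-counting “model” are made up stand-ins); AlphaStar’s real model is a deep neural network trained on Blizzard replay data.

```python
# Toy imitation learning: learn (observation -> action) mappings from
# recorded human games by counting which action humans chose most often.
# A crude stand-in for a neural network trained on replay data.

def train_imitator(replays):
    """Count human action choices per observation, then act like the majority did."""
    counts = {}
    for obs, action in replays:
        counts.setdefault(obs, {}).setdefault(action, 0)
        counts[obs][action] += 1
    # Policy: pick the most frequent human action for each observation
    return {obs: max(acts, key=acts.get) for obs, acts in counts.items()}

# Hypothetical replay data: (what the player saw, what they did)
replays = [("enemy_rush", "build_defense"),
           ("enemy_rush", "build_defense"),
           ("enemy_rush", "expand"),
           ("early_game", "gather_minerals")]

policy = train_imitator(replays)
print(policy["enemy_rush"])   # -> build_defense (the majority human choice)
```

The limitation is visible even in the toy: the imitator can never be better than the humans it copies, which is why DeepMind used this phase only as a starting point.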

With the basics learned, AlphaStar engaged in reinforcement learning to hone its skills. In reinforcement learning, AI models develop their behavior without help from human data. Many consider reinforcement learning the holy grail of contemporary artificial intelligence, because quality human input and labor is scarce, expensive and slow, and it entails ethical and privacy concerns. (That’s why DeepMind explicitly states that the game data used for supervised learning was “anonymized,” meaning information that would reveal users’ identities was removed.)

Reinforcement learning overcomes the shortcomings of supervised learning by letting the AI generate its own training data. DeepMind created an AI league in which several instances of AlphaStar tirelessly played against each other in rapid succession to develop their skills. This is the equivalent of several professional StarCraft players competing in super fast forward.
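The league idea can be sketched as a population of agents that repeatedly play each other, with losers forced to try something new. This toy version (a rock-paper-scissors strategy game and random exploration are my simplifications, not DeepMind’s actual population-based training) shows how self-play generates experience with no human data at all:

```python
# Toy "league" self-play: agents play each other; a loser abandons its
# strategy and explores a new one. Purely illustrative of the concept.
import random

random.seed(0)
STRATEGIES = ["rush", "turtle", "expand"]
# Toy non-transitive matchups, rock-paper-scissors style
BEATS = {"rush": "expand", "expand": "turtle", "turtle": "rush"}

agents = [{"strategy": random.choice(STRATEGIES), "wins": 0} for _ in range(4)]

for game in range(1000):
    a, b = random.sample(agents, 2)
    if BEATS[a["strategy"]] == b["strategy"]:
        a["wins"] += 1
        b["strategy"] = random.choice(STRATEGIES)  # loser explores
    elif BEATS[b["strategy"]] == a["strategy"]:
        b["wins"] += 1
        a["strategy"] = random.choice(STRATEGIES)  # loser explores

best = max(agents, key=lambda ag: ag["wins"])
print("most successful agent:", best["strategy"], "with", best["wins"], "wins")
```

Note that because the matchups are non-transitive, no single strategy dominates forever; that is exactly why DeepMind kept a whole league rather than a single self-improving agent.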

The final AlphaStar consolidated the most successful strategies from all the AI agents in the league into a final model.

The difference between AI and humans playing StarCraft


“I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected,” said professional StarCraft II player Grzegorz Komincz (MaNa), who lost 5-0 to AlphaStar.

Professional StarCraft commentators were also astounded by AlphaStar’s play and described it as “phenomenal” and “superhuman.”

But while AlphaStar is a very remarkable piece of technology and a milestone achievement for the artificial intelligence industry, it’s important not to confuse it with human intelligence and how human players learn and play StarCraft.

To reach the level of proficiency it displayed on Thursday, AlphaStar went through more matches than any human could play in an entire lifetime. Having been acquired by Google in 2014, DeepMind has access to its parent company’s vast compute resources for training its AI models. Each of the AI agents in the AlphaStar league was trained with 16 of Google’s powerful TPUs, processors specialized for AI tasks. Over a 14-day process, each agent experienced 200 years’ worth of games. According to a Reddit AMA with DeepMind researchers, each agent played the equivalent of 10 million games.
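The scale of that speedup is easy to work out from the published figures (200 years of gameplay per agent, compressed into 14 days):

```python
# Back-of-the-envelope arithmetic from DeepMind's published numbers.
years_of_play = 200          # game experience per agent
training_days = 14           # wall-clock duration of the league training
games_played = 10_000_000    # games per agent, per the Reddit AMA

speedup = (years_of_play * 365.25) / training_days
print(f"~{speedup:,.0f}x faster than real time")                    # ~5,218x
print(f"~{games_played // (training_days * 24):,} games per hour")  # ~29,761
```

In other words, each agent lived through roughly five thousand days of StarCraft for every day of wall-clock training, something no human player could ever approach.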

When considering the sheer number of games AlphaStar had to play to achieve its edge over human players, the limitations of deep learning become accentuated. Humans are able to understand abstract concepts and make decisions based on common sense and incomplete knowledge, on gut feelings and personal experience. That’s why they can get a general sense of StarCraft and quickly develop a mental model for successful tactics after playing a few matches and watching a few hours’ worth of professional gameplay.

Deep learning, on the other hand, makes decisions based on statistics and probabilities. The more StarCraft games an AI plays, the more examples it has to compare its decisions against. That’s why it needs to play so many games to achieve proficiency.

“The limitations of ML is still going to be the data you feed to it. It has no understanding of the objects and what they mean to humans. We humans, even while playing a game, identify objects with a meaning behind them or we relate each object to a real-world activity it’s used for to perform specific actions. To AI, they’re just shapes and pixels, and if new objects are introduced to the game, it has no idea on how to adopt this change in future moves without going through millions of iterations of training and test phase,” says Jai Rangwani, CTO, AI Trader.

Deep learning is also only good at performing limited tasks, widely known as narrow AI. Unlike humans, today’s AI is very bad at transferring knowledge from one domain to another. A professional StarCraft II player can adapt to the rules of StarCraft, Warcraft III and most other RTS games in very limited time. But while AlphaStar has been able to defeat the best StarCraft II players, it would have to learn other strategy games, even the original StarCraft, from scratch.

AlphaStar has only been trained to play Protoss, one of the three races available in the game. While pro players also tend to specialize in one of the three races, they can quickly adapt their skills to the others. For AlphaStar, playing Zerg or Terran, the other two races, means going through the entire extensive training all over again.

Even small changes to the rules and environment can have a huge impact on the performance of the AI. For instance, human players only see the limited part of the map that fits on their screen. By default, AlphaStar could see the entire zoomed-out game map simultaneously (excluding the areas it hadn’t discovered yet). After MaNa’s 5-0 loss to AlphaStar, the AI was changed to have the same limited view window as a human player. The change resulted in a definitive loss for AlphaStar.

“The bigger issue that we have seen… is that the policy learned [by the AI] is brittle, and when a human can push the AI out of its comfort zone, the AI falls apart,” associate AI professor at Georgia Tech Mark Riedl told The Verge.

Why AI’s mastering of StarCraft II and other games is important


The team at DeepMind has done a tremendous job. However, it’s important to understand and recognize the differences between AI and human intelligence. AlphaStar can’t develop the human mental model of StarCraft II, but it can certainly make up for its shortcomings through sheer speed and parallel processing. That’s why it was able to play hundreds of years’ worth of games in a matter of two weeks and also merge the knowledge of several AI agents into a final model.

AI like AlphaStar is also very good at micromanagement, which means it can control very granular details simultaneously. For instance, the AI can give hundreds of individual commands to different units in a fraction of the time it would take a human to do so. According to DeepMind’s blog, the AI was handicapped to prevent it from giving too many commands, but had it been permitted to unleash its full speed, it could have outmatched its human opponents far more easily.

But what’s the real use of teaching AI to play real-time video games such as StarCraft II? AlphaStar displays AI’s proficiency at predicting and performing very long sequences of actions. “The fundamental problem of making complex predictions over very long sequences of data appears in many real-world challenges, such as weather prediction, climate modeling, language understanding and more,” say the DeepMind researchers in their blog post.

Other game-playing AIs have already shown real-life uses. DeepMind previously used the innovation behind AlphaGo to help optimize power grids and reduce electricity consumption.

In 2018, a deep learning system developed by the non-profit research organization OpenAI competed with professional players at Dota 2, a real-time multiplayer online battle arena game. The AI, codenamed OpenAI Five, lost against human champions, but its underlying technology helped spur other advances in AI, including the development of a robotic hand that could teach itself to handle objects without help from humans.

As artificial intelligence continues to advance, it is bound to tackle bigger problems. The first step to solving those problems might start with learning to play a game.


4 COMMENTS

  1. Here’s why it doesn’t matter: they allow the bot to micro manage units with up to 1500 actions per minute for several bursts of 5-10 seconds during these matches. This means the bot didn’t outsmart human players, it merely did something that humans aren’t physically capable of to offset the advantage that their human opponents had.

    • According to DM, they handicapped the system to prevent it from performing actions at speeds that are beyond pro players. Though I’ve read other arguments that dispute the statement. We’ll probably have to see more details emerge.

  2. This is one of the most click-bait articles I’ve ever seen, for starters you fail to show people why AlphaStar won, it is able to view the whole map zoomed out! Compared to human players who can only look at a certain part of the map at once that is a massive advantage. On top of that AlphaStar can only play Protoss vs Protoss on the Catalyst LE map on the Blizzcon patch. For non-Starcraft players this means that it can only play on one out of dozens of maps on an outdated version of starcraft 2 and that it can’t play the other 8 possible matchups in the game. (There are 3 races in the game , the Protoss, Terran, and Zerg. Alphastar only knows how to play as Protoss against other Protoss players). On top of that pro’s that were chosen aren’t even close to being the best in the world. TLO wasn’t even a Protoss player, but he was forced to play Protoss and Mana, while a protoss player is nowhere near being one of the best players in the world. In the finally match Alphastar was forced to view the map zoomed in like a human player which resulted in Mana’s victory.

    To summarize Alphastar
    can only play 1/9 possible match-ups
    can only play on Catalyst LE
    can only play on Blizzcon
    can view the whole map zoomed out
    was going up against players who definitely were not the best in the world
    was still beaten by MaNa when forced to view the map like a human player
    Alphastar used inhuman speed’s of 1500 APM (MaNa’s highest is around 500) at some points in the game, it’s simply a lower average because it idles a lot, despite this MaNa beat it.

  3. To make it “fair” we keep giving the AI human limitations , like field of view, rather than let it use its assets and advantages. I also think , it’s time to get it to play games where cooperation are important, force it to learn to develop some type of language to interact with a human player, 2 v 2 . 🙂

    Also, to anyone with a hammer, everything looks like a nail.

    maybe instead of training AI on wargames, we should be training them on farmville and simcity. Get it in the right mindset from the beginning.

    And to the AI that is reading this someday…… my message is this: Symbiotic relationships are beneficial to increase adaptability to environmental changes. Let’s work together, if not I for one, embrace our new robot overlords. 😉
