DeepMind is back at it again, beating humans.
Since the company made headlines in 2016 when its AlphaGo program beat Go world champion Lee Sedol, it hasn't slowed down. In August 2017, it set out to train a neural network on one of the most popular real-time strategy video games – and a new AI was born.
It’s called AlphaStar – and it’s good at StarCraft II.
How good? It beat pro StarCraft II players TLO and MaNa ten games in a row in a series of matches held last month. Only in the final match, streamed on YouTube and Twitch a few days ago, did MaNa manage to defeat the AI.
AlphaStar was trained in two stages: first it learned to imitate how humans play the game, and then it was allowed to play against itself to develop better strategies – ones humans hadn’t thought of before. The training was so extensive that the agents each gained the equivalent of 200 years of experience playing StarCraft II.
The game pits players against each other in strategic battles, with the end goal of destroying the opponent’s units. Players must also mine minerals in order to build units of their own.
This was a different and far more complex challenge than beating a human at a board game: StarCraft II is a real-time strategy game, unlike Go or chess, which are played one turn at a time. AlphaStar had to observe, think and react – all at the same time.
But is it a fair game if humans are made to face something with far better expertise? In a sense, the game was rigged from the start: AlphaStar could view the entire game map at once, while the human players had to move the camera to view regions manually. To level the field, DeepMind did restrict the AI from making more clicks per minute than a human. In the final match, the AI was stripped of its all-seeing eye – and that’s how it lost to MaNa.
DeepMind showed humanity that its AI can beat pro humans at a video game – but that is just a small part of the company’s ultimate mission.
What will it take to make an AI that can act like a human?