The biggest artificial intelligence developments of 2017

AI-generated celebrities
In late 2017, Nvidia released images of authentic-looking celebrities that were the result of pure AI imagination.

We’re at the doorstep of 2018. I’m still driving my own car and still visit a human doctor when I feel sick. I still haven’t surrendered my job to a lifeless robot, and I don’t think Alexa or Siri is my best friend. And no, we haven’t manufactured our AI-powered robot overlords yet.

Nonetheless, just like last year, this year saw some interesting developments in the field of artificial intelligence. Having watched the landscape, I would describe the developments as a shift from hype and craze toward reality checks and a greater focus on the social and political repercussions of this fast-moving domain. We are still seeing breakthroughs that have yet to show real utility, but at the same time, we’re becoming more knowledgeable about how AI might impact our lives.

Here are some of the most notable ways AI made the headlines this year, in no particular order.

RELATED: What is narrow, general and super artificial intelligence?

AlphaGo recreated itself without the help of humans

Elo rating of AlphaGo Zero (credit: deepmind.com)

Last year, Google’s AlphaGo made history by conquering Go, a 2,000-year-old board game that is orders of magnitude more complex than chess and cannot be mastered through brute-force tactics. This year, DeepMind, the Google subsidiary that developed AlphaGo, made another breakthrough with a new version of the algorithm.

The original AlphaGo learned the game by playing against humans and ingesting data from hundreds of thousands of games previously played by humans. Named AlphaGo Zero (because it needs zero human help), the new iteration learned the game from scratch by playing against itself. AlphaGo Zero uses reinforcement learning, a branch of machine learning in which an algorithm is given only the rules and mechanics of an environment and learns by trial and error. Starting from random play, AlphaGo Zero refined its skills through an iterative process of self-play.
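To make the self-play idea concrete, here is a minimal, hypothetical sketch: tabular Q-learning that teaches itself single-pile Nim purely by playing against itself. It is nothing like DeepMind’s actual system, which combines a deep neural network with Monte Carlo tree search, but it shows how a program can go from random moves to a strong strategy with zero human examples.

```python
import random
from collections import defaultdict

# Hypothetical sketch (not DeepMind's method): tabular Q-learning that learns
# single-pile Nim entirely through self-play. Players alternate removing 1-3
# stones; whoever takes the last stone wins. One shared Q-table plays both
# sides, just as AlphaGo Zero uses a single network for both players.

PILE, ACTIONS = 15, (1, 2, 3)
ALPHA, EPS = 0.5, 0.2

Q = defaultdict(float)  # Q[(stones_left, action)] from the current mover's view

def choose(stones, eps):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)                      # explore
    return max(legal, key=lambda a: Q[(stones, a)])      # exploit

for episode in range(50_000):
    stones = PILE
    while stones > 0:
        action = choose(stones, EPS)
        remaining = stones - action
        if remaining == 0:
            target = 1.0                                 # took the last stone: a win
        else:
            # The opponent moves next, so this position is worth the negation
            # of the opponent's best value (negamax-style bootstrap).
            target = -max(Q[(remaining, a)] for a in ACTIONS if a <= remaining)
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

# The greedy policy should (mostly) leave a multiple of 4 stones for the
# opponent, which is the known optimal strategy for this toy game.
for s in range(1, PILE + 1):
    print(s, "-> take", choose(s, eps=0.0))
```

The feedback loop is the same one AlphaGo Zero scales up: play yourself, learn from the outcome, repeat.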

How much more efficient is this new version? Suffice it to say that AlphaGo Zero beat its predecessor 100 to 0.

Libratus beats experts at no-limit Texas Hold ‘Em poker


AI developers can’t get over their obsession with teaching their algorithms to play games. This time, two researchers at Carnegie Mellon University developed Libratus, an AI agent that can play no-limit Texas Hold ‘Em, an especially complex form of poker that relies heavily on long-term betting strategies and game theory. What’s significant about poker is that, in contrast to, say, Go, it is a game of imperfect information—cards are hidden and players have to make decisions based on intuition, bluffing, guessing and social interactions.

Like AlphaGo Zero, Libratus learned poker from scratch, using reinforcement learning and playing against itself trillions of times. Libratus used a second AI, called an “end-game solver,” which helped focus the attention of the main algorithm in real time. A third system identified the patterns that human players were exploiting in Libratus and adjusted its strategy to avoid repeating the same mistakes.
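Libratus’s actual methods are far more sophisticated (its creators describe counterfactual-regret-style techniques and real-time subgame solving), but the basic flavor of improving a strategy through self-play in a game with no single “best move” can be shown on a toy example. Below is a minimal, hypothetical sketch of regret matching on rock-paper-scissors: two copies of the same learner play each other, track how much they regret not having played each alternative action, and shift probability toward high-regret actions. Their average strategies drift toward the unpredictable mixed equilibrium, the game-theoretic cousin of learning when to bluff.

```python
import random

# Hypothetical toy sketch, not Libratus itself: regret matching on
# rock-paper-scissors. Each player tracks, for every action, how much better
# it would have done than the action it actually played (its "regret"), and
# plays future actions in proportion to positive regret.

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if action index a beats index b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[ACTIONS[a]] == ACTIONS[b] else -1

def current_strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]        # one regret table per player
strategy_sum = [[0.0] * 3, [0.0] * 3]   # running sum -> average strategy

for _ in range(100_000):
    strategies = [current_strategy(r) for r in regrets]
    moves = [random.choices(range(3), weights=s)[0] for s in strategies]
    for p in range(2):
        opponent_move = moves[1 - p]
        earned = payoff(moves[p], opponent_move)
        for a in range(3):
            # Regret: how much better action a would have done than the move played.
            regrets[p][a] += payoff(a, opponent_move) - earned
        strategy_sum[p] = [s + x for s, x in zip(strategy_sum[p], strategies[p])]

for p in range(2):
    total = sum(strategy_sum[p])
    avg = [round(s / total, 3) for s in strategy_sum[p]]
    # Each average should end up close to the uniform 1/3 mixed equilibrium.
    print(f"player {p} average strategy:", dict(zip(ACTIONS, avg)))
```

Libratus layers its end-game solver and overnight self-repair on top of this kind of self-play learning, which is what made it strong enough to face professionals.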

Libratus defeated four top players at no-limit Texas Hold ‘Em in a 20-day competition spanning 120,000 hands of poker.

Nvidia created fake human faces that look very real

Last year, we saw plenty of AIs that could mimic your voice, handwriting and face. This year, in late October, graphics chip maker Nvidia revealed an AI that can create photorealistic images of celebrities who don’t exist.

The algorithm started with a large database of real celebrity images and then used a technique called generative adversarial networks (GANs) to create its own fake celebrities. In essence, a GAN consists of two neural networks. The first network, called the generator, creates fake images of celebrities based on the data it has analyzed so far. The second network, called the discriminator, evaluates the generator’s output, trying to tell the fakes apart from real images, and that feedback pushes the generator to produce more convincing fakes. In effect, the two networks teach each other by constantly challenging and improving each other’s output.
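Nvidia’s system (described in its “Progressive Growing of GANs” paper) trains large convolutional networks on high-resolution face images, which is far beyond a blog snippet. Still, the adversarial loop itself fits in a few lines. The following hypothetical sketch uses PyTorch to train a tiny generator to mimic samples from a one-dimensional Gaussian while a discriminator learns to tell real samples from fakes; all names and numbers are illustrative.

```python
# Hypothetical minimal GAN sketch in PyTorch, not Nvidia's model: a generator
# learns to mimic samples from a 1-D Gaussian while a discriminator learns to
# tell real samples from fakes.
import torch
import torch.nn as nn

def real_samples(n):
    return torch.randn(n, 1) * 1.5 + 4.0   # "real data": a Gaussian around 4

def noise(n):
    return torch.randn(n, 8)               # random input for the generator

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(5000):
    # Train the discriminator: label real samples 1, generated samples 0.
    fake = generator(noise(64)).detach()
    loss_d = bce(discriminator(real_samples(64)), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    fake = generator(noise(64))
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

generated = generator(noise(1000)).detach()
print("generated mean/std:", generated.mean().item(), generated.std().item())  # should approach 4 and 1.5
```

Scaling this two-player tug-of-war from a one-dimensional toy distribution to megapixel face images is what makes Nvidia’s result impressive.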

The results are still far from being usable in real commercials, movies and video games, but this is a huge improvement over what we’ve seen so far.

The world saw its first AI citizen

Blade Runner is still not on the horizon, but this year, the first robot received citizenship from one of the unlikeliest countries, Saudi Arabia. Sophia, an AI created by Hong Kong–based Hanson Robotics, is, in the words of the CEO of its creator company, “a social robot that uses artificial intelligence to see people, understand conversation, and form relationships.”

Sophia is not general AI, in the sense of the sentient robots we’ve seen in sci-fi movies. But it’s an interesting blend of several narrow AI techniques such as natural language processing, computer vision, face recognition, voice recognition and more.

You can think of it as Siri or a very smart chatbot with a physical shell.

Google created AI that creates AI

Earlier this year, Google introduced AutoML, a tool that aims to address one of the biggest problems the AI industry faces: a shortage of talent. For the moment, engineers who can create machine learning software are scarce and very expensive, making them available only to the highest bidders (naturally, the likes of Google, Facebook and co.). AutoML is a machine learning tool that creates machine learning software with minimal human interference.

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times—generating new architectures, testing them, and giving that feedback to the controller to learn from.”
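Google’s controller is itself a neural network trained with reinforcement learning, which is well beyond a short example. But the propose-train-evaluate-feedback loop the quote describes can be sketched with a much cruder “controller.” In the hypothetical snippet below, a bandit-style sampler re-weights a handful of candidate architectures (hidden-layer sizes for a small scikit-learn network on the digits dataset) according to the validation accuracy of each trained “child”; every name and number is illustrative.

```python
# Hypothetical sketch of a propose-train-evaluate-feedback loop, not Google's
# AutoML: a crude bandit-style "controller" re-weights candidate architectures
# by the validation accuracy of each trained "child" model.
import math, random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Search space: candidate "child" architectures (hidden layer sizes).
CANDIDATES = [(16,), (64,), (128,), (64, 32), (128, 64)]
scores = {arch: 0.0 for arch in CANDIDATES}   # the controller's accumulated feedback

def propose():
    # Sample an architecture with probability proportional to exp(score),
    # so architectures that performed well get proposed more often.
    weights = [math.exp(scores[arch]) for arch in CANDIDATES]
    return random.choices(CANDIDATES, weights=weights)[0]

for round_number in range(10):
    arch = propose()                                           # controller proposes a child
    child = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    child.fit(X_train, y_train)                                # the child is trained...
    accuracy = child.score(X_val, y_val)                       # ...and evaluated
    scores[arch] += accuracy                                   # feedback to the "controller"
    print(round_number, arch, round(accuracy, 3))

print("best-scoring architecture so far:", max(scores, key=scores.get))
```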

AutoML proved to be more efficient than humans at creating AI models that classify images or locate objects in them. For the moment, AutoML remains a research project, and using it requires the very expertise it aims to automate. But if it reaches practical levels, it could help democratize artificial intelligence and enable more industries to enter the field with less effort.

More AI creepiness

In parallel to these interesting developments, we had plenty of AI stories that made you cringe. Governments are increasingly using AI-powered facial recognition tech to identify criminals and terrorists. But they can use the same technology as a tool for surveillance and for controlling citizens, especially dissidents. China, in particular, has the most advanced system of this kind, comprising 170 million CCTV cameras, some very powerful AI behind them, and billions of dollars of investment from the private and public sectors. A BBC program aired in late December showed officials identifying and apprehending the broadcaster’s reporter within minutes of flagging him as a suspect.

RELATED: The security and privacy risks of face recognition authentication

Big Brother, the omnipresent fictional character from George Orwell’s dystopian novel 1984, may already be here, just in subtler ways.

Also in China, the government is working with the country’s biggest tech firms to establish a citizen rating system. The program will use machine learning algorithms to ingest tons of data about Chinese citizens (which companies such as Tencent and Alibaba will generously provide) and give each of them a score based on their browsing habits, social media posts, purchases and more. The score will determine their eligibility for loans, government jobs, travel and much more. Think of it as the FICO score in the U.S., but in a hugely creepy version that takes into account everything you do or say.

But it’s not only governments that are giving us the shivers with their use of AI. Big tech companies had their own versions of AI creepiness. Facebook proved that it knows about all your family relations, even the ones you yourself don’t know about, and it won’t tell you how it figures them out. More recently, it also made clear that a new feature will make its already powerful face recognition technology even more powerful. Meanwhile, Amazon is improving Alexa to detect users’ emotions from their voices (and possibly manipulate them in more profitable ways).

There’s a lot of creepy stuff going on. We’re barely scratching the surface here. Stay tuned for more in 2018.

RELATED: The main challenges of artificial intelligence

Beyond technological developments

Like previous years, AI created some heated debates of economic, political and philosophical nature. Tesla CEO Elon Musk, renowned for his dystopian stance on AI, called for proactive regulation against unleashing AI that could potentially pose an existential threat to humanity. Musk later entered a Twitter row with Facebook CEO Mark Zuckerberg, known for his positive stance on AI, after the latter called Musk’s comments “irresponsible.”

Elsewhere, Russian president Vladimir Putin predicted that whichever country takes the lead in AI research and development will eventually rule the world. Naturally, Putin thinks Russia should be that country. His comments attested to the economic importance AI has garnered: it has become a vital tool for boosting national economies. China has already declared its goal of becoming the world’s leader in AI by 2030 and is spending hundreds of billions of dollars to achieve it.

But beyond economics, AI is also finding its way into military conflicts and global warfare. From state-sponsored cyberattacks to smart drone fighters and AI-powered firearms, artificial intelligence is being weaponized in different ways. China, Russia and the U.S. are working on autonomous drones and smart military technologies that need little or no human interference. Meanwhile, organizations such as the Future of Life Institute, which counts Elon Musk and Stephen Hawking among its advisors, have warned against such research programs.

RELATED: What is the future of AI?

Final thoughts

A single post is really not enough to talk about all the special things that happened in AI this year, such as Geoff Hinton’s idea for capsule networks, or the many weird robots that do weird tricks, or this AI that turns horses into zebras.

A big sorry to anyone who was left out. We promise to keep you in mind next year. While I brace myself for 2018 and the many surprises it holds for us, let me know what your biggest AI developments of 2017 were (hint: we have a comments section for that).
