The biggest artificial intelligence developments of 2016


This year was a big one for artificial intelligence, machine learning, deep learning and all the related technologies. Thanks to a string of innovations and breakthroughs, the industry took great leaps, both for the better and for the worse.

Though we’re still a long way from the Singularity or Skynet (or Genisys, or whatever else you want to call a robot and AI takeover), we can all acknowledge that the lines between man and machine became a little more blurred in 2016.

Here are some of the hottest things AI had in store for us in the past year.

RELATED: How to make sense of the confusion surrounding artificial intelligence 

AI conquered more than just another board game

When Google DeepMind’s AlphaGo beat world champion Lee Sedol at Go four games to one, it wasn’t just another chapter in the endless saga of board game matchups between man and machine. It was a watershed moment.

Go is orders of magnitude more complex than chess or checkers, and with its huge board, diverse strategies and nearly unlimited number of possible moves, it was long thought impossible to master for anything short of human intelligence.

But AlphaGo not only showed that it could play Go at the level of top-tier players, it also proved that it could make moves that would virtually never occur to a human mind.

Using deep learning and reinforcement learning algorithms, AlphaGo was first trained on the games of expert human players. Then it started teaching itself through self-play. And it ended up teaching its own masters to play the game in a different way.
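
To make the self-play idea a bit more concrete, here is a toy, entirely hypothetical sketch: tabular learning on the simple game of Nim, with no neural networks and no tree search, so it is nothing like AlphaGo’s actual architecture. What it does show is the shape of the loop, an agent that improves by repeatedly playing against itself.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # a move takes 1, 2 or 3 stones from the pile
Q = defaultdict(float)       # Q[(pile_size, action)] -> learned value estimate
ALPHA, EPSILON = 0.1, 0.2    # learning rate and exploration rate

def choose(pile, explore=True):
    """Pick a move: usually the best-valued one, sometimes a random one."""
    legal = [a for a in ACTIONS if a <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

def self_play_episode(pile=15):
    """One full game in which the same Q-table plays both sides."""
    history = []                     # (pile_size, action) for each move made
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action               # whoever takes the last stone wins
    reward = 1.0                     # the last mover won this game
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward             # alternate sign: the other player made the previous move

for _ in range(50000):
    self_play_episode()

# With enough self-play the greedy policy usually rediscovers the known
# winning strategy for this game: leave your opponent a multiple of 4 stones.
for pile in range(1, 16):
    print(pile, "->", choose(pile, explore=False))
```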

The victory of AlphaGo over Sedol might portend a bright—or maybe dark—future where machines surpass humans. What happens after that remains to be seen…

RELATED: What’s the difference between AI, machine learning and deep learning?

AI is taking baby steps in fighting cyberattacks

I dwelled on the intersection of AI and cybersecurity in several pieces in the past year (here, here and here), and I made it clear that AI won’t be replacing security analysts anytime soon.

However, we did see some very interesting twists. IBM’s Watson for Cyber Security project attempts to combine the mechanics of security tools, which detect threats from structured data (threat and virus signatures, indicator-of-compromise databases, etc.), with the working model of security analysts, who learn about new threats from unstructured data (whitepapers, blog posts, news, etc.). Watson’s trainers and creators don’t think the technology will be a substitute for professional analysts, but it will be a good complement.
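
To illustrate the two modes of analysis the project tries to bring together, here is a deliberately crude, hypothetical sketch. The indicator lists, the keyword "model" and the sample report are all made up; a real system would use curated threat feeds and proper natural language processing rather than keyword counting.

```python
# Structured side: exact matching against known indicators of compromise (IoCs).
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}
KNOWN_BAD_DOMAINS = {"evil-updates.example.com"}

# Unstructured side: a crude stand-in for reading prose the way an analyst does.
SUSPICIOUS_TERMS = {"ransomware", "exfiltration", "command and control",
                    "zero-day", "privilege escalation"}

def check_structured(file_hash: str, domain: str) -> bool:
    """Classic security-tool behavior: flag only exact, already-known indicators."""
    return file_hash in KNOWN_BAD_HASHES or domain in KNOWN_BAD_DOMAINS

def score_unstructured(report_text: str) -> float:
    """Analyst-style reading of a write-up: count how many threat-related terms it mentions."""
    text = report_text.lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return hits / len(SUSPICIOUS_TERMS)

if __name__ == "__main__":
    # An unknown hash slips past the structured check...
    print(check_structured("0" * 32, "cdn.example.org"))          # False
    # ...but a blog post describing the same campaign still raises a flag.
    report = ("The actors used a zero-day for privilege escalation, then "
              "staged exfiltration to a command and control server.")
    print(round(score_unstructured(report), 2))                   # 0.8
```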

Another interesting cybersecurity event was the DARPA Cyber Grand Challenge, the first Capture the Flag competition whose contestants were AI systems, trained to autonomously find and patch security vulnerabilities in their own software while discovering and exploiting holes in the software of other competitors.

While Mayhem, the winner of the CGC, proved to be no match for human hackers when it took part in a CTF tournament at the neighboring DEF CON cybersecurity convention, it still gave a glimpse of how AI algorithms are getting ready to take on more complicated tasks.

AI is becoming more capable of imitating humans

Artificial intelligence that can analyze and find patterns in human behavior has been around for a while. Among the interesting use cases are security solutions that employ user behavior analytics to detect possible account compromise (the ugly ones are those that unearth your most intimate secrets).
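
For readers who haven’t run into user behavior analytics before, here is a minimal, hypothetical sketch of the idea: build a baseline from a user’s past activity, then flag sessions that deviate from it. The login events, features and thresholds below are invented for illustration; real products use far richer signals and models.

```python
from statistics import mean, stdev

# (hour of day, country) for a user's recent legitimate logins
history = [(9, "DE"), (10, "DE"), (8, "DE"), (11, "DE"), (9, "DE"), (10, "DE")]

def is_suspicious(login_hour: int, country: str) -> bool:
    """Flag a login that deviates from the user's own baseline."""
    hours = [h for h, _ in history]
    countries = {c for _, c in history}
    mu, sigma = mean(hours), stdev(hours)
    unusual_time = abs(login_hour - mu) > 3 * max(sigma, 1.0)  # crude z-score test
    unusual_place = country not in countries                   # never-seen country
    return unusual_time or unusual_place

print(is_suspicious(10, "DE"))   # False: matches the baseline
print(is_suspicious(3, "KP"))    # True: odd hour and never-seen country
```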

But this year we saw more breakthroughs in artificial intelligence imitating human behavior. An AI algorithm developed by a London university proved that not even your handwriting can be trusted anymore. The software, called “My Text In Your Handwriting,” takes a small sample of handwritten script and creates text that mimics the handwriting with near-perfect precision.

Later in the year, Google DeepMind’s WaveNet project proved that AI can create speech that sounds freakishly convincing and removes the robotic artifacts found in other speech synthesis algorithms.

Add to that the possibilities of machine learning algorithms that learn to chat like other people—dead or alive—by ingesting previous conversation transcripts.

You’ll soon have AIs that can take phone calls, chat and sign off on documents in your stead. I could quickly name a couple of useful scenarios off the top of my head, but obviously, there are also evil uses that such a potent combination can be put to.

RELATED: Artificial intelligence has problems to fix

AI becomes the source of more privacy headaches

AI and machine learning privacy problems aren’t new, but 2016 had a new crop of privacy-breaching technologies up its sleeve. A face recognition algorithm created by a team of Russian developers proved that pictures you post on the internet (or that others post of you, for that matter) can come back to haunt you in uncomfortable ways.

The technology uses deep learning algorithms and social media image databases to identify people in pictures. Vigilantes have used it to identify and stalk unfortunate victims, and oppressive regimes can use it to track down and crack down on dissenters.
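
At a high level, such systems typically work by turning each face photo into an embedding vector and then looking up the nearest vector in a database built from scraped profiles. The sketch below is hypothetical: the profile names and vectors are made up, and a real system would compute the embeddings with a deep convolutional network rather than hard-coding them.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend database: profile name -> face embedding (toy 4-dimensional vectors)
database = {
    "alice_example": np.array([0.9, 0.1, 0.3, 0.2]),
    "bob_example":   np.array([0.1, 0.8, 0.2, 0.7]),
    "carol_example": np.array([0.4, 0.4, 0.9, 0.1]),
}

def identify(query_embedding, threshold=0.9):
    """Return the best-matching profile, or None if nothing is close enough."""
    name, score = max(((n, cosine(query_embedding, e)) for n, e in database.items()),
                      key=lambda item: item[1])
    return (name, score) if score >= threshold else (None, score)

# An embedding computed from a street photo that happens to resemble Alice's
query = np.array([0.85, 0.15, 0.35, 0.25])
print(identify(query))   # ('alice_example', ~0.99)
```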

And if you think that blurring your picture will protect you, researchers at the University of Texas at Austin and Cornell Tech beg to differ. Their machine learning algorithm can circumvent content-masking techniques such as pixelation and blurring.

RELATED: Machine learning’s privacy problem

We’re now closer to general AI

For all the progress we’ve made, AI technology is still only capable of performing limited tasks. General AI that can perform a multitude of sometimes vaguely related tasks (an easy feat for the human brain) is still a dream.

However, late in the year, Universe, a project led by OpenAI, Elon Musk’s nonprofit AI organization, showed that we might be closer to that goal than we previously thought.

The platform lets AI agents play computer games and browse the web by analyzing the screen display and generating mouse and keyboard input, just like human beings do.
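
The interface is the interesting part: the agent sees raw screen pixels and answers with keyboard and mouse events. The snippet below is roughly the “hello world” agent from OpenAI’s Universe announcement as I recall it; the environment name and setup are assumptions on my part, Universe required Docker, and the packages have since been abandoned, so treat this as illustrative rather than something to run today.

```python
import gym
import universe  # importing registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')   # a Flash racing game, per the README
env.configure(remotes=1)                    # spins up one local VNC-backed remote
observation_n = env.reset()                 # list of screen observations, one per remote

while True:
    # A trivial "agent": hold the up-arrow key no matter what is on screen.
    # A real agent would inspect the pixels in each observation to decide.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```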

Google’s “Show and Tell” algorithm also proved that AI is getting better at understanding the relationships between the different elements in a picture. Once trained, the algorithm can write descriptive captions for the images it is fed with above 90 percent accuracy.

RELATED: Will general AI lead to robocalypse or utopia?

It’s an exciting time for AI

There’s a lot more to be said about what happened in the AI landscape last year, and so little space (and time) to wrap it all up in one post. I intentionally left out some of the ethical and political developments for the sake of brevity. I believe next year will have even more surprises (both good and bad) in store for us. In the meantime, I would be happy to hear what you have to say about the impact of AI in 2016.

 
