The limits and challenges of deep learning


This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Deep learning, the spearhead of artificial intelligence, is perhaps one of the most exciting technologies of the decade. It has already made inroads into fields such as speech recognition and cancer detection, domains that were previously closed or barely accessible to traditional software.

Deep learning is often compared to the mechanisms that underlie the human mind, and some experts believe it will continue to advance at an accelerating pace and conquer many more domains. In some cases, there’s fear that deep learning might threaten the very social and economic fabric that holds our societies together, by driving humans into either unemployment or slavery.

There’s no doubt that machine learning and deep learning are super-efficient for many tasks. However, they’re not a silver bullet that will solve all problems and override all previous technologies. Beyond the hype surrounding deep learning, in many cases, its distinct limits and challenges prevent it from competing with the mind of a human child.

RELATED: What is narrow, general and super artificial intelligence?

Here’s an example. I played Mario Bros on the legendary NES console for the first time when I was six years old. That first experience, which didn’t last more than a couple of hours, introduced me to the concept of platform games. After that, I could quickly apply the same rules to other games such as Prince of Persia, Sonic the Hedgehog, Crash Bandicoot and Donkey Kong Country. I also had no problem importing my previously earned knowledge to the 3D versions of those games when they made their appearance in the mid-90s.

Meanwhile, I could also port my real-world experiences into the world of gaming. I immediately knew that I had to jump over pits, and that plants with sharp teeth would hurt Mario if he ran into them.

By all standards, I was a person of average intelligence, and not even a very good gamer. This is the process that nearly every kid of my generation went through. However, even this simplest of feats proves to be a “deep” challenge for deep learning algorithms. The smartest game-playing algorithms have to learn every new game from scratch.

We humans can learn abstract, broad relationships between different concepts and make decisions with little information. In contrast, deep learning algorithms are narrow in their capabilities and need precise information—lots of it—to do their job.

In a recent paper called “Deep Learning: A Critical Appraisal,” Gary Marcus, the former head of AI at Uber and a professor at New York University, details the limits and challenges that deep learning faces. While I suggest you read the entire paper, here’s a brief summary of the important points he raised.

How deep learning works

Deep learning is a technique for classifying information through layered neural networks, a crude imitation of how the human brain works. Neural networks have a set of input units, where raw data is fed in. This can be pictures, sound samples or written text. The inputs are then mapped to the output nodes, which determine the category the input information belongs to. For instance, the network can determine that the picture it has been fed contains a cat, or that a short sound sample was the word “Hello.”

Deep neural networks, which power deep learning algorithms, have several hidden layers between the input and output nodes, making them capable of making much more complicated classifications of data.
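
To make that structure concrete, here’s a minimal sketch in plain NumPy of the input-to-output pipeline described above: a flattened image passes through two hidden layers before the output nodes score each category. The layer sizes, the random (untrained) weights and the three made-up categories are illustrative assumptions, not code from any real system.

```python
# A minimal sketch of a feed-forward deep neural network: input units,
# two hidden layers, and output nodes that score each category.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Input: a flattened 8x8 "image" (64 raw input units).
x = rng.random(64)

# Two hidden layers of 32 units each, and 3 hypothetical output
# categories (say: "cat", "dog", "neither").
W1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(3, 32)),  np.zeros(3)

h1 = relu(W1 @ x + b1)          # first hidden layer
h2 = relu(W2 @ h1 + b2)        # second hidden layer
probs = softmax(W3 @ h2 + b3)  # output nodes: one score per category

print(probs)  # three probabilities summing to 1; the largest is the predicted class
```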

[Image: a deep neural network, with multiple hidden layers between the input and output nodes. Credit: Gary Marcus]

Deep learning algorithms need to be trained with large sets of labelled data. This means that, for instance, you have to give it thousands of pictures of cats before it can start classifying new cat pictures with relative accuracy. The larger the training data set, the better the performance of the algorithm. Big tech companies are vying to amass more and more data and are willing to offer their services for free in exchange for access to user data. The more classified information they have, the better they’ll be able to train their deep learning algorithms. This will in turn make their services more efficient than those of their competitors and bring them more customers (some of whom will pay for their premium services).
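
To see the “more data, better performance” dynamic in action, here’s a small, hedged experiment: the same network trained on growing slices of a synthetic labelled dataset, with held-out accuracy climbing as the slices grow. The dataset, model and sizes are my own stand-ins, not anything from Marcus’ paper.

```python
# Train the same small network on increasing amounts of synthetic
# labelled data and measure accuracy on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, 10_000):
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                        random_state=0)
    clf.fit(X_train[:n], y_train[:n])   # only the first n labelled samples
    print(f"{n:>6} labelled samples -> test accuracy "
          f"{clf.score(X_test, y_test):.3f}")
```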

RELATED: What is machine learning and deep learning?

Deep learning is data hungry

“In a world with infinite data, and infinite computational resources, there might be little need for any other technique,” Marcus says in his paper.

And therein lies the problem, because we don’t live in such a world.

You can never give a deep learning algorithm every possible labelled sample of a problem space. It will therefore have to generalize, interpolating between its previous samples in order to classify data it has never seen before, such as a new image or sound that’s not contained in its dataset.
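
Here’s a toy demonstration of that interpolation problem: a classifier trained on samples from one region of the input space holds up on similar data but collapses to near-chance on a shifted region it has never seen. The two-feature setup and the distributions below are invented purely for illustration.

```python
# Train on one region of the input space, then test both on similar
# data and on a shifted, never-seen region.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(center, n=2_000):
    """Two classes separated along the first feature, near `center`."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X[:, 0] > center).astype(int)
    return X, y

X_train, y_train = make_data(center=0.0)  # everything the model ever sees
X_in,    y_in    = make_data(center=0.0)  # test data like the training set
X_out,   y_out   = make_data(center=5.0)  # shifted, never-seen region

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("in-distribution accuracy:     ", clf.score(X_in, y_in))    # high
print("out-of-distribution accuracy: ", clf.score(X_out, y_out))  # near chance
```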

“Deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions or even billions of training examples,” says Marcus.


So what happens when a deep learning algorithm doesn’t have enough quality training data? It can fail spectacularly, such as mistaking a rifle for a helicopter, or humans for gorillas.

This heavy reliance on precise and abundant data also makes deep learning algorithms vulnerable to spoofing. “Deep learning systems are quite good at some large fraction of a given domain, yet easily fooled,” Marcus says.

Many crazy stories attest to this, such as deep learning algorithms mistaking slightly defaced stop signs for speed limit signs, or British police software being unable to distinguish sand dunes from nudes.
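
The best-known family of such spoofing attacks works roughly like the sketch below, in the spirit of the fast gradient sign method (FGSM), applied here to a toy linear classifier rather than a real vision model. The weights, the input and the size of the perturbation are all illustrative assumptions.

```python
# FGSM-style spoofing on a toy linear classifier: nudge every input
# feature slightly in the direction that hurts the model most.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)   # weights of an already-trained linear classifier
x = rng.normal(size=100)   # an input it currently classifies correctly

def predict(v):
    return 1 if w @ v > 0 else 0

y_true = predict(x)        # treat the clean prediction as the true label

# Epsilon is chosen just large enough to cross the decision boundary
# in this toy setting; each individual feature barely changes.
epsilon = 1.1 * abs(w @ x) / np.abs(w).sum()
direction = -1 if y_true == 1 else 1
x_adv = x + direction * epsilon * np.sign(w)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))          # flipped
print("max per-feature change:", np.abs(x_adv - x).max()) # tiny: epsilon
```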

RELATED: The biggest AI developments of 2017

Deep learning is shallow

Another problem with deep learning algorithms is that they’re very good at mapping inputs to outputs but not so good at understanding the context of the data they’re handling. In fact, the word “deep” in deep learning is much more a reference to the architecture of the technology and the number of hidden layers it contains than an allusion to any deep understanding of what it does. “The representations acquired by such networks don’t, for example, naturally apply to abstract concepts like ‘justice,’ ‘democracy’ or ‘meddling,’” Marcus says.

Returning to the gaming example we visited at the beginning of the article, deep learning algorithms can become very good at playing games, and they can eventually beat the best human players in both video and board games. Just look at the crazy way this AI is playing Mario:

However, this doesn’t mean that the AI algorithm has the same understanding as a human of the different elements of the game. It has learned through trial and error that making those specific moves will prevent it from losing. For instance, at 2:26, where a baseball is thrown at Mario, even the most novice gamer would know to jump. But the AI keeps on running until it gets hit in the back.

Moreover, if you give that same algorithm a new game, such as Super Mario 64 or Mega Man, it will have to learn everything anew.

Marcus’ paper refers to Google DeepMind’s mastering of the Atari game Breakout as an example. According to the designers of the algorithm, it realized after 240 minutes of play that the best way to beat the game is to dig a tunnel through the wall. But it doesn’t know what a tunnel or a wall is. It has simply learned, through millions of trials and errors, that this specific way of playing the game yields the most points in the shortest possible time.
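
Here’s what that kind of trial-and-error learning looks like in miniature: tabular Q-learning on a toy one-dimensional corridor. This stands in for DeepMind’s vastly larger deep reinforcement learning setup; the environment and hyperparameters are my own illustrative choices. The agent never “knows” what the goal is; it only learns which moves have historically yielded reward.

```python
# Tabular Q-learning on a corridor of states 0..5, with reward only
# for reaching state 5. The agent learns purely by trial and error.
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 6, 5          # states 0..5; reward at state 5
Q = np.zeros((n_states, 2))    # action 0 = step left, action 1 = step right
alpha, gamma, eps = 0.5, 0.9, 0.2

def choose_action(s):
    # Explore randomly sometimes (and whenever the agent has no
    # preference yet); otherwise exploit what trial and error found.
    if rng.random() < eps or Q[s, 0] == Q[s, 1]:
        return int(rng.integers(2))
    return int(Q[s].argmax())

for episode in range(300):
    s = 0
    for _ in range(100):       # cap episode length
        a = choose_action(s)
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

print(Q.argmax(axis=1))  # learned policy: 1 ("right") in every non-goal state
```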

Deep learning is opaque

While decisions made by rule-based software can be traced back to the last if and else, the same can’t be said of machine learning and deep learning algorithms. This lack of transparency in deep learning is what we call the “black box” problem. Deep learning algorithms sift through millions of data points to find patterns and correlations that often go unnoticed by human experts. The decisions they make based on these findings often confound even the engineers who created them.
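
The contrast is easy to show in a toy sketch: a rule-based decision traces back to one explicit condition, while a learned model’s decision is smeared across thousands of fitted weights. Both the loan rules and the stand-in “network” below are invented for illustration.

```python
# Rule-based vs. learned decisions: one is traceable, one is opaque.
import numpy as np

def rule_based_decision(income, debt):
    # Every outcome traces back to one explicit condition.
    if income < 30_000:
        return "deny (income below 30k)"
    if debt / income > 0.4:
        return "deny (debt ratio above 40%)"
    return "approve"

rng = np.random.default_rng(0)
weights = rng.normal(size=1_000)  # stand-in for a trained network's weights

def learned_decision(features):
    # The "reason" is spread across every weight at once.
    return "approve" if weights @ features > 0 else "deny"

print(rule_based_decision(25_000, 5_000))        # traceable: first rule fired
print(learned_decision(rng.normal(size=1_000)))  # which weight do you blame?
```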

This might not be a problem when deep learning is performing a trivial task where a wrong decision will cause little or no damage. But when it’s deciding the fate of a defendant in court or the medical treatment of a patient, mistakes can have more serious repercussions.

“The transparency issue, as yet unsolved, is a potential liability when using deep learning for problem domains like financial trades or medical diagnosis, in which human users might like to understand how a given system made a given decision,” Marcus says in his paper.


Marcus also points to algorithmic bias as one of the problems stemming from the opacity of deep learning algorithms. Machine learning algorithms often inherit the biases of the training data they ingest, such as preferring to show higher-paying job ads to men rather than women, or favoring white skin over dark skin when judging beauty contests. These problems are hard to debug in the development phase and often result in controversial news headlines when deep learning–powered software goes into production.

RELATED: What’s preventing AI from taking the next big leap?

Is deep learning doomed to fail?

Certainly not. But it is bound for a reality check. “In general, deep learning is a perfectly fine way of optimizing a complex system for representing a mapping between inputs and outputs, given a sufficiently large data set,” Marcus observes.

Deep learning must be acknowledged for what it is: a highly efficient technique for solving classification problems, one that performs well when it has enough training data and a test set that closely resembles the training set.

But it’s not a magic wand. If you don’t have enough training data, if your test data differs greatly from your training data, or if you’re not solving a classification problem, then “deep learning becomes a square peg slammed into a round hole, a crude approximation when there must be a solution elsewhere,” in Marcus’ words.

Marcus also suggests in his paper that deep learning has to be combined with other technologies such as plain-old rule-based programming and other AI techniques such as reinforcement learning. Other experts such as Starmind’s Pascal Kaufmann propose neuroscience as the key to creating real AI that will be able to achieve human-like problem solving.

“Deep learning is not likely to disappear, nor should it,” Marcus says. “But five years into the field’s resurgence seems like a good moment for a critical reflection, on what deep learning has and has not been able to achieve.”
