This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.
Which function of the human brain should artificial intelligence replicate? That question characterizes a debate as old as the history of AI itself. Since the early efforts to create thinking machines began in the 1950s, research and development in the AI space has largely fallen into one of two approaches: symbolic and connectionist AI.
Symbolic AI, also known as “rule-based AI,” is based on manually transforming all the logic and knowledge of the world into computer code. In symbolic AI, every problem must be broken down into a set of “if-else” rules or some other form of high-level software construct.
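As a minimal, purely illustrative sketch of what those “if-else” rules look like in practice, consider a hypothetical animal classifier (the function and feature names are invented for this example):

```python
# Hypothetical sketch of the symbolic approach: a programmer hand-codes
# knowledge of the world as explicit rules over named features.
def classify_animal(features: set) -> str:
    # Every feature here must be supplied by a human or another system;
    # the rules themselves never change with experience.
    if "whiskers" in features and "retractable claws" in features:
        return "cat"
    if "barks" in features and "wags tail" in features:
        return "dog"
    return "unknown"

print(classify_animal({"whiskers", "retractable claws", "pointy ears"}))  # prints cat
```

The brittleness the article describes is visible even here: the rules only work if something else has already turned the messy real world into clean symbols like `"whiskers"`.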
Connectionist AI, exemplified by machine learning and its more popular subset deep learning, is based on the idea that AI models should develop their behavior through statistical comparison, finding correlations between different bits of information. For instance, instead of telling an AI model how to detect a cat in a picture, you give it thousands of pictures of cats and let it find its own way of detecting cats in pictures.
For most of the field’s six-decade history, symbolic AI was the dominant approach to creating AI systems. But in the past decade, a revolution in artificial neural networks has made deep learning the main highlight of the AI industry.
While proponents of both camps continue to debate which approach yields better results, a group of researchers at the MIT-IBM Watson AI Lab has shown that the next AI breakthrough might hinge on putting aside old rivalries and combining symbolic AI with neural networks.
In a paper presented at this year’s International Conference on Learning Representations (ICLR), the researchers introduced an AI model that combines neural networks with rule-based artificial intelligence. Called the “Neuro-Symbolic Concept Learner,” this hybrid approach promises to overcome the challenges each approach faces on its own and create something more powerful than the sum of its parts.
The limits of symbolic AI
For decades, most of the focus was on creating symbolic AI systems that could mimic the reasoning functionality of the human brain. However, experience has shown that many of the problems humans solve can’t be broken down into symbolic representations.
“One of the problems with symbolic expression has always been that we can do interesting things with these symbols once we have them, but actually getting the symbols from the real world is much harder than we anticipated,” says David Cox, IBM Director at MIT-IBM Watson AI Lab.
For instance, consider the cat detection example we mentioned at the beginning of this article. We humans use symbols to detect the features of a cat, such as pointy ears, whiskers, a long tail, the shape of the mouth, the triangular nose, the paws, etc. Our visual system does a very complicated job of detecting those features from different angles, under different lighting conditions and across different breeds of cats. We can also fill the gaps when a cat is partially occluded in an image, or when we can only see its silhouette. This is something any human child can do.
However, transforming those same characteristics into symbols is extremely difficult. You would have to write huge amounts of code to extract those features in a generalized way, covering the nearly endless ways they can appear in images.
In general, symbolic AI struggles when it must deal with unstructured data such as images and audio. It also has limited application in natural language processing, where it has to deal with unstructured text such as articles, books, research papers and doctors’ notes.
A revolution in neural networks
In contrast to symbolic AI, neural networks and deep learning are far better at handling what Cox calls the “messiness of the world.” Neural networks analyze and compare vast amounts of annotated examples to find relevant correlations and develop complex mathematical models that can find similarities in data they’ve never seen before. This is called “training.” Deep learning is especially good at tasks such as computer vision, speech recognition and machine translation, which are hard to define and tackle with classical AI approaches.
While the concept of deep neural networks as we see them today dates back to the 1980s, at the time the AI community dismissed them as impractical because the resources to develop them efficiently weren’t available.
Decades later, when those barriers were torn down, the real power of deep learning manifested itself. Neural networks were pushed back into the limelight in 2012, when a group of researchers from the University of Toronto used them to win the ImageNet computer vision contest by a large margin.
“Neural networks were fundamentally powerful in a way we didn’t understand because we just didn’t have the resources to make them work. It wasn’t until a lot of data and compute came along that they worked, and we realized a lot of the ideas were right, it just wasn’t the right time for them,” Cox says.
Today, deep learning is one of the main areas of focus for large tech companies and research labs. It is helping solve complicated problems such as predicting breast cancer five years in advance and removing human drivers from behind the steering wheel.
Earlier this year, the pioneers of deep learning were awarded the Turing Award, the computer science equivalent of the Nobel Prize.
However, deep learning and neural networks also have their own set of limits. “There are limitations to neural networks today. One of them is that it depends on huge amounts of data. You need to have alarming amounts of data to train one of these systems, data that is carefully annotated,” Cox says.
Without vast amounts of training examples, deep learning models perform poorly. Unfortunately, many domains lack enough quality data to train robust AI models, making it very difficult to apply deep learning to their problems.
Meanwhile, in their current form, deep learning models suffer from other troubles as well. It’s often very difficult to interpret and investigate the output of neural networks. This is especially troublesome for domains where explainability of automated decisions is a legal requirement.
Neural networks can also become the target of adversarial examples, carefully crafted pieces of data that can manipulate the behavior of AI models in erratic or malicious ways.
And while neural networks can perform some very complicated tasks that symbolic AI struggles at, they can fail at relatively simple reasoning problems that rule-based programs easily solve, such as high-school math.
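To make the contrast concrete, here is a toy rule-based solver for equations of the form a·x + b = c (this sketch is our own illustration, not code from the paper). A handful of explicit algebra rules solve the problem exactly, with no training data at all, where a neural network would need many examples and could still get the arithmetic wrong:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c by applying textbook algebra rules symbolically."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    # Rule 1: subtract b from both sides -> a*x = c - b
    rhs = Fraction(c - b)
    # Rule 2: divide both sides by a -> x = (c - b) / a
    return rhs / a

print(solve_linear(3, 4, 19))  # prints 5, i.e. 3*5 + 4 = 19
```

Exact symbolic manipulation like this is trivial for rule-based code and notoriously unreliable for models that only learn statistical correlations.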
The benefits of hybrid AI models
The neuro-symbolic concept learner designed by the researchers at MIT and IBM combines elements of symbolic AI and deep learning. The idea is to build a strong AI model that can combine the reasoning power of rule-based software and the learning capabilities of neural networks.
“One of the interesting things with combining symbolic AI with neural networks—creating hybrid neuro-symbolic systems—is you can let each system do what it’s good at. So the neural networks can take care of the messiness and correlations of the real world, and help convert those into symbols that a rule-based AI system can use to be able to operate much more efficiently,” Cox says.
In the hybrid AI model, the symbolic component takes advantage of the neural network’s ability to process and analyze unstructured data. Meanwhile, the neural network also benefits from the reasoning power of the rule-based AI system, which enables it to learn new things with much less data.
“We can create hybrid AI systems that are much more powerful than the sum of their parts,” Cox says.
Is hybrid artificial intelligence CLEVR enough?
To evaluate the Neuro-Symbolic Concept Learner (NSCL), the researchers tested it on CLEVR, a dataset for visual question-answering problems. CLEVR provides a set of images that contain multiple solid objects and poses questions about the content and the relationships of those objects. Visual question answering is an interesting AI challenge because it requires both the processing of image data and reasoning.
“One of the reasons why visual question-answering problems tend to be hard for neural networks to solve is that they’re fundamentally compositional. The real world we live in is combinatorically complex, we can create arbitrary scenes by composing different objects together. And if you want to train a neural network to solve a system like that, to solve a task like that, you can do it, but it requires tremendous amounts of data,” Cox says.
Pure neural network–based approaches to solving CLEVR try to recreate as many scenarios and combinations as possible and train their networks on those examples. Even then, the AI will probably fail when it faces edge cases, scenarios that do not fit the context of the examples it has been trained on.
In contrast, the AI researchers at MIT and IBM showed that the NSCL could achieve 99.8 percent accuracy on CLEVR with much less training data.
“What this boils down to is carving at the right joints of the problem. If the problem is fundamentally compositional, if it fundamentally has to do with the structure of the world and has a symbolic quality to it, then if you confront the problem with the skeleton of that symbolic processing, you can get by with dramatically less data,” Cox says.
How does the neuro-symbolic concept learner work?
To solve a CLEVR scenario, the NSCL first uses a convolutional neural network (CNN) to process the image and create a list of objects and their characteristics such as color, location and size. This is the kind of symbolic representation of the image that a rule-based AI model can easily work on.
A second neural network performs natural language processing on the question, parsing it into a sequence of classical program functions that can be run against the object list. The result is a rule-based program that expresses and solves the problem in symbolic AI terms.
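The two steps above can be sketched roughly as follows. Assume the CNN has already produced a structured object list for the image and the question parser has emitted a small functional program; neither network is shown here, and all names (`filter_color`, `count`, the dictionary keys) are illustrative, not the paper’s actual API:

```python
# Hypothetical symbolic half of the NSCL pipeline.
# Output of the (not shown) CNN: a structured list of objects.
scene = [
    {"shape": "cube",     "color": "red",  "size": "large"},
    {"shape": "sphere",   "color": "red",  "size": "small"},
    {"shape": "cylinder", "color": "blue", "size": "small"},
]

def filter_color(objects, color):
    return [o for o in objects if o["color"] == color]

def count(objects):
    return len(objects)

# Output of the (not shown) question parser for
# "How many red objects are there?":
program = [("filter_color", "red"), ("count",)]

# Execute the program step by step over the object list.
ops = {"filter_color": filter_color, "count": count}
result = scene
for step in program:
    result = ops[step[0]](result, *step[1:])

print(result)  # prints 2
```

Because the program is an explicit sequence of named operations, every intermediate result can be inspected, which is the property Cox highlights below when discussing explainability.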
While at first glance it seems the NSCL is the result of simply stitching the neural network and symbolic AI parts together, Cox explains that the two components are much more entwined. The AI model is trained with reinforcement learning, a branch of machine learning that develops behavior through trial and error based on the rules of the environment it is operating in. Since the goal of the neural networks is to create a symbolic representation of the problem and a structured program, what they learn is fundamentally different from what a model composed only of neural networks would learn.
“The quality of going from a question to a program that can answer the question really depends on having the system be a unified, hybrid AI model that all works together,” Cox says. “If we do this right, we’re going to see that symbolic systems materially change what neural networks learn and the neural networks materially change how the symbolic systems work.”
Another quality of hybrid AI is explainability. End-to-end deep learning models can solve fascinating problems, but their accuracy and complexity come at the price of not knowing exactly what happens inside the AI. The programming instructions output by the hybrid AI model, by contrast, can be traced and debugged step by step, giving developers and engineers a clear picture of how the AI is (or isn’t) solving the problem.
“Here you get to see the program and you get to step through it and see what it did. If it got the wrong answer, you can see why it got the wrong answer and where it went astray. If it got the right answer, you can verify if it did so for the right reasons. You can understand and audit what came out,” Cox explains.
What kind of problems will hybrid AI be able to solve? According to Cox, we can use it in most applications where we’re currently using neural networks. But we can also use AI models such as the NSCL in settings where we’re not using neural networks today, because we don’t have enough data, or because we’re worried about adversarial attacks or explainability.
“The most exciting thing about future hybrid AI systems is where we start applying AI to places where we can’t dream of applying it today with neural networks alone,” Cox says.