How to solve AI’s “common sense” problem


Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

In recent years, deep learning has taken great strides in some of the most challenging areas of artificial intelligence, including computer vision, speech recognition, and natural language processing.

However, some problems remain unsolved. Deep learning systems are poor at handling novel situations, they require enormous amounts of data to train, and they sometimes make weird mistakes that confuse even their creators.

Some scientists believe these problems will be solved by creating larger and larger neural networks and training them on bigger and bigger datasets. Others think that what the field of AI needs is a little bit of human “common sense.”

In their new book Machines Like Us, computer scientists Ronald J. Brachman and Hector J. Levesque present their view of, and a potential solution to, this missing piece of the AI puzzle, which has eluded researchers for decades. In an interview with TechTalks, Brachman discussed what common sense is and isn’t, why machines don’t have it, and how “knowledge representation,” a concept that has been around for decades but has fallen by the wayside during the deep learning craze, can steer the AI community in the right direction.

While it remains in the realm of hypothesis, Machines Like Us provides a fresh perspective on potential areas of research, courtesy of two scientists who have been deeply involved in artificial intelligence since the 1970s.

Good AI systems that make weird mistakes


“Over the last 10-12 years, with the extraordinary enthusiasm people have shown around deep learning, there has been a lot of talk about deep learning–based systems being able to do everything we would have originally wanted an AI system to do,” Brachman said.

In the early days of AI, the vision was to create a self-contained, autonomous system, possibly in the form of a robot, that would go out in the world and do things with little or no human intervention necessary.

“Today, the aperture has narrowed quite a bit because so many people are excited about what deep learning has been able to accomplish,” Brachman said. “And in particular, in industry, where huge amounts of money and talent acquisition have driven a really intense focus on experience-based or example-trained systems, lots of claims are being made that we’re close to artificial general intelligence, or that ‘good old fashioned AI’ (GOFAI) or the symbolic approach is completely dead or unnecessary.”

“Machines Like Us” by Hector J. Levesque and Ronald J. Brachman

What is clear, however, is that deep learning systems, impressive as they are, suffer from perplexing problems that have yet to be solved. Neural networks are prone to adversarial attacks, where specially crafted modifications to input values cause the ML model to make sudden erroneous changes to its output. Deep learning also struggles to make sense of simple causal relations and is poor at composing concepts and putting them together. And large language models, which have become a particular area of focus recently, sometimes make very dumb mistakes even as they generate coherent and impressive text.
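As a rough illustration of the adversarial-attack problem mentioned above, here is a minimal sketch of the classic fast gradient sign method. This is my own example, not anything from the book; it assumes a differentiable PyTorch image classifier and inputs scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Fast gradient sign method: a small, specially crafted perturbation of the
    # input that can flip the model's prediction. Illustrative sketch only;
    # `model` is assumed to be any differentiable PyTorch classifier.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge each input value slightly in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    # Keep the perturbed image in the valid pixel range [0, 1].
    return x_adv.clamp(0, 1).detach()
```

To a human observer the perturbed image looks identical to the original, which is exactly the kind of inexplicable failure a person with common sense would not make.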

“What you see about these kinds of mistakes is that they look stupid, ignorant, the kinds of mistake that people would rarely—if ever—make. And on top of that it’s kind of unexplainable what led to those errors,” Brachman said.

These mistakes caused Brachman and Levesque to reflect on what’s missing in today’s AI technology, and what would be needed to either complement or supplant example-driven trained neural network systems.

“If you think about it, what’s clearly missing from these systems is what we humans call common sense, which is the ability to see things that to many humans seem obvious, to draw fast and simple, obvious conclusions, and to be able to stop yourself if you decided to do something which you immediately realize is absurd or a bad choice,” Brachman said.

What is common sense?


The AI community has talked about common sense since its early years. In fact, one of the earliest papers on AI, written by John McCarthy in 1958, was titled, “Programs with Common Sense.”

“This isn’t new, and we didn’t invent it, but the field has lost sight of the centrality of what AI pioneers meant by it,” Brachman said. “And if you look further for guidance on what common sense is about and what it would mean to have it and—importantly for us—how it works and how you would implement it, you find precious little in the psychology literature.”

In Machines Like Us, Brachman and Levesque describe common sense as “the ability to make effective use of ordinary, everyday, experiential knowledge in achieving ordinary, everyday, practical goals.”

Common sense is crucial to survival. Humans and higher animals have evolved to learn through experience and to develop routines and autopilot skills that can handle most of the situations they face every day. But daily life is not just routines that we’ve seen repeatedly. We regularly face situations we’ve never encountered before. Some of them could be dramatically different from the norm, but most of the time, we see things that are a little different from what we’re used to. This is sometimes called the “long tail” in AI discussions.

“In our view, when you’re interrupted from one of these routines, common sense is really the first thing that gets activated,” Brachman said. “Common sense is what allows you to take a quick look at the new situation, remember something you’ve done before that’s close enough, adapt your memory very quickly, and apply it to the new situation and go forward.”

In a way, common sense doesn’t fit neatly into the two-system thinking paradigm popularized by psychologist and Nobel laureate Daniel Kahneman. Common sense is not the fast, autopilot system 1 thinking that carries out the routine tasks we do without consciously concentrating on them (e.g., brushing your teeth, tying your shoes, buttoning your shirt, driving in a familiar area); unlike system 1, it requires active thinking, kicking in when your current routine is disrupted.

At the same time, common sense also isn’t system 2 thinking, which is the slow mode of thinking that requires total concentration and methodical, step-by-step thinking (e.g., planning a six-week trip, designing software, solving complicated equations).

“We can think deep and long to puzzle through a challenge. That kind of thinking is taxing on your brain, and it’s slow. Common sense allows us to bypass that in virtually any common everyday situation where we don’t have to think deeply about what to do next,” Brachman said.

In their book, Brachman and Levesque emphasize that common sense is a “shallow cognitive phenomenon,” operating quickly compared to thoughtful, methodical analysis.

“It’s not common sense if it takes a sizable amount of mental effort to figure it out. We can think of it as ‘reflexive thinking,’ with the ‘reflexive’ being as significant as the ‘thinking,’” they write.

Dangers of AI without common sense

When AI systems are used in applications that constantly generate novel situations, common sense becomes an absolute necessity (image credit: 123RF)

Common sense entails predictability, trust, explainability, and accountability.

“Most people don’t make bizarre mistakes. We all do stupid things every once in a while. But we can think about it and avoid doing it again,” Brachman said. “People are not perfect. But they are generally predictable to a certain degree, especially people you know. And that allows us to invest trust.”

The challenge for AI systems without common sense is that they will make mistakes when they are pushed beyond the limits of what they were trained on. And those mistakes are completely unpredictable and unexplainable, Brachman says.

“Machines that don’t have common sense don’t have that perspective, don’t have that fallback to stop themselves from doing things that are strange, and they suffer from brittleness,” Brachman said. “They’re amazing at what they’re doing. But when they make mistakes, it doesn’t make any sense at all.”

These mistakes can be harmless, like mislabeling a picture, or severely harmful, like driving a car into a lane divider.

“If all a system ever encounters are chessboards, and all it ever has to worry about is winning the game, common sense really adds nothing to the mix,” Brachman and Levesque write in Machines Like Us. “Where common sense will have a role to play is when we venture beyond the chessboard and think of a chess game as an activity that takes place in the real world.”

So as AI systems move into sensitive, open-domain applications such as driving cars, working alongside humans, or engaging in open-ended conversations, common sense will play a critical role. These are areas where novel situations arise all the time.

“If we want AI systems to be able to deal in a reasonable way with things that happen in the real world quite commonly, we need something beyond an expertise that derives from sampling what has already occurred,” the authors write in Machines Like Us. “Given the overwhelmingly large numbers, predicting the future based simply on seeing and internalizing what has taken place in the past won’t cut it, no matter how brutish the brute force. We need common sense.”

Revisiting symbolic artificial intelligence


Most scientists agree that current AI systems lack common sense. However, opinions diverge when it comes to the solution. One popular trend is to continue making neural networks bigger and training them on more data. Evidence shows that larger neural networks continue to yield incremental improvements. And in some cases, large neural networks show zero-shot learning skills, performing tasks they have not been explicitly trained for.

However, there is also a large body of research and experiments that show that more data and compute do not solve the problems of AI systems that don’t have common sense. They just hide them in a larger and more confusing jumble of numerical weights and matrix operations.

“These systems notice and internalize correlations or patterns. They don’t develop ‘concepts.’ And even when these things interact with language, they are just mimicking human behavior without having the underlying mental and conceptual mechanisms that we believe we have,” Brachman said.

In Machines Like Us, Brachman and Levesque argue for the creation of systems that encode commonsense knowledge and commonsense understanding of the world.

Ron Brachman, co-author of “Machines Like Us”

The authors write, “Commonsense knowledge is about the things in the world and the properties they have, mediated by what we called a conceptual structure, a collection of ideas about the kinds of things there could be, and the kinds of properties they could have. Knowledge would be put to use by representing it symbolically and performing computational operations over those symbolic structures [emphasis mine]. The commonsense decisions about what to do would amount to using this represented knowledge to consider how to achieve goals and how to react to what has been observed.”

Brachman and Levesque believe that the field needs to look back and revisit some of the earlier work on symbolic artificial intelligence to bring common sense to computers. They call this the “knowledge representation” hypothesis. The book goes into detail about how a KR system could be structured and how different pieces of knowledge could be brought together and linked to form more complex forms of knowledge and inference.

According to the KR hypothesis, representations of commonsense knowledge will have two parts: “a world model to represent a state of the world, and a conceptual model to represent a conceptual structure—a framework of generalizations that can be used to classify items in the world.”
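To make that two-part structure concrete, here is a minimal sketch in Python. It is my own illustration, not code from the book; the class names, kinds, and facts are all invented for the example.

```python
# A hypothetical sketch of the two-part KR structure described above.
# The class names, kinds, and facts are illustrative, not from the book.

class ConceptualModel:
    """A framework of generalizations: kinds of things and the properties they imply."""
    def __init__(self):
        # kind -> set of defining properties
        self.kinds = {
            "bird":   {"has_wings", "lays_eggs"},
            "mammal": {"has_fur", "gives_milk"},
        }

    def classify(self, properties):
        """Return every kind whose defining properties all hold for an item."""
        return [kind for kind, props in self.kinds.items() if props <= properties]


class WorldModel:
    """A state of the world: particular items and the properties observed of them."""
    def __init__(self):
        self.items = {}  # item name -> set of observed properties

    def observe(self, item, *properties):
        self.items.setdefault(item, set()).update(properties)


concepts = ConceptualModel()
world = WorldModel()
world.observe("tweety", "has_wings", "lays_eggs", "sings")

# Classify an item recorded in the world model against the conceptual structure.
print(concepts.classify(world.items["tweety"]))  # ['bird']
```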

“Our view is to return to some of the earlier thinking about AI where symbols of some sort and symbol manipulating programs—what people used to call inference engines—can be used to encode and use fundamental knowledge of the world that we would call commonsensical: Intuitive or naïve physics, basic understanding of how people and other agents act and what intentions and beliefs are like, how time and events work, causality, etc.; all the knowledge that infants pick up in the first year to two years of their lives,” Brachman said. “Formally represented knowledge of the world can actually have a causal effect on the behavior of a machine and will also take advantage of all the things you can do with manipulating symbols, like compositionality, putting things we’re familiar with in new ways.”
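One tiny, concrete reading of “symbol manipulating programs” and inference engines is a forward-chaining loop that derives new symbolic facts from explicit rules. The sketch below is my own illustration, with made-up naive-physics facts and rules, not the authors’ system.

```python
# A toy forward-chaining inference engine over symbolic facts and rules.
# The "naive physics" rules here are invented for illustration.

facts = {("unsupported", "cup"), ("fragile", "cup")}

# Each rule pairs a set of premises with a conclusion, all symbolic tuples.
rules = [
    ({("unsupported", "cup")},                   ("will_fall", "cup")),
    ({("will_fall", "cup"), ("fragile", "cup")}, ("will_break", "cup")),
]

# Forward chaining: keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# The derived facts can then have a causal effect on behavior, e.g. deciding
# to catch the cup before it hits the floor.
print(("will_break", "cup") in facts)  # True
```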

Hector Levesque, co-author of “Machines Like Us”

Brachman stresses that what he and Levesque propose in Machines Like Us is a hypothesis that can be disproved in the future.

“Whether in the long run, the path is to prebuild, pre-encode all this knowledge or somehow have a system learn in a different way, that I don’t know. But as a hypothesis and experiment, I think the next step in AI should be to try to build these knowledge bases and have systems use them to deal with surprises in everyday lives, to make rough-and-ready guesses about how to deal with both familiar and unfamiliar situations,” Brachman said.

Brachman and Levesque’s hypothesis builds on top of previous efforts to create large, symbolic commonsense knowledge bases such as Cyc, a project that dates back to the 1980s and has gathered millions of rules and concepts about the world.

“I think we need to go a lot further. We need to focus on how an autonomous decision-making machine uses that stuff in everyday mundane decision contexts,” Brachman said. “That’s the thing that has been missing from all those other projects. It’s one thing to build up factual knowledge and be able to spout back the answers to Jeopardy-type trivia questions. But it’s completely another to operate in the blooming buzzing world, and be able to cope with unforeseeable surprises in a rational and timely way.”

Does machine learning have a role in common sense?


Brachman says that machine learning–based systems will continue to play a crucial role in the perceptual side of artificial intelligence.

“I would not push a symbol-manipulation system that uses first-order predicate calculus to process pixels on an artificial retina or to deal with speech-signal processing. These machine learning systems are great at recognition tasks at a low sensory level,” he said. “It’s not clear how high in the cognitive chain that stuff will suffice. But it clearly doesn’t go all the way, because they don’t form concepts and connections between what you see in a scene and natural language.”

The combination of neural networks and symbolic systems is an idea that has become more prominent in recent years. Gary Marcus, Luis Lamb, and Joshua Tenenbaum, among other scientists, are proposing the development of “neuro-symbolic” systems that bring together the best of symbolic and learning-based systems to solve the current challenges of AI.
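As a loose sketch of what such a hybrid might look like in code, here is my own illustration; `neural_perception` is a stand-in for any trained vision model, and the labels and rules are invented for the example rather than drawn from any specific neuro-symbolic system.

```python
# A hypothetical neuro-symbolic pipeline: a learned perception module emits
# symbols, and a small symbolic layer reasons over them to choose an action.

def neural_perception(image):
    # Stand-in for a trained vision model; in practice this would be a CNN or
    # vision transformer returning labels with confidence scores.
    return [("pedestrian", 0.92), ("crosswalk", 0.88)]

def symbolic_policy(detections, rules):
    # Turn high-confidence detections into symbolic facts...
    facts = {label for label, score in detections if score > 0.5}
    # ...then apply commonsense rules over those facts, most specific first.
    for premises, action in rules:
        if premises <= facts:
            return action
    return "proceed"

rules = [
    ({"pedestrian", "crosswalk"}, "stop_and_yield"),
    ({"pedestrian"},              "slow_down"),
]

print(symbolic_policy(neural_perception(None), rules))  # stop_and_yield
```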

While Brachman agrees with much of the work that is being done in the field, he also suggests that the current view on hybrid AI needs some adjustment.

“I don’t think any of the current neuro-symbolic systems account for the difference between common sense and the more methodical, deeper kind of symbolic reasoning that underlies math and heavy-duty planning and deep analysis,” he said. “What I would like to see in this hybrid AI world is really accounting for common sense, making machines take advantage of it in the way humans do, and have it do the same incredible things for machines that it does for humans.”

