To evolve, AI must face its limitations

[Image: robot child. Image credit: 123RF]

From medical imaging and language translation to facial recognition and self-driving cars, examples of artificial intelligence (AI) are everywhere. And let’s face it: although not perfect, AI’s capabilities are pretty impressive.

Even something as seemingly simple and routine as a Google search is one of AI’s most successful examples: it searches vastly more information, at a vastly greater rate, than any human could, and (at least most of the time) returns exactly what you were looking for.

The problem with all of these AI examples, though, is that the artificial intelligence on display is not really all that intelligent. While today’s AI can do some extraordinary things, the functionality underlying its accomplishments works by analyzing massive data sets and looking for patterns and correlations without understanding the data it is processing. As a result, an AI system relying on today’s AI algorithms and requiring thousands of tagged samples only gives the appearance of intelligence. It lacks any real, common sense understanding. If you don’t believe me, just ask a customer service bot a question that is off-script.
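To make this concrete, below is a minimal sketch (in Python, using a hypothetical set of customer-service intents and phrases, not any vendor's actual bot) of how such a system typically works: it fits statistical patterns in a handful of labeled examples, and an off-script question simply gets forced onto the nearest label it already knows.

```python
# A minimal sketch of a "narrow" AI intent classifier: it correlates word
# statistics with labels and has no notion of whether a question makes sense.
# The intents and training phrases are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I want to return my order", "how do I return an item",
    "where is my package", "track my shipment",
    "reset my password", "I forgot my password",
]
labels = ["returns", "returns", "tracking", "tracking", "account", "account"]

bot = make_pipeline(TfidfVectorizer(), LogisticRegression())
bot.fit(training_phrases, labels)

# An off-script question is still mapped onto one of the known intents.
question = "why does my tower of blocks fall over?"
print(bot.predict([question])[0])           # some intent label, chosen blindly
print(bot.predict_proba([question]).max())  # a probability, not understanding
```

Nothing in that pipeline represents what a "return" or a "package" actually is; the model only correlates word statistics with labels, which is why off-script input breaks it.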

AI’s fundamental shortcoming can be traced back to the principal assumption at the heart of most AI development over the past 50 years: that if the difficult intelligence problems could be solved, the simple ones would fall into place. This turned out to be false.

In 1988, Carnegie Mellon roboticist Hans Moravec wrote, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” In other words, the problems that look difficult turn out to be relatively easy to solve, while what appear to be simple problems can be prohibitively difficult.

Two other assumptions which played a prominent role in AI development have also proven to be false:

– First, it was assumed that if enough narrow AI applications (i.e., applications that can solve a specific problem using AI techniques) were built, they would grow together into a form of general intelligence. Narrow AI applications, however, don’t store information in a generalized form and can’t be used by other narrow AI applications to expand their breadth. So while stitching together applications for, say, language processing and image processing might be possible, those apps cannot be integrated in the same way that a child integrates hearing and vision (a sketch after this list illustrates why).

– Second, some AI researchers assumed that if a big enough machine learning system with enough computer power could be built, it would spontaneously exhibit general intelligence. As expert systems that attempted to capture the knowledge of a specific field have clearly demonstrated, it is simply impossible to create enough cases and example data to overcome a system’s underlying lack of understanding. 
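To illustrate the first point, here is a deliberately toy sketch (hypothetical models with made-up weights, not real applications): each narrow model’s “knowledge” is just a set of parameters over its own input features, so running two of them side by side creates no shared representation that either one could learn from.

```python
# A toy illustration of why stitching narrow AI applications together does not
# produce shared understanding: each model's knowledge is weights over its own
# feature space, and neither can read or reuse the other's. All values are made up.
import numpy as np

rng = np.random.default_rng(0)

# Narrow model A: "vision" -- weights over 64 pixel features.
vision_weights = rng.normal(size=64)
def vision_model(pixels):            # pixels: array of 64 floats
    return float(pixels @ vision_weights > 0)

# Narrow model B: "language" -- weights over a 1,000-word vocabulary.
language_weights = rng.normal(size=1000)
def language_model(word_counts):     # word_counts: array of 1,000 counts
    return float(word_counts @ language_weights > 0)

# "Integration" amounts to calling both models; there is no common
# representation through which the vision weights could inform the
# language model, or vice versa.
image, sentence = rng.normal(size=64), rng.integers(0, 3, size=1000)
print(vision_model(image), language_model(sentence))
```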

If the AI industry knows that the key assumptions it made in development have turned out to be false, why hasn’t anyone taken the necessary steps to move past them in a way that advances true thinking in AI? The answer is likely found in AI’s principal competitor: let’s call her Sally. She’s about three years old and already knows lots of things no AI does and can solve problems no AI can. When you stop to think about it, many of the problems that we have with AI today are things any three-year-old could do.

Think of the knowledge necessary for Sally to stack a group of blocks. At a fundamental level, Sally understands that blocks (or any other physical objects) exist in a 3D world. She knows they persist even when she can’t see them. She knows innately that they have a set of physical properties like weight and shape and color. She knows she can’t stack more blocks on top of a round, rolly one. She understands causality and the passage of time. She knows she has to build a tower of blocks first before she can knock it over.

What does Sally have to do with the AI industry? Sally has what today’s AI lacks. She possesses situational awareness and contextual understanding. Sally’s biological brain is capable of interpreting everything it encounters in the context of everything else it has previously learned. More importantly, three-year-old Sally will grow to become four years old, and five years old, and 10 years old, and so on. In short, three-year-old Sally innately possesses the capabilities to grow into a fully functioning, intelligent adult.

In stark contrast, AI analyzes massive data sets looking for patterns and correlations without understanding any of the data it is processing. Even the recent “neuromorphic” chips rely on capabilities absent in biology. 

For today’s AI to overcome its inherent limitations and evolve into its next phase – defined as artificial general intelligence (AGI) – it must be able to understand or learn any intellectual task that a human can. It must attain consciousness. Doing so will enable it to consistently grow its intelligence and abilities in the same way that a human three-year-old grows to possess the intelligence of a four-year-old, and eventually a 10-year-old, a 20-year-old, and so on.

Sadly, the research required to shed light on what ultimately will be needed to replicate the contextual understanding of the human brain, enabling AI to attain true consciousness, is highly unlikely to receive funding. Why not? Quite simply, no one—at least no one to date—has been willing to put millions of dollars and years of development into an AI application that can do what any three-year-old can do. 

And that inevitably brings us back to the conclusion that today’s artificial intelligence really isn’t all that intelligent. Of course, that won’t stop numerous AI companies from bragging that their AI applications “work just like your brain.” But the truth is that they would be closer to the mark if they admitted their apps are based on a single algorithm – backpropagation – and represent a powerful statistical method. Unfortunately, the truth is just not as interesting as “works like your brain.”
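For the curious, here is roughly what that single algorithm looks like: a generic sketch of backpropagation and gradient descent on a tiny two-layer network with made-up data (not any company’s production code). It is iterative, error-driven curve fitting: powerful, but purely statistical.

```python
# A minimal, generic sketch of backpropagation: fit a tiny two-layer network
# to made-up labeled data by repeatedly propagating the prediction error
# backward and nudging the weights. Network sizes, data, and learning rate
# are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # an arbitrary labeled "pattern"

W1 = rng.normal(size=(3, 8)) * 0.1             # input-to-hidden weights
W2 = rng.normal(size=(8, 1)) * 0.1             # hidden-to-output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    # Backward pass: propagate the prediction error back through the layers.
    grad_out = (p - y) / len(X)                       # error at the output
    grad_W2 = h.T @ grad_out
    grad_W1 = X.T @ ((grad_out @ W2.T) * (1 - h**2))  # tanh derivative
    # Gradient descent: adjust the weights to reduce the error.
    W1 -= 1.0 * grad_W1
    W2 -= 1.0 * grad_W2

print("training accuracy:", ((p > 0.5) == y).mean())
```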

About the author

Charles Simon

Charles Simon, BSEE, MSCs, is a nationally recognized entrepreneur and software developer, and the founder and CEO of FutureAI. He has many years of computer industry experience, including pioneering work in AI. His technical experience includes the creation of two unique artificial intelligence systems along with software for successful neurological test equipment; combining AI development with biomedical nerve-signal testing gives him a singular insight. He is also the author of two books – Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and Brain Simulator II: The Guide for Creating Artificial General Intelligence – and the developer of Brain Simulator II, an AGI research software platform that combines a neural network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code, as well as Sallie, a prototype software and artificial entity that learns in real time with vision, hearing, speech, and mobility. You can follow the author's continuing AGI experimentation at http://brainsim.org, at the Facebook group http://facebook.com/groups/brainsim, or at https://futureai.guru/Founder.aspx.


3 COMMENTS

  1. Very insightful and nicely written article. I agree with your explanation of what is wrong with AGI research. However, you lost me when you brought in consciousness. Intelligence is purely a mechanical phenomenon. Consciousness is irrelevant to it in my opinion. Besides, no one knows what consciousness really is.

    Today’s AI is, as you wrote, not intelligent. But even worse than that is that AI pundits such as Yann LeCun, Geoffrey Hinton, Gary Marcus and others still believe that deep learning and its underlying gradient-based approach to learning have a role to play in solving AGI. This nonsense is what will keep the AI community chasing a red herring for many decades to come. Humans and animals do not optimize an objective function via gradient-based learning. We can perceive any pattern or object instantly without any prior representation of it in memory. A DNN would be blind to it. The biggest flaw with DL is that a DNN cannot detect an object unless it already has a representation of it in memory. In other words, in DL, there can be no perception without recognition. This is a fatal flaw if AGI is the goal.

    I also don’t see what millions of dollars have to do with it. AGI will be solved, not with truckloads of cash but by some lone wolf with contempt for mainstream AI and a solution. Once that solution is known, AGI can be implemented on a desktop computer. It does not have to be human level. Even honeybees have generalized perception, the sine qua non of true generalized intelligence. Bees have to have generalized perception because they have less than 1 million neurons to work with. Reusability of existing building blocks is the key.

    Thanks for a great read. And thanks also to Ben Dickson for inviting great thinkers to write for TechTalks.

  2. Thanks for your thoughtful comment. The real problem of the “millions” is that the current multi-billion-dollar investments behind deep learning create a lot of inertia and resistance to change. On the consciousness front, it depends on how humanlike you want your AGI to be. You can have an extremely useful general intelligence without consciousness, but if you want it to respond like a human, it must act like it is a conscious entity. It’s a completely separate (philosophical) conversation, but I contend that if you provide an AGI with the mechanisms needed to act as though it is conscious, it will, in fact, become conscious.

  3. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

