To create AGI, we need a new theory of intelligence


This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

For decades, scientists have tried to create computational imitations of the brain. And for decades, the holy grail of artificial general intelligence, computers that can think and act like humans, has continued to elude scientists and researchers.

Why do we continue to replicate some aspects of intelligence yet fail to build systems that can generalize their skills the way humans and animals do? One computer scientist who has been working on AI for three decades believes that to get past the hurdles of narrow AI, we must look at intelligence from a different and more fundamental perspective.

In a paper presented at the Brain-Inspired Cognitive Architectures for Artificial Intelligence (BICA*AI) conference, Sathyanaraya Raghavachary, Associate Professor of Computer Science at the University of Southern California, discusses “considered response,” a theory that can generalize to all forms of intelligent life that have evolved and thrived on our planet.

Titled “Intelligence—consider this and respond!”, the paper sheds light on the possible causes of the troubles that have haunted the AI community for decades and draws important conclusions, including that embodiment is a prerequisite for AGI.

Structures and phenomena

“Structures, from the microscopic to human level to cosmic level, organic and inorganic, exhibit (‘respond with’) phenomena on account of their spatial and temporal arrangements, under conditions external to the structures,” Raghavachary writes in his paper.

This is a general rule that applies to all sorts of phenomena we see in the world, from ice molecules becoming liquid in response to heat, to sand dunes forming in response to wind, to the solar system’s arrangement.

Raghavachary calls this “sphenomics,” a term he coined to differentiate from phenomenology, phenomenality, and phenomenalism.

“Everything in the universe, at every scale from subatomic to galactic, can be viewed as physical structures giving rise to appropriate phenomena, in other words, S->P,” Raghavachary told TechTalks.

Biological structures can be viewed in the same way, Raghavachary believes. In his paper, he notes that the natural world comprises a variety of organisms that respond to their environment. These responses can be seen in simple things such as the survival mechanisms of bacteria, as well as in more complex phenomena such as the collective behavior of bees, ants, and fish, and the intelligence of humans.

“Viewed this way, life processes, of which I consider biological intelligence—and where applicable, even consciousness—occur solely as a result of underlying physical structures,” Raghavachary said. “Life interacting with environment (which includes other life, groups…) also occurs as a result of structures (e.g., brains, snake fangs, sticky pollen…) exhibiting phenomena. The phenomena are the structures’ responses.”

Intelligence as considered response


In inanimate objects, the structures and phenomena are not explicitly evolved or designed to support processes we would call “life” (e.g., a cave producing howling noises as the wind blows by). Conversely, life processes are based on structures that consider and produce response phenomena.

However different these life forms might be, their intelligence shares a common underlying principle, Raghavachary says, one that is “simple, elegant, and extremely widely applicable, and is likely tied to evolution.”

In this respect, Raghavachary proposes in his paper that “intelligence is a biological phenomenon tied to evolutionary adaptation, meant to aid an agent survive and reproduce in its environment by interacting with it appropriately—it is one of considered response.”

The considered response theory is different from traditional definitions of intelligence and AI, which focus on high-level computational processing such as reasoning, planning, goal-seeking, and problem-solving in general. Raghavachary says that the problem with the usual AI branches—symbolic, connectionist, goal-driven—is not that they are computational but that they are digital.

“Digital computation of intelligence has—pardon the pun—no analog in the natural world,” Raghavachary said. “Digital computations are always going to be an indirect, inadequate substitute for mimicking biological intelligence – because they are not part of the S->P chains that underlie natural intelligence.”

There’s no doubt that the digital computation of intelligence has yielded impressive results, including the variety of deep neural network architectures that are powering applications from computer vision to natural language processing. But despite the similarity of their results to what we perceive in humans, what they are doing is different from what the brain does, Raghavachary says.

The “considered response” theory zooms back and casts a wider net that encompasses all forms of intelligence, including those that don’t fit the problem-solving paradigm.

“I view intelligence as considered response in that sense, emanating from physical structures in our bodies and brains. CR naturally fits within the S->P paradigm,” Raghavachary said.
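
To make this concrete, consider a toy sketch of a sense-consider-respond loop. This is purely illustrative and not the author’s formalism; the agent, its hypothetical sense, consider, and respond functions, and the temperature stimulus are all invented here for exposition. The point is only that the response is produced by weighing current conditions against accumulated experience, not by an abstract problem-solving routine.

```python
# Toy sense -> consider -> respond loop (illustrative only, not from the paper).
import random

def sense(environment):
    """Read the external condition currently acting on the agent."""
    return environment["temperature"]

def consider(stimulus, memory):
    """Weigh the stimulus against past experience stored in memory."""
    memory.append(stimulus)
    average = sum(memory) / len(memory)
    return "seek_shade" if stimulus > average else "bask"

def respond(action):
    """Produce the phenomenon: the structure's response."""
    print(f"agent response: {action}")

memory = []
for _ in range(5):
    environment = {"temperature": random.uniform(10, 40)}
    respond(consider(sense(environment), memory))
```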

Developing a theory of intelligence around the S->P principle can help overcome many of the hurdles that have frustrated the AI community for decades, Raghavachary believes. One of these hurdles is simulating the real world, a hot area of research in robotics and self-driving cars.

“Structure->phenomena are computation-free, and can interact with each other with arbitrary complexity,” Raghavachary says. “Simulating such complexity in a VR simulation is simply untenable. Simulation of S->P in a machine will always remain exactly that, a simulation.”

Embodied artificial intelligence


A lot of work in the AI field consists of what are known as “brain in a vat” solutions. In such approaches, the AI software component is separated from the hardware that interacts with the world. For example, deep learning models can be trained on millions of images to detect and classify objects. While those images were collected from the real world, the deep learning model has never directly experienced them.
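
As a rough sketch of what such a “brain in a vat” pipeline looks like in practice, the snippet below trains an off-the-shelf vision model on a batch of pre-collected image tensors. It assumes PyTorch and torchvision are installed and uses torchvision’s synthetic FakeData dataset as a stand-in for a real image collection; the model only ever receives recorded tensors, never the world itself.

```python
# A minimal "brain in a vat": a vision model trained entirely on
# pre-collected image tensors, with no direct experience of the world.
# FakeData keeps the example self-contained; a real pipeline would
# load a dataset such as ImageNet instead.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

dataset = datasets.FakeData(
    size=64, image_size=(3, 224, 224), num_classes=10,
    transform=transforms.ToTensor(),
)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(num_classes=10)  # the disembodied "brain"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # one pass over the recorded data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```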

While such approaches can help solve specific problems, they will not move us toward artificial general intelligence, Raghavachary believes.

In his paper, he notes that there is not a single example of “brain in a vat” in nature’s diverse array of intelligent lifeforms. And thus, the considered response theory of intelligence suggests that artificial general intelligence requires agents that can have a direct embodied experience of the world.

“Brains are always housed in bodies, in exchange for which they help nurture and protect the body in numerous ways (depending on the complexity of the organism),” he writes.

Bodies provide brains with several advantages, including situatedness, a sense of self, agency, free will, and more advanced capacities such as theory of mind (the ability to predict the experience of another agent based on your own) and model-free learning (the ability to experience first and reason later).
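
Model-free learning in particular has a familiar computational analogue. The sketch below is generic tabular Q-learning, a standard textbook method rather than anything from Raghavachary’s paper: the agent never builds a model of its environment’s dynamics, it simply updates action values from rewards it directly experiences.

```python
# Generic tabular Q-learning: "experience first, reason later."
# A standard textbook sketch, not code from the paper; the toy
# environment (5 states, reward for reaching the last one) is invented.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    if random.random() < epsilon:      # explore
        action = random.randrange(n_actions)
    else:                              # exploit learned values
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = 0 if next_state == n_states - 1 else next_state

print([[round(q, 2) for q in row] for row in Q])  # learned action values
```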

“A human AGI without a body is bound to be, for all practical purposes, a disembodied ‘zombie’ of sorts, lacking genuine understanding of the world (with its myriad forms, natural phenomena, beauty, etc.) including its human inhabitants, their motivations, habits, customs, behavior, etc. the agent would need to fake all these,” Raghavachary writes.

Accordingly, an embodied AGI system would need a body that matches its brain, and both would need to be designed for the specific kind of environment the agent will operate in.

“We, made of matter and structures, directly interact with structures, whose phenomena we ‘experience’. Experience cannot be digitally computed—it needs to be actively acquired via a body,” Raghavachary said. “To me, there is simply no substitute for direct experience.”

In a nutshell, the considered response theory suggests that suitable pairings of synthetic brains and bodies that directly engage with the world should be considered life-like, appropriately intelligent, and—depending on the functions enabled in the hardware—possibly conscious.

This means that you can create any kind of robot and make it intelligent by equipping it with a brain that matches its body and sensory experience.

“Such agents do not need to be anthropomorphic—they could have unusual designs, structures and functions that would produce intelligent behavior alien to our own (e.g., an octopus-like design, with brain functions distributed throughout the body),” Raghavachary said. “That said, the most relatable human-level AI would likely be best housed in a human-like agent.”

Comments

  1. ‘Meaning’ is the key to intelligence. The problem with AI is that nothing will ever have a meaning to a digital computer. Embodiment will at least create a way for something to have meaning, but that meaning will still be a simulation based on calculations.

    • Hi John, indeed. I equate meaning with experience and understanding, which are possible via embodiment, on account of directly engaging with the world. If the embodiment employs analog processing (e.g., via neuromorphic hardware), that won’t be a simulation; if it employs digital computation, on the other hand, then yes, it will be a simulation [a zombie of sorts].

  2. I think the word we’re looking for is ‘memories’. Without a causal chain of embodied memories, actual learning is not possible. Additionally, that chain of memories is, in part, what creates the ‘self’ which gives the context to link all the different learnings into an awareness or consciousness that becomes the focus of self-directed learning. An autonomous robotic body would provide an ideal platform to achieve all these goals, in my admittedly non-professional opinion.

    • Glenn, so true.

      In the CR paper [https://www.researchgate.net/publication/346786737_Intelligence_-_Consider_This_and_Respond], I discuss memory as playing a central role… in a sense, we are nothing but our memories.

  3. It’s becoming clearer that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My intuitively felt advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

    • Future God help us pass this ridge of system over-development and perversions accompanying it!

      I will give you an example of a brain in a vat in nature: all brains in nature are cleverly placed in a vat commonly named the skull, with signal wires, like any computer, to keep them embodied. But oops! The signals are disembodied during transfer to the said vat via electrochemical encoding. How is that different from a PC?

  4. Interesting. After reading this, I realize we can apply this the opposite way. Take TikTok, for example. The algorithm has an easier time understanding user interest because the signal from the user is clearer (algorithm-friendly design). Mainly, the algorithm needs to “see” when the user slides up the page, because the entire page is almost just a video. This binary interaction fits well with the computational structure of today’s machine learning.

    Based on this, I conclude that understanding the fundamental structures on both sides is an important step toward achieving appropriate interaction between the two. What do you think?

    • Hi Matakari, indeed. Just as physical structures produce physical phenomena on account of energy input, computational structures also undergo transformations and produce outputs upon execution: the algorithm runs, data structures get modified, outputs occur. Viewed this way, yes, TikTok runs a particularly clever recommendation engine whose computational structures output pleasing videos for its users.

      The S->P idea most certainly applies to all the digital computation used in today’s AI. It’s just that it’s not enough; we do seem to need embodied machines that do analog computing in addition (not as a replacement).

      Make sense?

  5. As a layman, I personally don’t feel qualified to comment on the subject itself, but at least I can say that what we call AI is, to a large extent, an imitation of the intelligence humans have exhibited so far. That’s that. Thanks a lot!

  6. Sit me in a room with the most knowledgeable, open minds on the subject and we will solve it. I have a way of thinking that can bring solutions to any context. Sounds weird, but it’s legit.

  7. Thank you very much for sharing this text and thoughts with us. I would like to share mine too:

    The ability to reproduce is generally accepted as a basic, irreducible characteristic (motivator) of a person, one we are born with. This intention of ours is “protected” by the instinct of self-preservation. Each “input” or stimulus (sensory experience) is evaluated by humans, consciously or unconsciously, with regard to whether it represents a threat to life or whether it will affect the ability to reproduce. Memory, as a function of the brain, also serves primarily to increase the chances of surviving, avoiding pain, and reproducing. Intelligence, too, as the ability to handle knowledge and reflect internal and external limitations and opportunities, serves the same goals. Over the ages, thanks to the storage of knowledge gained from feedback, man and his brain have developed into their current form, in which we can have “pure mathematics” that may not necessarily serve to preserve man (although we do not know how current knowledge will be used in the future, can we, Mr. Bohr :). We were able to separate intelligence from man and partially simulate it in a virtual environment, which was an amazing achievement. But to create AGI, we will have to put intelligence back into a body.

    The idea that robots need to have some kind of body seems like a step in the right direction to me, because the next logical step should then be to equip these robots with intention. If we wanted to compare them to humans, we would have to at least equip them with the ability to empathize and with the instinct of self-preservation, ergo the ability to avoid self-destruction (and, if the materials someday allow it, the ability to reproduce :). There would have to be some sort of feedback system in the body that would inform the robot of what is going on. In humans, this is primarily the nervous system, centered in the brain (neural networks). Robots would then not need to have Asimov’s three laws of robotics hardcoded into their BIOS; they would figure them out by themselves. And, as if by the flick of a magic wand, consciousness would emerge…

    If people were “replaced” by such machines, it would not be a tragedy, but a shift to a higher level.
    Simply, skip the “Ubermensch”, get the Robot! 🙂

