The complicated world of AI consciousness


This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.


Would true artificial intelligence be conscious and experience the world like us? Will we lose our humanity if we install AI implants in our brains? Should robots and humans have equal rights? If I replicate an AI version of myself, who will be the real me?

These are the kinds of questions we ponder when watching science fiction movies and TV series such as Her, Westworld, and Ex Machina. But, as some scientists believe, they will become real-life issues one day, and we need to start thinking about them now.

In her book, Artificial You: AI and the Future of Your Mind, philosopher and cognitive scientist Susan Schneider dives deep into the ethical, philosophical, and scientific issues of superintelligent AI and the implications it can have for humanity. Schneider discusses the pathways to artificial general intelligence, consciousness and AI, singularity, and post-human intelligence. AI consciousness, in particular, stands out as one of the key themes that recurs in Artificial You.

Like everything that has to do with the future of AI, Artificial You is speculative, hedged with caveats and ifs, because, as has been proven time and again, we are very bad at predicting the future. But it is a much-needed reminder of how technology is reshaping our understanding of the mind, identity, and consciousness.

Is consciousness a requirement for AI?

“Artificial You: AI and the Future of Your Mind” by Susan Schneider

Many books on artificial general intelligence discuss the notion of “consciousness,” the quality of being aware of one’s existence and being able to experience the world.

In the context of biological life, intelligence and consciousness emerge in parallel. Beings with more sophisticated minds are capable of having complex experiences. We attribute consciousness to humans and to a lesser degree to animals, but not to plants. We empathize with conscious beings, feel their suffering, and value their lives.

But does an artificial intelligence system need to be conscious? Should AI experience the world as living beings do?

In Artificial You, Schneider lays out the different beliefs on AI and consciousness. At one end of the spectrum are “biological naturalists,” scientists and philosophers who believe that consciousness depends on the particular chemistry of biological systems. On this view, even the most sophisticated forms of AI will be devoid of inner experience. So even if an AI system can engage in natural conversations, perform complicated surgeries, or drive a car with remarkable skill, that doesn’t mean it is conscious.

At the other end of the spectrum are “techno-optimists,” who believe consciousness is a byproduct of intelligence and that a complex-enough AI system will inevitably be conscious. Techno-optimism holds that the brain can be broken down into logical components, and that those components can eventually be reproduced in hardware and software, giving rise to conscious, general problem-solving AI.

Schneider rejects both views and proposes a middle-of-the-road position, which she calls the “Wait and See Approach.”

“Conscious machines, if they exist at all, may occur in certain architectures and not others, and they may require a deliberate engineering effort, called ‘consciousness engineering,’” Schneider writes in Artificial You.

Why general AI might or might not need consciousness

In biological life, consciousness also serves another function: active thinking and problem-solving. But only a small percentage of our mental processing is conscious, Schneider argues in Artificial You.

For instance, walking, running, breathing, eating, buttoning shirts, brushing teeth, and many of the other tasks we accomplish do not require active thinking. In general, routine tasks don’t require conscious thought, while novel tasks and situations demand focus and conscious computation. Think about when you were learning to ride a bike or drive a car, or about driving on a familiar road as opposed to driving in a new neighborhood. Some scientists call these cognitive processes System 1 and System 2 thinking.
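
To make the distinction concrete, here is a minimal Python sketch of dual-process dispatch: a fast, habit-based path for familiar tasks and a slow, deliberative path for novel ones. It is purely illustrative; the DualProcessAgent class and everything in it are invented for this example, not drawn from the book.

```python
# Toy illustration of dual-process ("System 1" / "System 2") dispatch.
# Everything here is invented for illustration; it is not a model from
# Artificial You.

class DualProcessAgent:
    def __init__(self):
        # System 1: cached, automatic responses built up by repetition.
        self.habits = {}

    def act(self, task, deliberate_solver):
        """Answer from habit if the task is familiar; deliberate otherwise."""
        if task in self.habits:
            # Fast path: effortless, routine, "nonconscious" processing.
            return self.habits[task]
        # Slow path: focused computation reserved for novel situations.
        result = deliberate_solver(task)
        # With repetition, the deliberate solution turns into a habit.
        self.habits[task] = result
        return result


agent = DualProcessAgent()
plan = lambda task: f"deliberately planned: {task}"
print(agent.act("drive on a new road", plan))   # slow path: novel task
print(agent.act("drive on a new road", plan))   # fast path: now routine
```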

So, if there’s an AI that already knows everything, does it require conscious thinking?

“A superintelligent AI, in particular, is a system which, by definition, possesses expert-level knowledge in every domain,” Schneider writes. “These computations could range over vast databases that include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What task would require slow, deliberative focus? Wouldn’t it have mastered everything already? Perhaps, like an experienced driver on a familiar road, it can use nonconscious processing.”

There are also reasons to believe that, even if machine consciousness were possible, scientists would want to avoid embedding it in their AI systems.

“If an AI company tried to market a conscious system, it may face accusations of robot slavery and demands to ban the use of conscious AI for the very tasks the AI was developed to be used for,” Schneider writes in Artificial You. “Indeed, AI companies would likely incur special ethical and legal obligations if they built conscious machines, even at the prototype stage.”

This can become a serious issue if commercial entities continue to dominate AI research. To avoid legal and ethical complications, companies may make deliberate decisions to create AI systems that lack consciousness.

On the other hand, there are reasons to believe that conscious AI systems could be beneficial. A conscious AI system might be empathetic and humane. “The value that an AI places on us may hinge on whether it believes it feels like something to be us. This insight may require nothing less than machine consciousness. For all we know, conscious AI may lead to safer AI,” Schneider writes.

Another argument in favor of conscious AI is the appeal it could have to consumers. “People will long for genuinely conscious AI companions, encouraging AI companies to attempt to produce conscious AIs,” Schneider says.

But we also don’t know the implications of becoming emotionally attached to an AI system that will outlive us and is built from a totally different substrate.

Testing AI consciousness

Over the decades, scientists have developed various methods to test the level of intelligence in AI systems. These tests range from general evaluations such as the Turing test and the more recent Abstraction and Reasoning Corpus to task-specific benchmarks such as image-labeling datasets for computer vision systems and question-answering datasets for natural language processing algorithms.

But verifying whether an AI system is conscious is easier said than done, as Schneider discusses in Artificial You. Consciousness is not a binary, present-or-absent quality. There are different levels of consciousness, and no single test can cover them all.

For instance, a chimpanzee or a dog won’t pass a language test, but does that mean they totally lack consciousness? Likewise, humans with certain disabilities might not be able to pass tests that most other humans find trivial, but it would be horrendous to conclude that they’re not conscious. Is there any reason to treat advanced AI differently?

“There cannot be a one-size-fits-all test for AI consciousness; a better option is a battery of tests that can be used depending on the context,” Schneider writes.
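
In software terms, such a battery could be pictured as a set of probes, each gated by whether it even applies to the system under test, reporting per-probe evidence rather than a single verdict. The following Python sketch is a hypothetical illustration; the probes and their criteria are invented here and are not the tests Schneider describes.

```python
# Hypothetical sketch of a context-dependent battery of consciousness
# probes. The probe names and criteria are invented for illustration;
# they are not the tests described in Artificial You.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Probe:
    name: str
    applies_to: Callable[[dict], bool]  # is this probe meaningful here?
    run: Callable[[dict], bool]         # did the system show the marker?

def run_battery(profile: dict, probes: list) -> dict:
    """Run only the probes that fit this system's context, reporting
    per-probe evidence instead of a single pass/fail verdict."""
    results = {}
    for probe in probes:
        if probe.applies_to(profile):
            results[probe.name] = probe.run(profile)
        else:
            results[probe.name] = None  # not applicable, not failed
    return results

# A language probe is meaningless for a non-linguistic robot, just as
# it would be for a chimpanzee or a dog.
probes = [
    Probe("verbal self-report",
          applies_to=lambda s: s.get("has_language", False),
          run=lambda s: s.get("reports_inner_states", False)),
    Probe("novel-situation behavior",
          applies_to=lambda s: True,
          run=lambda s: s.get("adapts_to_novelty", False)),
]

print(run_battery({"has_language": False, "adapts_to_novelty": True}, probes))
# {'verbal self-report': None, 'novel-situation behavior': True}
```

The point of the None result is that an inapplicable test is withheld rather than counted as a failure, echoing the chimpanzee and dog example above.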

In her book, Schneider describes several tests that can help answer the AI consciousness question. She also stresses that the work is not done: we still need to research and develop AI consciousness tests and think through middle-ground scenarios and their possible implications.

“A precautionary stance suggests that we shouldn’t simply press on with the development of sophisticated AI without carefully gauging its consciousness and determining that it is safe, because the inadvertent or intentional development of conscious machines could pose existential or catastrophic risks to humans—risks ranging from volatile superintelligences that supplant humans to a human merger with AI that diminishes or ends human consciousness,” Schneider warns.

The conversation doesn’t end here


Artificial You touches on many more topics, including human-machine mergers, AI singularity, and the control problem. These are issues that are slowly moving from the realm of fiction into reality.

It will be a refreshing read for anyone who wants to look past the current focus on optimizing machine learning algorithms and take a glimpse into the bigger picture of the philosophy of AI.

“At the heart of this book is a dialogue between philosophy and science,” Schneider writes. “The science of emerging technologies can challenge and expand our philosophical understanding of the mind, self, and person. Conversely, philosophy sharpens our sense of what these emerging technologies can achieve: whether there can be conscious robots, whether you could replace much of your brain with microchips and be confident that it would still be you, and so on.”

1 COMMENT

  1. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness, in a parsimonious way, based only on further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
