
This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.
Would true artificial intelligence be conscious and experience the world as we do? Will we lose our humanity if we install AI implants in our brains? Should robots and humans have equal rights? If I create an AI replica of myself, who will be the real me?
These are the kinds of questions we think about when watching science fiction movies and TV series such as Her, Westworld, and Ex Machina. But, as some scientists believe, they will become real-life issues one day. And we need to think about them now.
In her book Artificial You: AI and the Future of Your Mind, philosopher and cognitive scientist Susan Schneider dives deep into the ethical, philosophical, and scientific issues of superintelligent AI and its implications for humanity. Schneider discusses the pathways to artificial general intelligence, consciousness and AI, the singularity, and post-human intelligence. AI consciousness, in particular, stands out as one of the key recurring themes of Artificial You.
Like everything that has to do with the future of AI, Artificial You is speculation hedged with plenty of caveats and ifs because, as has been proven time and again, we’re very bad at predicting the future. But it is a much-needed reminder of how technology is reshaping our understanding of the mind, identity, and consciousness.
Is consciousness a requirement for AI?

Many books on artificial general intelligence discuss the notion of “consciousness,” the quality of being aware of one’s existence and being able to experience the world.
In the context of biological life, intelligence and consciousness emerge in parallel. Beings with more sophisticated minds are capable of having complex experiences. We attribute consciousness to humans and to a lesser degree to animals, but not to plants. We empathize with conscious beings, feel their suffering, and value their lives.
But does an artificial intelligence system need to be conscious? Should AI experience the world as living beings do?
In Artificial You, Schneider lays out the range of beliefs about AI and consciousness. At one end of the spectrum are “biological naturalists,” scientists and philosophers who believe that consciousness depends on the particular chemistry of biological systems. On this view, even the most sophisticated forms of AI will be devoid of inner experience. So even if an AI system can engage in natural conversations, perform complicated surgeries, or drive a car with remarkable skill, that doesn’t mean it is conscious.
At the other end of the spectrum are “techno-optimists,” who believe consciousness is a byproduct of intelligence, and that a complex-enough AI system will inevitably be conscious. Techno-optimism holds that the brain can be broken down into logical components, and that those components can eventually be reproduced in hardware and software, giving rise to conscious, general problem-solving AI.
Schneider rejects both views and proposes a middle-of-the-road approach, which she calls the “Wait and See Approach.”
“Conscious machines, if they exist at all, may occur in certain architectures and not others, and they may require a deliberate engineering effort, called ‘consciousness engineering,’” Schneider writes in Artificial You.
Why general AI might or might not need consciousness
In biological life, consciousness serves another function: active thinking and problem-solving. But only a small percentage of our mental processing is conscious, Schneider argues in Artificial You.
For instance, walking, running, breathing, eating, buttoning a shirt, brushing our teeth, and many of the other tasks we accomplish do not require active thinking. Routine tasks, in short, don’t require conscious thought. Novel tasks and situations, on the other hand, require focus and deliberate computation. Think about when you were learning to ride a bike or drive a car. Think about driving on a familiar road as opposed to driving in a new neighborhood. Some scientists call these two modes of cognition System 1 and System 2 thinking.
So, if there’s an AI that already knows everything, does it require conscious thinking?
“A superintelligent AI, in particular, is a system which, by definition, possesses expert-level knowledge in every domain,” Schneider writes. “These computations could range over vast databases that include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What task would require slow, deliberative focus? Wouldn’t it have mastered everything already? Perhaps, like an experienced driver on a familiar road, it can use nonconscious processing.”
There are further reasons to believe that, even if machine consciousness is possible, scientists will want to avoid embedding it in their AI systems.
“If an AI company tried to market a conscious system, it may face accusations of robot slavery and demands to ban the use of conscious AI for the very tasks the AI was developed to be used for,” Schneider writes in Artificial You. “Indeed, AI companies would likely incur special ethical and legal obligations if they built conscious machines, even at the prototype stage.”
This can become a serious issue if commercial entities continue to dominate AI research. To avoid legal and ethical complications, companies may make deliberate decisions to create AI systems that lack consciousness.
On the other hand, there are reasons to believe that conscious AI systems could be beneficial. A conscious AI system might be empathetic and humane. “The value that an AI places on us may hinge on whether it believes it feels like something to be us. This insight may require nothing less than machine consciousness. For all we know, conscious AI may lead to safer AI,” Schneider writes.
Another argument in favor of conscious AI is the appeal such systems could have to consumers. “People will long for genuinely conscious AI companions, encouraging AI companies to attempt to produce conscious AIs,” Schneider says.
But we also don’t know the implications of becoming emotionally attached to an AI system that will outlive us and is built from a totally different substrate.
Testing AI consciousness
Over the decades, scientists have developed various methods to test the intelligence of AI systems. These tests range from general evaluations, such as the Turing test and the more recent Abstraction and Reasoning Corpus, to task-specific benchmarks, such as image-labeling datasets for computer vision systems and question-answering datasets for natural language processing algorithms.
But verifying whether an AI system is conscious is easier said than done, as Schneider discusses in Artificial You. Consciousness is not a binary, present-or-absent quality. There are different levels of consciousness, and no single test can cover them all.
For instance, a chimpanzee or a dog won’t pass a language test, but does that mean they totally lack consciousness? Likewise, humans with certain disabilities might not be able to pass tests that most other humans find trivial, but it would be horrendous to conclude that they’re not conscious. Is there any reason to treat advanced AI any differently?
“There cannot be a one-size-fits-all test for AI consciousness; a better option is a battery of tests that can be used depending on the context,” Schneider writes.
In her book, Schneider describes several tests that can give a reasonable answer to the AI consciousness problem. She also stresses that the work is not over: we still need to research and develop AI consciousness tests and think through middle-way scenarios and their possible implications.
“A precautionary stance suggests that we shouldn’t simply press on with the development of sophisticated AI without carefully gauging its consciousness and determining that it is safe, because the inadvertent or intentional development of conscious machines could pose existential or catastrophic risks to humans—risks ranging from volatile superintelligences that supplant humans to a human merger with AI that diminishes or ends human consciousness,” Schneider warns.
The conversation doesn’t end here
Artificial You touches on many more topics, including human-machine mergers, AI singularity, and the control problem. These are issues that are slowly moving from the realm of fiction into reality.
It will be a refreshing read for anyone who wants to look past the current focus on optimizing machine learning algorithms and take a glimpse at the bigger picture of the philosophy of AI.
“At the heart of this book is a dialogue between philosophy and science,” Schneider writes. “The science of emerging technologies can challenge and expand our philosophical understanding of the mind, self, and person. Conversely, philosophy sharpens our sense of what these emerging technologies can achieve: whether there can be conscious robots, whether you could replace much of your brain with microchips and be confident that it would still be you, and so on.”