Wisdom beyond AI doom and acceleration

Image generated with Bing Image Creator

We need more business- and policy-centric books on artificial intelligence rather than hype and fantasy. So, I was excited to read Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher’s description of The Age of AI: And Our Human Future as “an essential roadmap to our present and our future.” Despite high hopes and a cadre of high-minded authors, The Age of AI raises troubling questions and props them up with doubtful philosophical explanations.

I know what you must be thinking: philosophical analysis sounds boring, and it usually is. However, it is necessary here because the authors try desperately to convince readers that AI puts human identity at risk. (p. 147) They ask, “If AI thinks, or approximates thinking, who are we?” (p. 20) To entertain that hypothetical, one must suppose that AI could think or that “approximating thinking” means something. This kind of equivocation does not seem to bother the authors. Still, the difference between “thinking” and “approximating thinking” is like the difference between running a marathon and merely wearing the 26.2 sticker on your car. Even if you accept that AI thinks or approximates thinking, the question “who are we?” does not logically follow, nor do the authors explain why it should. Such a red herring may give the authors a purpose, namely, to save us. However, discussing such a risk under such vague auspices is deranged.

To answer the question, “If AI thinks… who are we?” the book cites the victory of AlphaZero and adds that “no human has ever beaten it [AlphaZero].” (p. 11) While the accomplishment is impressive, any claim that AI performs better than humans on an arbitrary task is absurd on its face because humans are not (in this case) chess-playing programs. I dislike reducing humans to tasks because it obscures what humans are and disguises the limitations of artificial intelligence. By emphasizing the intersection rather than the union of humans and machines, we overlook the unique attributes, contributions, and limitations of both. Besides, describing machines in human terms is not evidence that machines have human characteristics like thinking. Ludwig Wittgenstein wrote about the confusion caused by describing machines in human terms in The Blue and Brown Books, famously noting that asking whether a machine thinks “seems somehow nonsensical. It is as though we had asked, ‘Has the number 3 a color?’”


According to the authors, we are reasoning minds. (p. 20) The idea that humans are reasoning minds dates back to Plato, whose ambitious view was that an intellectual must subordinate emotion to reason. Similarly, philosophers like René Descartes believed truth could be obtained only by reason. However, it is inaccurate to say without qualification that humans are “reasoning minds.” Certainly, we cannot expect any random person on the street to perform normatively correct reasoning consistently. Jonathan Haidt’s research suggests that the bulk of human thought is post-hoc rationalization rather than reasoning. Martin Heidegger believed that the notion that humans are reasoning minds, while influential in the history of philosophy, overlooks crucial aspects of our existence. Heidegger argued that defining humans merely as reasoning entities narrows our understanding of the human condition. For him, we are not detached minds or rational agents; we are deeply immersed in the world, with experiences, emotions, and a sense of being that rationality cannot fully capture.

Reasoning is important, but it does not motivate one to act. As G.K. Chesterton noted, the merely rational man will not marry, and the merely rational soldier will not fight. The force that propels one to action — whether love, compassion, anger, pride, envy, fear, or desire — is passion.

David Hume famously wrote that “reason is, and ought only to be the slave of the passions.” The point is that we are not merely reasoning minds. We have goals, passions, and bodies. AI will never supplant the emotional provocations unique to humans that motivate us to reason, solve problems, seek justice, acquire knowledge (including the desire to create AI), and connect with others.

Besides, machine learning does not reason at all. Machine learning discovers patterns in data; it is exclusively inductive, interpolating signals found in the training data. Even large language models cannot reason. Friends of AI will disagree: “AI is not perfect, but neither are humans.” Or, “Humans make mistakes. Does that mean that humans lack reasoning abilities?” I hate this argument. It is a kind of whataboutism, formally an example of tu quoque, a form of the ad hominem fallacy. It means “you too”: responding to an allegation by saying, in essence, “you do the same thing,” while ignoring the alleged flaws. If humans cannot reason (and many cannot), that would not be a defense of machine learning. Human reasoning is sometimes sloppy, but it is impressive when developed and perfected.
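The interpolation point can be made concrete with a toy sketch (my own illustration, not from the book): a nearest-neighbor “learner” trained on f(x) = x² over [0, 10] does fine inside its training range, but outside that range it can only echo the closest pattern it has already seen.

```python
# Illustrative sketch: a 1-nearest-neighbor "learner" on f(x) = x^2.
# Inside the training range it interpolates; outside, it merely
# repeats the nearest memorized example rather than reasoning.

def fit_nearest_neighbor(xs, ys):
    data = list(zip(xs, ys))
    def predict(x):
        # Return the label of the closest training point.
        return min(data, key=lambda p: abs(p[0] - x))[1]
    return predict

xs = [x / 10 for x in range(0, 101)]          # 0.0 .. 10.0
model = fit_nearest_neighbor(xs, [x * x for x in xs])

print(model(3.14))   # close to 3.14**2 ≈ 9.86 (interpolation)
print(model(20.0))   # stuck at 100.0, far from 400 (extrapolation fails)
```

The failure at x = 20 is not a bug to be patched; it is the inductive character of pattern-matching itself, which is the sense in which the essay says machine learning “does not reason.”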

Unsubstantiated claims do not slow the authors. They claim that the “AI age needs its own Descartes” to explain “what is being created and what it will mean for humanity.” The authors deliver on this metaphorical statement by literally offering the reader the philosophical work of René Descartes. (pp. 177–178) Unfortunately, Descartes died nearly 400 years ago. He will not tell us anything about what AI means for humanity or what is being created. One gets a sinking feeling that the literary device is poorly executed by design and that the authors intend to fill the void themselves.


It is unsurprising to find Descartes summoned by pundits. After all, artificial intelligence is effectively a Cartesian research project. However, Descartes, familiar with the automata and mechanical toys of the 17th century, would not have considered AI capable of thinking. The “I” in Descartes’s Cogito argument (I think, therefore I am) requires thinking to have a subject, treating human thought as non-mechanical and non-computational. In other words, Descartes is a strange bedfellow: he is an unlikely advocate for AI, and not only because he has been dead for centuries.

Descartes extended his Cogito argument to God. He thought that what is more perfect could not arise from what is less perfect, so his idea of God must have been put there by something more perfect than himself: God. This self-congratulatory argument, a version of the conceivability argument, tells us more about the person conceiving than about reality. Just because Descartes can conceive of something more perfect than himself does not make that thing real or perfect. The Age of AI falls into the same solipsistic trap by making a modified conceivability argument. Like much of AI orthodoxy, the authors assume that an intelligence more perfect than human intelligence must exist because it can be thought into existence. The authors even gaslight nonbelievers, calling those who reject AI “like the Amish and the Mennonites.” (p. 154) Ouch.

Strangely, the authors explicitly reject the only “values” they discuss. For example, they claim Descartes meant to undermine religion by “disrupting the established monopoly on information, which was largely in the hands of the church.” (p. 20)

This is true; Descartes did continue a movement, begun during the Renaissance, toward a human-centered perspective of the world rather than one dominated by god(s). However, Descartes also meant to support the existence of God. He was a devout Christian and believed that reason and faith could coexist. As far as I can tell, the book does not prefer any value (or culture) over any other and promotes only reason, in order to ask the self-aggrandizing question, “Who are we?” While reasoning is necessary for a civilized society, it is not a value. Reasoning is a process.

The irony is that despite nineteen references to the “Enlightenment,” the authors construct an anti-Enlightenment argument: one that turns away from humanity. The Age of AI suggests all human potential is inert and at risk from artificial intelligence; by asking “Who are we?” it denies that humans are exceptional. The book embraces what it warns against, reorienting us toward a machine and a zero-sum fallacy. We must embrace the truth that humans are unique, not transfer all uniqueness and potential to AI. We don’t need a Descartes. We need Friedrich Nietzsche. Nietzsche famously celebrated human potential and urged individuals to realize it; he was also renowned for his absolute loathing of mediocrity. We must elevate human reasoning, not assume that humans have no potential. We should focus on developing better tools, including artificial intelligence, rather than organizing international conferences (or writing books) to investigate threats to humanity.


The truth is that machine learning is just a tool, and humans possess a unique ability to use tools. Benjamin Franklin is well-known for his observations on human ingenuity and tool use; in cognitive science and psychology, it is well established that humans can see objects as tools with multiple uses and apply them in novel ways. Louis Leakey emphasized the importance of tool use in human evolution, and anthropologists have highlighted the role of tool-making and tool-using in distinguishing humans from other species, identifying our ancestors’ ability to invent and repurpose tools as a critical factor in human development. Thomas Hobbes suggested that animals start with a desired effect and discover a sufficient instrument, whereas humans view everything as a potential tool and imagine all of its possible effects, and that imaginative leap is the key. AI offers, at best, ambiguous prescriptions about what it should solve and requires human inquiry to determine what is good and what needs solving. AI is a contemporary extension of the human tradition of using tools to extend our capabilities, amplifying human potential in processing information, solving complex problems, and generating new ideas. Integrating AI into various domains is not about replacing human effort but about enhancing it, allowing for more efficient workflows, deeper insights, and the exploration of previously inconceivable solutions.

The belief that AI shares essential human qualities is so widespread nowadays that it is no longer viewed as an assumption requiring skepticism. We use sentience and consciousness to describe software while creating metaphorical contrivances like “the brain is like a computer” or “the mind as the software of the brain.” But humans are not computers, and computers are not human. We must avoid treating computers in ways they will never earn and treating humans in ways they will never deserve. This doesn’t mean abandoning technology or human potential, only the zero-sum fallacies that invariably follow this thinking. 

“If AI thinks, or approximates thinking, who are we?” is a question we should not take seriously. The most significant problem with the question is that an imagined threat justifies any action when framed in such a manner. The most significant hazard to humanity is not “AI” but the power we give someone else to answer the question, “Who are we?”
