By Rich Heimann
How different would we think about artificial intelligence if AI pioneers Allen Newell and Herbert Simon had won support for the seemingly less hype-prone term of “complex information processing,” rather than “artificial intelligence,” which was ultimately adopted by the field?
On the surface, this thought experiment is interesting because it asks whether artificial intelligence is intrinsically hyped. That is, is the name alone enough to get us in trouble? This was the focus of a recent Wall Street Journal article in which columnist Christopher Mims asks experts in artificial intelligence whether the name alone produces confusion and hype.
Mims quotes Melanie Mitchell, a professor at the Santa Fe Institute, who quips, “What would the world be like if it [AI] was called that [complex information processing] instead?” Unfortunately, Mims uses Mitchell’s thought experiment as a punchline at the very end of the article, not as a counterfactual. Therefore, we will explore the facts surrounding the rhetorical question and ask whether the world would be better off without the name “artificial intelligence.”
Newell and Simon were the only two participants at the 1956 Dartmouth Workshop (considered the founding event of artificial intelligence as a field) who had developed a rudimentary “intelligent” machine. Their experience is viewed as grounded in practice, and consequently they are often seen as more pragmatic than their peers. Complex information processing, as an alternative to artificial intelligence, tends to be viewed through this prism. Perhaps it is true that Newell and Simon understood how tricky “intelligence” would be to define or solve. To be sure, artificial intelligence is hyped because of its connection to natural intelligence and research on tasks performed by humans.
To determine whether the world would be a better place with complex information processing, we must first ask whether complex information processing was envisaged any differently than artificial intelligence, and whether these two men treated artificial intelligence differently than their peers did. Alas, it does not appear that Newell and Simon thought complex information processing was less hype-prone, or that they treated artificial intelligence any differently from their peers.
Simon summarized his research goals in aspirational terms such as “to make us [humans] a part of nature instead of apart from it.” Aside from the elegant wordplay, Simon’s goal was to discover and ultimately build an intelligent machine that would exceed human intelligence and replace the capacities of the human brain. To Simon, solving intelligence was not merely an intellectual challenge but a spiritual pursuit.
John Searle, a philosopher and professor at the University of California, Berkeley, explains how Simon would say that we already have machines that can literally think. According to Searle, there was no question of waiting for some future machine, because Simon celebrated existing computers as already having thoughts in the same sense that humans do.
Newell and Simon were computationalists, and true believers at that. They believed that intelligence is information processing. They did not merely hold that a computer is like a brain, or that some aspects of cognition involve information processing; they believed, literally, that cognition is information processing.
Newell and Simon accepted technical descriptions of a computer as philosophical arguments. Simon believed that the neurological functions of the brain and the functions of computers could be shown to process information in the same way. He was wrong. Simon also explained how computers have properties of the mind, including “thoughts.” However, thoughts are precisely what computers do not have.
Complex information processing was meant to inspire a view of the brain as a computer, very much like artificial intelligence. This view expresses a falsehood: namely, that any differences between humans and machines are small and temporary. The counterfactual question of what the world would be like if AI had been called complex information processing is fallacious, because complex information processing is just another inimical theory of the mind.
One thing is sure: the Dartmouth Workshop produced copious amounts of hype that still permeate today. Perhaps the reason for the hype is that funding was required. Funding required drumming up public and political support, which required an exciting vision, and Simon and the other founders supplied one. Perhaps hype was a consequence of the very human desire for personal prestige, and naming the field was an extension of that desire. Perhaps unlocking the mystery of human intelligence is inherently hyped. Perhaps AI hype is both a bug and a feature. Perhaps we should stop caring so much and focus more on solving problems that matter to people outside AI.
What the world would be like if Newell and Simon had won support for complex information processing is impossible to know. What makes thought experiments like this one banal is that such a counterfactual does not matter. What makes history uniquely historic is that it does not occur differently. In other words, a different name in the past will not produce different results in the present. While I am not convinced that complex information processing would have been any less hyped, I am also not convinced that less hype would have been good for the field. The most we can hope for are better results in the future with the names we already have.
This thought experiment highlights something else: the power insiders believe names contain. Names represent an important part of the culture and values of insiders, who argue about the name of the field, the names of solutions, whether a solution deserves a name, and ultimately whether the field deserved a better name in the past. Perhaps such a thought experiment should include a counterfactual that encourages us to stop naming solutions altogether. After all, arguing about names does not help solve problems. Our solutions do not require names to impact the lives of others.
About the author
Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, what it is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.