This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
AI is frequently explained using the categories artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super-intelligence (ASI). Despite providing nothing of real value, this strange conceptual framework finds its way into many discussions. If you are unfamiliar with these categories, consider yourself lucky and move on to another, more consequential article. If you are not so lucky, I invite you to keep reading.
First and foremost, bemoaning categorizations — as I am about to do — has limited value because categories are arbitrarily similar or distinct depending on how we choose to classify things. For example, the Ugly Duckling Theorem demonstrates that a swan and a duckling can be made to look as alike as two swans if we are free to manipulate which properties we compare. All differences are meaningless unless we have some prior knowledge about which differences matter. Alas, this article will unpack these suspicious categories from a business perspective.
Artificial narrow intelligence (ANI) is often conflated with weak artificial intelligence. John Searle, philosopher and professor at the University of California, Berkeley, explained in his seminal 1980 paper, “Minds, Brains, and Programs,” that weak artificial intelligence would be any solution that is both narrow and a superficial look-alike to intelligence. Searle explains that such research would be helpful in testing hypotheses about segments of minds but would not be minds. ANI reduces this by half: it allows researchers to focus on the narrow and superficial while ignoring hypotheses about minds. In other words, ANI purges intelligence and minds and makes artificial intelligence “possible” without accomplishing anything. After all, everything is narrow, and if you squint hard enough, anything is a superficial look-alike to intelligence.
Artificial general intelligence (AGI) is the idealized solution many conceive when thinking about AI. While researchers work on the narrow and superficial, they talk about AGI, which represents the single story of AI, dating back to the 1950s, with a revival in the past decade. AGI implies two things about a solution that should not apply to business-centric problem-solving. First, a program has the general aptitude for human intelligence (perhaps all human intelligence). Second, an AGI is a general problem solver or a blank slate, meaning that any knowledge of a problem is rhetorical and independent of a strategy for solving that problem. Instead, the knowledge depends on some vague, ill-defined aptitude relating to the multidimensional structure of natural intelligence. If that sounds ostentatious, it’s because it is.
Artificial super-intelligence (ASI) is a by-product of accomplishing the goal of AGI. A commonly held belief is that general intelligence will trigger an “intelligence explosion” that rapidly yields super-intelligence. It is thought that ASI is “possible” due to recursive self-improvement, the limits of which are bounded only by a program’s mindless imagination. ASI accelerates to meet and quickly surpass the collective intelligence of all humankind. The only problem for ASI is that there are no more problems. When ASI solves one problem, it demands another with the momentum of Newton’s Cradle. An acceleration of this sort will ask itself what is next ad infinitum until the laws of physics or theoretical computation set in.
The University of Oxford scholar Nick Bostrom claims we have achieved ASI when machines are more intelligent than the best humans in every field, including scientific creativity, general wisdom, and social skills. Bostrom’s depiction of ASI has religious significance. Like their religious counterparts, believers in ASI even predict specific dates when the Second Coming will reveal our savior. Oddly, Bostrom can’t explain how to create artificial intelligence. His argument is regressive and depends upon itself for its explanation. What will create ASI? Well, AGI. Who will create AGI? Someone else, of course. AI categories suggest a false continuum at the end of which is ASI, and no one seems particularly deterred by their ignorance. However, fanaticism is a doubtful innovation process.
Part of our collective problem when talking about AI is that we entrench our thinking in prevalent but useless dichotomies. False dichotomies create an artificial sense that there is an alternative. ANI, AGI, and ASI suggest a false balance among various technologies by presenting multiple sides of an argument that don’t exist. Even if we accept the definition of ANI and ignore its triviality, there is nothing persuasive about AGI or ASI. It is odd to evaluate today’s technology against something that will never exist, then dress today’s technology up with a catchier name like ANI. We do not compare birds to griffins, horses to unicorns, or fish to sea serpents. Why would we compare (or scale) computation to human intelligence or the intelligence of all humans?
Any explanation that includes AGI or ASI distorts reality. Anchoring is a cognitive bias in which an individual relies too heavily on an initial piece of information (known as the “anchor”) when making decisions. Studies have shown that anchoring is challenging to avoid, even when looking for it. Even if we recognize AGI and ASI as significantly wrong or misplaced, they can still distort reality and create misalignments. We must not be fooled by a false dichotomy and a false balance.
AI is not three things. It is not something that scales by “intelligence” or fits neatly into three bins. These categories do not delineate specific technologies, highlight research areas, or capture some continuum where one starts by working on ANI and finishes with ASI. They’re nonsense. AI is one thing: a singular and unprecedented goal to recreate intelligence ex nihilo. However, this goal is permanently misaligned with business.
Business goals cannot be totalized and absorb everything around them because corporate communication, which includes all strategies, is only effective when it can’t be misunderstood. Unless you plan to align your business with AI’s singular and unprecedented goal, you must be mindful when calling your goals AI, since you cannot simply say “AI” nowadays and expect to be understood. As we call more and more things “AI,” the task of communicating purpose and direction becomes even more difficult. However, saying ANI, AGI, or ASI does not help matters. It hurts communication. The best advice for technical leaders is to avoid false continuums, false dichotomies, and false balance. As media critic Jay Rosen explains, borrowing a phrase from American philosopher Thomas Nagel, “false balance is a ‘view from nowhere.’”
 e.g., Tim Urban, “The AI Revolution: The Road to Superintelligence,” Wait But Why, January 22, 2015, waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.
 I’ve seen too many corporate and government presentations begin their explanation of AI with these categories. It never helps.
 J. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–24. https://doi.org/10.1017/S0140525X00005756.
 AI pioneer Marvin Minsky recalled his approach to AI in a 1981 New Yorker profile, saying, “I mustn’t tell the machine exactly what to do. That would eliminate the problem.” Elimination of a problem is a problem in AI research, which is also a problem for anyone seriously considering an AGI strategy.
 Nick Bostrom, “How Long Before Superintelligence?” International Journal of Futures Studies 2 (1998), http://www.nickbostrom.com/superintelligence.html.
 As a rule of thumb, the number of categories erected to explain an area of research is inversely proportional to the available knowledge about that research.
 In one study, Professors Fritz Strack and Thomas Mussweiler at the University of Würzburg gave participants an obviously wrong anchor. Participants were asked whether Mahatma Gandhi died before or after age nine, or before or after age 140. Neither anchor can be correct, yet when the two groups were later asked to estimate when he had died, they guessed significantly different average ages: fifty versus sixty-seven. Fritz Strack and Thomas Mussweiler, “Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility,” Journal of Personality and Social Psychology 73, no. 3 (1997): 437–46, https://doi.org/10.1037/0022-3514.73.3.437.