The truth about the AI alphabet soup (ANI, AGI, ASI)


This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

AI is frequently explained using the categories artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super-intelligence (ASI).[1] Despite providing nothing of real value, this strange conceptual framework finds its way into many discussions.[2] If you are unfamiliar with these categories, consider yourself lucky and move on to another, more consequential article. If you are unlucky, I invite you to keep reading.

First and foremost, bemoaning categorizations (as I am about to do) has limited value, because categories are arbitrary: things appear similar or distinct depending on how we choose to classify them. For example, the Ugly Duckling Theorem demonstrates that a swan and a duckling can be made to look as alike, or as different, as we please if we are free to manipulate the properties used for comparison. All differences are meaningless unless we have some prior knowledge about which differences matter. Nevertheless, this article will unpack these suspicious categories from a business perspective.
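For the curious, here is a minimal sketch of the theorem’s point (the objects and the brute-force enumeration are merely illustrative, not the original proof): if every possible Boolean feature is counted equally, every pair of distinct objects shares exactly the same number of features, so no pair is objectively more similar than any other.

    from itertools import combinations

    # Treat every possible subset of objects as one candidate "feature."
    objects = ["swan", "duckling", "goose", "heron"]
    features = [s for r in range(len(objects) + 1)
                for s in combinations(objects, r)]

    # Count how many features each pair of objects has in common.
    for a, b in combinations(objects, 2):
        shared = sum(1 for f in features if a in f and b in f)
        print(a, b, shared)  # every pair shares the same count: 2**(len(objects) - 2) = 4

The toy example simply restates the point: similarity judgments mean something only after we decide, in advance, which properties count.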

Artificial narrow intelligence (ANI) is often conflated with weak artificial intelligence. John Searle, philosopher and professor at the University of California, Berkeley, explained in his seminal 1980 paper, “Minds, Brains, and Programs,” that weak artificial intelligence would be any solution that is both narrow and a superficial look-alike of intelligence. Searle explains that such research would be helpful in testing hypotheses about segments of minds but would not itself be a mind.[3] ANI cuts this in half: it lets researchers focus on the narrow and superficial while ignoring hypotheses about minds altogether. In other words, ANI purges intelligence and minds and makes artificial intelligence “possible” without doing anything. After all, everything is narrow, and if you squint hard enough, anything is a superficial look-alike of intelligence.

Artificial general intelligence (AGI) is the idealized solution many conceive when thinking about AI. While researchers work on the narrow and superficial, they talk about AGI, which represents the single story of AI, dating back to the 1950s, with a revival in the past decade. AGI implies two things about a solution, neither of which should apply to business-centric problem-solving. First, the program has a general aptitude for human intelligence (perhaps all human intelligence). Second, an AGI is a general problem solver, or a blank slate, meaning any knowledge of a problem is rhetorical and independent of any strategy for solving it.[4] Instead, the knowledge depends on some vague, ill-defined aptitude relating to the multidimensional structure of natural intelligence. If that sounds ostentatious, it’s because it is.

Artificial super-intelligence (ASI) is a by-product of accomplishing the goal of AGI. A commonly held belief is that general intelligence will trigger an “intelligence explosion” that rapidly produces super-intelligence. ASI is thought to be “possible” thanks to recursive self-improvement, whose limits are bounded only by a program’s mindless imagination. ASI accelerates to meet and then quickly surpass the collective intelligence of all humankind. The only problem for ASI is that there are no more problems: when ASI solves one problem, it demands another with the momentum of a Newton’s cradle, asking itself what is next, ad infinitum, until the laws of physics or the limits of theoretical computation set in.

The University of Oxford scholar Nick Bostrom claims we will have achieved ASI when machines are more intelligent than the best humans in every field, including scientific creativity, general wisdom, and social skills.[5] Bostrom’s depiction of ASI has religious significance. Like their religious counterparts, believers in ASI even predict specific dates when the Second Coming will reveal our savior. Oddly, Bostrom cannot explain how to create artificial intelligence. His argument is regressive: it depends upon itself for its own explanation. What will create ASI? Well, AGI. Who will create AGI? Someone else, of course. The AI categories suggest a false continuum at the end of which sits ASI, and no one seems particularly deterred by their ignorance of how to get there. However, fanaticism is a doubtful innovation process.

Part of our collective problem when talking about AI is that we entrench our thinking in prevalent but useless dichotomies.[6] False dichotomies create an artificial sense that there is an alternative. ANI, AGI, and ASI suggest a false balance among various technologies by presenting multiple sides of an argument that don’t exist. Even if we accept the definition of ANI and ignore its triviality, there is nothing persuasive about AGI or ASI. It is odd to evaluate today’s technology against something that will never exist, and odder still to give today’s technology a catchier name like ANI for the comparison. We do not compare birds to griffins, horses to unicorns, or fish to sea serpents. Why would we compare (or scale) computation to human intelligence or the intelligence of all humans?

Any explanation that includes AGI or ASI distorts reality. Anchoring is a cognitive bias in which an individual relies too heavily on an initial piece of information (known as the “anchor”) when making decisions. Studies have shown that anchoring is challenging to avoid, even when looking for it.[7] Even if we recognize AGI and ASI as significantly wrong or misplaced, they can still distort reality and create misalignments. We must not be fooled by a false dichotomy and a false balance.

AI is not three things. It is not something that scales by “intelligence” or fits neatly into three bins. These categories do not delineate specific technologies, highlight research areas, or capture some continuum where one starts by working on ANI and finishes with ASI. They’re nonsense. AI is one thing: a singular and unprecedented goal to recreate intelligence ex nihilo. However, this goal is permanently misaligned with business.

Business goals cannot be totalized and absorb everything around them, because corporate communication, which includes all strategies, is effective only when it cannot be misunderstood. Unless you plan to align your business with AI’s singular and unprecedented goal, be careful about calling your goals AI, since you cannot say “AI” nowadays and expect to be understood. As we call more and more things “AI,” communicating purpose and direction becomes even more difficult, and saying ANI, AGI, or ASI does not help matters; it hurts communication. The best advice for technical leaders is to avoid false continuums, false dichotomies, and false balance. As media critic Jay Rosen explains, borrowing a phrase from the American philosopher Thomas Nagel, false balance is a “view from nowhere.”

[1] e.g., Tim Urban, “The AI Revolution: The Road to Superintelligence,” Wait But Why, January 22, 2015, waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

[2] I’ve seen too many corporate and government presentations begin their explanation of AI with these categories. It never helps.

[3] J. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–24. https://doi.org/10.1017/S0140525X00005756.

[4] AI pioneer Marvin Minsky recalled his approach to AI in a 1981 New Yorker profile, saying, “I mustn’t tell the machine exactly what to do. That would eliminate the problem.” Elimination of the problem is a problem in AI research, and it is also a problem for anyone seriously considering an AGI strategy.

[5] Nick Bostrom, “How Long Before Superintelligence?” International Journal of Futures Studies 2 (1998), http://www.nickbostrom.com/superintelligence.html.

[6] As a rule of thumb, the number of categories erected to explain an area of research is inversely proportional to the available knowledge about that research.

[7] In one study, Professors Fritz Strack and Thomas Mussweiler at the University of Würzburg gave participants an obviously wrong anchor. Participants were asked whether Mahatma Gandhi died before or after age nine or before or after age 140. Neither of these options can be correct, but when the two groups were asked to suggest when they thought he had died, they guessed significantly different average ages of fifty versus sixty-seven. Fritz Strack and Thomas Mussweiler, “Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility,” Journal of Personality and Social Psychology 73, no. 3 (1997): 437–46, https://doi.org/10.1037/0022-3514.73.3.437.

7 COMMENTS

  1. In AI research, we talk quite differently about these ideas. After reading hundreds of papers, I don’t think I have read a single paper that uses these three terms.

    The categorization of specialized and general does come up, though. It is a prominent theme in Legg-Hutter intelligence, a definition developed by one of DeepMind’s cofounders (https://arxiv.org/abs/0712.3329).

  2. Thanks for the comment. Sadly, I can’t evaluate the papers you’ve read, which would only be a small minority of the material produced yearly. Still, my critique is more on the third culture in a Snow and Brockman sense and meant for a business audience, not directed toward academically-minded researchers.

    That said, you seem to concede the point. These ideas exist even if they go by different names. Still, the names aren’t my issue. My issue is with the ideas, which is why I steer business leaders away from these terms. Using your example, Legg and Hutter collate seventy-odd definitions of intelligence, yet their description provides little information about intelligence because they’re behaviorists. According to behaviorism, there is nothing to know about intelligence. Legg and Hutter believe real intelligence is in the environment, not the mind. Strange for a company named DeepMind. Not only is behaviorism a defunct theory of mind, it is also a strange goal for business-minded leaders (my audience). Thanks again.
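    For anyone who wants to see why I call the definition behaviorist: as I read the paper, the proposed measure of intelligence is roughly Υ(π) = Σ_μ 2^(−K(μ)) · V_μ^π, the expected reward V an agent π earns summed over all computable environments μ, with simpler environments (lower Kolmogorov complexity K) weighted more heavily. The score is entirely a matter of external performance; nothing in it refers to what, if anything, goes on inside the agent.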

  3. I completely get your point, and there is no clean definition, but AGI was founded (and the term coined) in an attempt to “buck the trend” away from AI that only works within a well-constrained domain. It used to be taken as a given that AI had to be significantly constrained and narrowed to have any hope of working. The push worked to some extent, as AI has become more AGI-like. So when people say “AGI,” it should mean “not limited to a well-constrained domain.” It’s hard to clearly define something that is changing rapidly.

  4. Thanks, Mike.

    “It’s hard to clearly define something that is changing rapidly.” Or is it easy to change something that is poorly defined?

    I concede that “AGI” addresses the vacuous nature of contemporary “AI” and distinguishes it from “ANI.” However, I reject the notion that it “bucks the trend.” It is the trend. AI vacillates among too many different and conflicting goals, lowering and raising expectations as it goes.

  5. Just to clarify, I distinctly remember Goertzel (whom most people cite as the coiner of the term) writing somewhere that AGI was supposed to “buck the trend” away from narrow AI, but that was written more than 20 years ago, so the fact that AGI IS now the trend is a good thing, although it came mostly on the heels of deep learning, not from progress within the relatively small group of AGI researchers. When DL was really gaining momentum, AGI researchers were somewhat disdainful of it.

    My definition of AGI is that most people expect some form of cognitive architecture for a system to count as AGI, or at least the expectation that the AGI structure could eventually scale up to a cognitive architecture (there is a good paper with requirements published at the AGI-22 conference; see Kralik’s work). Also, in my opinion, definitions of understanding, reasoning, and judging should be clear for something to count as AGI (less hand-waving and statistical magic, fewer open-ended claims that “my AGI thinks” or “understands” without qualification!).

    But ultimately, in line with your general attitude, it is just a matter of arguing about definitions. People only really care whether or not it appears human-like.

  6. I forgot to mention that I think the hallmark of AGI is that there is no cheat hidden in the design just for one domain. If it at least gets that far, the AI is on the way to AGI. An acid test of AI vs. AGI is whether or not it only works in some domains. But that is not a sufficient test, just a start.

  7. Yeah, Goertzel wrote a lot of nonsense, and he didn’t coin the term AGI. Legg suggested it to him; Goertzel merely popularized it. That’s beside the point. He didn’t buck a trend; he merely recapitulated the fluctuations of AI research between weak and strong artificial intelligence, which constantly move the goalposts to lower and raise expectations. The past sixty years of research have been underwritten by the movement of research interest between these poles.

    Lastly, while people may care whether or not “AI” appears human-like, they shouldn’t, because such a position is behaviorism and doesn’t explain intelligence.
