A Tale of Two “AI” Companies


This tale begins with a detour to the 1970s. At that time, Terry Winograd was at MIT and rapidly becoming a central figure in artificial intelligence. Winograd was programming computers to understand language. His program SHRDLU was considered a successful demonstration of artificial intelligence because it translated words into short programs that guided virtual objects within a virtual space.[1]

SHRDLU used rules about grammatical structure to model language and facts about its microworld to move boxes and triangles.[2] For example, SHRDLU would move virtual objects in response to commands such as “put the blue pyramid on the red block.” The problem with microworlds like SHRDLU’s is that they ignore the complexity of the real world. Therefore, they do not generalize to other worlds, much less the real one. Baseball and Lunar, two question-answering systems of the same era, failed to generalize in the same way: they answered questions about baseball games, not baseball players, and about moon rocks, not all rocks.[3], [4]
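To make this concrete, here is a minimal sketch of a blocks-world interpreter in the spirit of SHRDLU. It is illustrative only: the real system was written in Micro-Planner and Lisp with a far richer grammar, and the `World` class and `execute` method here are my own invention.

```python
# A toy blocks-world interpreter in the spirit of SHRDLU (illustrative only;
# the real system used Micro-Planner/Lisp and a far richer grammar).
import re

class World:
    def __init__(self):
        # The entire "world" is a closed set of facts: object -> what it sits on.
        self.on = {"blue pyramid": "table", "red block": "table"}

    def put_on(self, obj, target):
        # Moving an object is just rewriting one fact.
        self.on[obj] = target

    def execute(self, command):
        # One hard-coded pattern stands in for SHRDLU's grammar rules.
        m = re.match(r"put the (.+) on the (.+)", command.lower())
        if not m:
            raise ValueError("I don't understand.")  # anything outside the microworld fails
        self.put_on(m.group(1), m.group(2))

world = World()
world.execute("put the blue pyramid on the red block")
print(world.on["blue pyramid"])  # -> "red block"
```

Everything outside the hard-coded pattern and the fact table simply does not exist for the program, which is why such systems fail to generalize.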

Conventional wisdom held that to scale these answering systems, researchers would have to supply them with additional context about the world. In addition to rules about language and collections of facts, these systems would require the correct collection of axioms to describe countless situations. The reason is that if “facts” are represented inside a machine but the world outside the machine changes, how does a program know which facts remain valid or what knowledge needs to be changed? In 1969, John McCarthy (one of the field’s founding members), writing with Patrick Hayes, called this the frame problem.[5]
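A contrived sketch (my own construction, not McCarthy’s situation-calculus formalism) shows the shape of the problem: a derived fact goes stale the moment the world changes, and nothing in the program says which facts an action leaves untouched.

```python
# Toy illustration of the frame problem (a contrived example, not McCarthy's
# formalism): derived facts go stale when the world changes.
facts = {"box_location": "room_a", "light_in_room_a": True}

# A derived "fact" recorded while the box was in room A:
facts["box_is_visible"] = facts["light_in_room_a"]

# The world changes: the box is moved.
facts["box_location"] = "room_b"

# Which remaining facts are still valid? The program has no way to know that
# "box_is_visible" depended on the box being in room A. Writing an axiom for
# every fact left unchanged by every action is exactly what does not scale.
print(facts["box_is_visible"])  # still True, though nothing guarantees it
```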

At the time, Marvin Minsky, the field’s unofficial leader, believed researchers should mitigate the frame problem by developing “frames.”[6], [7] Consequently, frames became a standard data structure used by researchers to divide knowledge into relevant substructures representing “stereotyped situations.” The problem with this approach is that to select the relevant frame, one must already recognize the situation, which requires knowing something about the world, not about frames or facts. This is the trouble with applying first-order logic to the world instead of understanding the world directly: such logic assumes the world is fixed and unchanging. Without directly interacting with the real world, the approach collapses into an infinite regress of arbitrary frames.
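A frame is easy to mimic as a dictionary of slots with default values. The sketch below is my own minimal rendering, not Minsky’s 1974 notation, and it also shows where the regress bites: selecting the right frame already presupposes knowing which situation you are in.

```python
# A minimal frame: named slots with defaults for a stereotyped situation
# (my rendering, not Minsky's notation).
restaurant_frame = {
    "situation": "restaurant",
    "slots": {
        "host": "greets you at the door",
        "menu": "lists dishes and prices",
        "payment": "after the meal",
    },
}

fast_food_frame = {
    "situation": "fast food",
    "slots": {
        "menu": "posted above the counter",
        "payment": "before the meal",
    },
}

def interpret(situation, frames):
    # The regress: picking the right frame requires already knowing the
    # situation -- knowledge the frames themselves cannot supply.
    for frame in frames:
        if frame["situation"] == situation:
            return frame["slots"]
    return None

print(interpret("fast food", [restaurant_frame, fast_food_frame])["payment"])
# -> "before the meal"
```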

Despite these persistent obstacles, Winograd continued to work in artificial intelligence for a brief time after moving to Stanford. However, he soon concluded that frames were a dead end and that the field’s goals were misplaced.[8] Winograd realized that the success of “communication depends on the real intelligence on the part of the listener,” not on a computer. He believed “that there are many other ways of communicating with a computer that can be more effective, given that it doesn’t have intelligence.”[9]

Since his departure, Winograd has focused on what technology means for people, not on what a machine can do. He steers PhD candidates toward formalizing relevance to solve real-world problems rather than approaching relevance through the frame problem. For example, one of the doctoral students Winograd advised was Larry Page, who focused on formalizing relevance on the internet.[10] Page, together with Sergey Brin, developed the algorithms that later became Google, a tiny startup with a paltry current market capitalization of $1.5 trillion.[11]
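Page’s insight was to formalize relevance as a property of the Web’s link structure rather than of pages in isolation. Below is a minimal power-iteration sketch of PageRank; the toy graph and function names are mine, though the damping factor of 0.85 follows the original paper.

```python
# Minimal PageRank by power iteration (toy graph; damping factor 0.85 as in
# the original paper). A page is relevant if relevant pages link to it.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            # Each page shares its rank equally among the pages it links to.
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of three pages (every page has at least one outgoing link,
# so dangling nodes are ignored here for simplicity).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))  # "c" ranks highest: it collects links from a and b
```

Here “c” ranks highest because the pages linking to it carry rank of their own; that recursive notion of relevance is precisely what Page formalized.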

Despite the explosion of internet companies during the dot-com boom, Page and Brin had no strong desire to build an internet company. More than anything, they found the problem of relevance in search queries interesting for its own sake and believed beneficial connections were awaiting discovery and organization. In time, Google found a better way to search the Web, one that would benefit everyone. Their first mission statement, “to organize the world’s information and make it universally accessible and useful,” was high-minded. However, it inspired and directed staff toward a problem of societal significance.

Consider Webmind, which launched around the same time as Google. Its founder was interested in creating “an intelligent digital mind that knows it’s digital and uses the peculiarities of its digitality to its best advantage.”[12] Unlike Google, Webmind was much more interested in exploiting the ripe investment environment of the dot-com boom to fund its foolish devotion to a research project. Focusing on what a machine can do, rather than on what technology means for people, turns an idealized goal into a liability.

The point is simple. Technical leaders cannot value what a machine can do over what technology means for people. They must focus on solving a market-relevant problem. Despite what Ash Fontana advocates in his book The AI-First Company, you cannot be solution-first and value technology over people.[13]

That is what puzzles me about Google’s new “AI-First” strategy. It is revisionist, arriving eighteen years after the company was incorporated. Naturally, companies need to do their best to adapt, and changing a mission statement is one way to change a company’s direction. However, Google’s strategy is slippery because it lacks the problem-specific direction contained in Google’s rich intellectual history, a history that dates back to Winograd.


[1] SHRDLU takes its name from a sequence of keys on a Linotype machine (“ETAOIN SHRDLU”).

[2] While modern systems (e.g., GPT, BERT, LaMDA) model language directly rather than through rules about grammatical structure, they still attempt to make text stand in for reality rather than understanding the world directly.

[3] Green, B., Wolf, A., Chomsky, C., & Laughery, K. (1961). Baseball: An Automatic Question-Answerer. In Papers Presented at the May 9-11, 1961, Western Joint IRE-AIEE-ACM Computer Conference (pp. 219–224). Association for Computing Machinery.

[4] Woods, W. A. (1978). Semantics and Quantification in Natural Language Question Answering. Advances in Computers, 17, 1–87.

[5] McCarthy, J., & Hayes, P. J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. Machine Intelligence, 4, 463–502.

[6] Despite working with Minsky at MIT, Winograd believed frames were the wrong approach to relevance.

[7] Proposed by Minsky in a 1974 article titled “A Framework for Representing Knowledge.”

[8] https://www.youtube.com/watch?v=oW5la8ZZJCE

[9] Moggridge, Bill. Designing Interactions. Cambridge (Mass.): MIT Press, 2007.

[10] In 2002, Winograd took a sabbatical and spent time at Google as a visiting researcher.

[11] Dreyfus, H.L. A History of First Step Fallacies. Minds & Machines 22, 87–99 (2012). https://doi.org/10.1007/s11023-012-9276-0

[12] https://www.goertzel.org/benzine/Bankruptcy.htm, https://www.goertzel.org/papers/SingularityPath.htm

[13] Fontana adds, “AI-First companies were the first—and are still the only—trillion-dollar companies.” (pg. 1) Unfortunately, none in the four-comma club were AI-First companies, and it is revisionist to make any claim to the contrary. They may be AI-Next companies, but if they are not careful, they could be AI-Last companies.
