Futurism was an Italian art movement that began in the early twentieth century. The avant-garde movement exalted speed and action while reorienting society toward machines. The problem for most Italians was that there was no opting out of futurism: if you were not promoting it, you were against it. Today's futurism is not fascistic. While it shares the verbiage of Italian futurism (e.g., speed, movement, rejection of the past, and disruption), today's futurists want a credulous audience and more Twitter followers. The question, however, is the same: do they deserve it?
Raymond Perrault perfectly captures the emphasis futurists place on movement and speed. Perrault is one of the leaders of the so-called AI Speedometer for the AI Index project. While the AI Speedometer cannot explain what AI is or how to solve it, it tries to convince you that the rate of progress within artificial intelligence is significant enough to promise bigger achievements. However, measuring speed when you don't know the destination or how to get there is never worth the time and effort spent calculating it. Before speed can matter, friends of artificial intelligence need to agree on where they are, where they are going, and how good a given solution is to the disputed problem of intelligence. After all, knowing the speed of something is useless if you don't know your position, your destination, the distance to travel, or the time by which you must arrive; going fast, or faster, means next to nothing if you don't know where to go or how or when to arrive. Even when distance and time are known and fixed, speed that varies with research hurdles will not be enough to accomplish the goal, no matter how fast one goes. Like futurism, the AI Speedometer is, in effect, saying, "We're lost, but we're making good time!"
Instructional frivolity aside, scientific knowledge doesn't require a speedometer. Thomas Kuhn perfectly describes how scientific knowledge develops, dispelling the progress myth. Scientific knowledge is not a matter of movement, speed, action, or accumulation, nor is it spawned on a specific day. Scientific discovery does not happen in straight lines or on paved roads. It only sometimes moves forward, and it is futile to identify the exact date when a paradigm shift occurs. We don't track progress toward solving the Riemann hypothesis or P versus NP in computing because progress is zero until a breakthrough is achieved. Futurism distorts our senses by distorting our perception of time and progress.
The most popular futurist is Ray Kurzweil. His 2005 New York Times bestseller, The Singularity Is Near, promises readers an optimistic vision of the future by ameliorating any anxiety about death. This unimportant work of philosophical fiction hinges on the author's emphasis on speed and the rapid advance toward artificial intelligence (AI), his remedy for mortality. Kurzweil doesn't know how to solve AI, yet he predicts with precision that, at the speed we're going, someone else will by 2029. He expects artificial general intelligence to produce a singularity by 2045.
Kurzweil refers to his prediction as a technological singularity. According to the author, the technological singularity is an emergent property of artificial general intelligence, multiplying intelligence a billion-fold.
He adds that the singularity is near the "knee of the curve" of a significant trend growing exponentially. The knee of the curve is a point in time when "events erupt at an increasingly furious pace." However, there is no such thing as a "knee" of an exponential function. Even if a knee were to exist, it would be apparent only after, not before, such growth.
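The no-knee claim can be checked directly: an exponential is self-similar, so its growth ratio over any fixed interval is the same everywhere on the curve. A minimal sketch (the growth rate and interval below are arbitrary, illustrative choices):

```python
import math

# Self-similarity of exponential growth: for f(t) = e^(k*t), the
# growth ratio over any fixed look-ahead interval is a constant,
# no matter where you stand on the curve. Any apparent "knee" is
# an artifact of the plot's axis scale, not a property of the curve.
k = 0.5        # arbitrary growth rate (illustrative assumption)
delta = 1.0    # fixed look-ahead interval

def growth_ratio(t):
    return math.exp(k * (t + delta)) / math.exp(k * t)

ratios = [growth_ratio(t) for t in (0, 10, 50, 100)]
# Every ratio equals e^(k*delta): no point on the curve is privileged.
assert all(abs(r - math.exp(k * delta)) < 1e-9 for r in ratios)
```

Because every point looks the same as every other, declaring that we are "near the knee" is a statement about how the chart was drawn, not about the trend itself.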
The so-called trend is an aggregate of many movements lumped together and labeled the Law of Accelerating Returns, not something singular with a known relationship to a singularity. The "law," which is no law at all, rests on the charitable assumption that everything supports the author's predictions. However, one cannot lump all progress together and proclaim that everything is part of the same underlying trend toward the same goal. The philosopher and mathematician Yehoshua Bar-Hillel described this careless methodology as the "first step fallacy." He claimed that "ever since our first work on computer intelligence we [friends of artificial intelligence] have been inching along a continuum at the end of which is AI so that any improvement in our programs no matter how trivial counts as progress." The fallacy is treating the last step as inevitable and counting everything, no matter how trivial, as progress while ignoring any breakthroughs required. Limited success is not a "valid basis for predicting the ultimate success of one's project."
The most critical input to the Law of Accelerating Returns is Moore's Law, which predicts the doubling of computer processing power at periodic intervals. Unfortunately, progress toward artificial intelligence is not in the same category as extra computation or more data. Real AI is not a question of more computational power or data, because that framing ignores all the conceptual, theoretical, philosophical, ontological, and technological breakthroughs needed to achieve real artificial intelligence (if such a thing is possible). The real winner of Moore's Law is the author, because Moore's Law is visible; without it, the author would be pointing at nothing. However, the observable stuff is only a tiny fraction of what matters. The visible is easy to overanalyze, and the invisible is easy to ignore. Today, "Moore's Law, hence AI" is standard reasoning, but it accidentally underscores why predicting the future is so tricky.
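The doubling arithmetic itself is trivial, which is part of its appeal: compounding produces impressive multipliers on raw compute while saying nothing about the breakthroughs the argument quietly assumes. A sketch (the two-year doubling period is the textbook figure, used here purely for illustration):

```python
# Moore's-law-style compounding: capacity doubling every
# `doubling_period` years. The output is a multiplier on raw
# compute only; it is not a measure of conceptual progress
# toward intelligence.
def compute_multiplier(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

compute_multiplier(20)   # ten doublings -> 1024x more compute
```

A 1,024-fold increase in compute over twenty years is visible and easy to chart; the invisible breakthroughs the chapter describes do not appear anywhere in this arithmetic.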
Worse, the technological singularity is entirely unoriginal; it filches from I. J. Good, the British mathematician who concocted the "intelligence explosion." Good wrote that an ultraintelligent machine "could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind." Of course, this recapitulation doesn't make either argument especially cogent, since neither explains how to solve AI, and both require emergence to do all the heavy lifting. A forecast lacks predictive power if it can only predict something given emergent properties rather than analyzing the component trends.
Kurzweil isn’t alone. The first wide-scale popularizer of the technological singularity was science fiction author Vernor Vinge, who predicted in 1993 that the singularity would occur in 2030, with a time horizon of thirty-seven years and a maturation age of eighty-seven years old. In 2011, Kurzweil predicted singularity in 2045, with a time horizon of thirty-four years and a maturation age of ninety-seven. In 2016, Masayoshi Son, chairman and CEO of SoftBank, predicted the singularity in 2047, with a time horizon of thirty-one years and a maturation age of eighty-nine years old.
The fact that futuristic predictions reside within the lives of those who make them, even though forecasts with time horizons beyond ten years are rarely better than coin flips, highlights how self-serving futurism is for the prognosticators. These predictions reflect the futurist's desire to create the world they seek, not because they can influence or understand that world directly, but because it makes them feel good and less helpless. This is why claims land in the fifteen- to twenty-five-year futurist sweet spot. When predictions fall outside the sweet spot, they still stay within the prognosticator's lifetime. The Maes–Garreau point is "the latest possible date a prediction can come true and remain in the lifetime of the person making it."
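The Maes–Garreau point reduces to one line of arithmetic, which makes the self-serving pattern easy to see. A minimal sketch (the birth year and expected lifespan below are hypothetical, chosen only to illustrate the definition):

```python
# Maes-Garreau point: "the latest possible date a prediction can
# come true and remain in the lifetime of the person making it."
# Both inputs below are hypothetical, illustrative values.
def maes_garreau_point(birth_year, expected_lifespan=95):
    return birth_year + expected_lifespan

# A futurist born in 1950 who hopes to live to 95 can place the
# singularity anywhere up to the year 2045 and still be there
# to see it.
maes_garreau_point(1950)   # -> 2045
```

Run against the predictions above, the pattern holds: each forecast lands comfortably before its prognosticator's plausible Maes–Garreau point.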
Not only do futurists know nothing more than the rest of us about the future, they possess poorer judgment. Remember that all futurists must make oversized claims: it's in the job description (if one had to apply for such a job rather than merely adding it to their Twitter profile). If a futurist makes moderate, reasonable claims, their observations will seem self-evident in the present and, over some arbitrary amount of time, absurdly conservative. Consequently, claims are deliberately extreme and unreasonable; otherwise, the futurist is less of a futurist. If Kurzweil (or any futurist) proves correct, it will not reflect good judgment, because to predict such a rare event with confidence, one would have to ignore available information and rely on vanity. We must realize that such claims reflect poor judgment even if they come true.
The boldest prediction is that in the future, someone will still be trying to predict the future, only less effectively. As the author and statistician Nassim Nicholas Taleb notes, building on Benoit Mandelbrot's Lindy effect, power laws can express expectations for the future: the longer we wait for something, the longer we should expect to wait. In other words, we will be at least fifteen to twenty-five years away from the singularity in fifteen to twenty-five years.
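The Lindy intuition has a standard power-law form: if the waiting time follows a Pareto distribution with tail index alpha > 1, then having already waited t years, the expected additional wait is t / (alpha - 1). A sketch (alpha = 2 is an illustrative choice, not an empirical estimate):

```python
# Lindy effect under a power law: for a Pareto-distributed waiting
# time T with tail index alpha > 1, the expected *additional* wait
# after already waiting t is t / (alpha - 1). The remaining wait
# grows in proportion to the wait itself.
def expected_remaining_wait(t, alpha=2.0):
    if alpha <= 1:
        raise ValueError("expectation is infinite for alpha <= 1")
    return t / (alpha - 1)

# After 25 years of "the singularity is 25 years away," the
# expected further wait (with alpha = 2) is another 25 years.
expected_remaining_wait(25)   # -> 25.0
```

This is the formal version of the joke: every year of waiting pushes the expected arrival further out, which is exactly the pattern the ninety-five predictions since 1950 exhibit.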
The technological singularity is meant to grab attention and ultimately create value for the prognosticator and their secular religion by creating the illusion of predictability. For Kurzweil, AI is a god with a plug, and the singularity is a second coming signaled by the ascension of artificial intelligence, multiplying intelligence a billion-fold. This is not good science or business, because Kurzweil is a spiritual leader, not a scientist or business leader. Moreover, the view that all change is progress is totalizing: it becomes difficult to see technology in any other way. We find Make-A-Video or DALL-E and believe the event horizon is on the other side of these tools. While these discoveries are novel, they are not part of some larger trend that concludes with real artificial intelligence. While futurists seek to live in the future, we should always be leery of following them. Futurism is code for "we don't have to do anything today." Future predictions project something tangible onto an otherwise empty horizon and create a culture where everyone is safe from action. But the future is not certain, and progress is not guaranteed.
 The Age of AI gaslights nonbelievers, calling those who reject AI "like the Amish and the Mennonites" (p. 154). https://bdtechtalks.com/2022/01/13/the-age-of-ai-ism/
 Disruption was a popular idea in Italy too, which evidently included breaking things like history and democracy.
 The author defines AI as "the art of creating machines that perform functions that require intelligence when performed by people," thereby channeling Marvin Minsky (co-founder of the field of AI and of the Massachusetts Institute of Technology AI Laboratory), who defined AI as the "science of making machines capable of performing tasks that would require intelligence if done by humans." Minsky and Kurzweil think AI can be defined by the difficulty of the problem being solved. It probably can't.
 Kurzweil, Ray. The Age of Spiritual Machines (p. 24). Penguin Publishing Group. Kindle Edition.
 Yamamoto, Wayne. “We Don’t Understand Exponentials.” Kazabyte, 19 Dec. 2011, http://www.kazabyte.com/2011/12/we-dont-understand-exponential-functions.html.
 A review of ninety-five predictions made since 1950 found that they routinely place AGI within the next twenty years. Yet you will not see futurists acknowledge this predictive history, because futurists hate history. See Stuart Armstrong and Kaj Sotala, "How We're Predicting AI—or Failing To," in J. Romportl, E. Zackova, and J. Kelemen (eds.), Beyond Artificial Intelligence (Switzerland: Springer International, 2015).
 S. Charbonneau, S. Fye, J. Hay, and C. Mullins, “A Retrospective Analysis of Technology Forecasting,” American Institute of Aeronautics and Astronautics, August 13, 2012, https://doi.org/10.2514/6.2013-5519.
 The Maes–Garreau law relates to predictions of a technological singularity or other radical future technologies. Kevin Kelly, "The Maes–Garreau Point," March 14, 2007.
 Kurzweil needs the singularity to be true because he wants to live forever. He is a transhumanist who writes, "[the] singularity will allow us to transcend these limitations of our biological bodies and brains." He adds, "there will be no distinction, post-Singularity, between human and machine." Transhumanists are hopeless dualists who believe that our brains are not also our bodies: yet another category mistake. The Oxford philosopher Gilbert Ryle argued that dualism rests on an ontological error he called a category mistake. Ryle's most famous example is a thought experiment about a tour of Oxford University. The tour takes in the numerous colleges, libraries, labs, and faculty offices before the visitor asks, "Can I see the University?" The visitor overlooks that she has already seen the university. The category mistake is treating the brain, like the university, as something distinct from the body.