What is the AI winter?

Image credit: Depositphotos

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

In October, Rethink Robotics, a Boston-based robotics company, shut down after struggling with poor sales and an acquisition deal that fell through. The news came as a bit of a disappointment for the artificial intelligence industry, since Rethink was one of the icons of the AI and robotics space. Founded in 2008 by Rodney Brooks, one of the leading voices of the artificial intelligence and robotics industry, Rethink was the manufacturer of Baxter and Sawyer, two robots that were expected to usher in a new era of cooperation between humans and smart robots.

Rethink Robotics CEO Scott Eckert said that the company’s closure is not indicative of wider troubles in the industry. But the shutdown was not an isolated event. In August, Mayfield Robotics, the manufacturer of the Kuri home robot, shut down after failing to find a workable business model.

In September, The New York Times ran a piece that disclosed some inner details about Boston Dynamics, a company whose robot videos have generated a lot of excitement. While Boston Dynamics is eager to publicize the tricks and achievements of its robots, the NYT piece shed light on the sheer amount of human support the robots’ AI needs to perform those feats. It’s also still not clear what real-world problems the expensive robots will eventually solve.

In other areas of the artificial intelligence industry, we’ve seen companies create a lot of hype around the capabilities of their AI and then fail to deliver on their promises.

Since its inception, the AI industry has gone through many hype cycles. In previous decades, overpromising and underdelivering on the prospects and capabilities of artificial intelligence led to what became known as the “AI winter,” a period in which interest and funding in the field slumped. The question is, are we headed for another AI winter?

The previous AI winters

There are different accounts as to how many AI winters have happened and when they took place. But there are two main periods during which funding and interest in the AI industry declined, which have widely become known as the first and second AI winters.

The first AI winter started in the early 1970s, a little more than a decade after the term artificial intelligence was coined. At the time, leading scientists in the field had made all sorts of promises that super-efficient AI was just around the corner. The thinking went that AI would soon outmatch humans at checkers and chess and automate the translation of written text between languages. None of those promises materialized until decades later: AI didn’t beat the world chess champion until 1997, and even now, machine translation still struggles to perform on par with human translators.

There was also fear that AI would soon replace all human jobs. That too didn’t happen.

Eventually, the Advanced Research Projects Agency (ARPA), the research arm of the U.S. Defense Department (later renamed DARPA) and the primary funder of AI research and development, cut its funding of AI researchers because they had failed to deliver on most of their promises. At the time, the dominant way of creating software was still rule-based programming, in which developers explicitly specify every rule that defines the behavior of a computer program. That kind of AI became known as “good old-fashioned AI,” or GOFAI.
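To make that concrete, here is a minimal sketch, in Python, of the rule-based (GOFAI) style; the animal-classification domain and its rules are invented purely for illustration:

```python
# A minimal sketch of the rule-based (GOFAI) style: every behavior
# is a rule written out by hand. The domain and rules here are
# hypothetical, purely for illustration.

def classify_animal(has_feathers: bool, can_fly: bool, lays_eggs: bool) -> str:
    # Each branch is an explicit, developer-authored rule; the program
    # knows nothing the programmer didn't spell out.
    if has_feathers and can_fly:
        return "bird"
    if has_feathers and not can_fly:
        return "flightless bird"
    if lays_eggs:
        return "reptile (best guess)"
    return "mammal (best guess)"

print(classify_animal(has_feathers=True, can_fly=False, lays_eggs=True))
# -> flightless bird
```

This works as long as the rules hold; anything the programmer didn’t anticipate, the program simply can’t handle.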

The second AI boom was triggered in the 1980s by the advent of “expert systems.” Expert systems were a move in the right direction, bringing AI closer to taking on realistic challenges: instead of trying to emulate the general problem-solving abilities of the human brain, they focused on solving narrow, domain-specific problems.
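As a rough sketch of the idea, an expert system separates a knowledge base of if-then rules from an inference engine that applies them. The triage rules below are invented for illustration; real systems of the era encoded hundreds of rules elicited from human experts:

```python
# A toy forward-chaining inference engine in the spirit of 1980s
# expert systems: domain knowledge lives in if-then rules, kept
# separate from the engine that applies them. The rules are invented.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:  # keep firing rules until nothing new can be derived
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "cough", "short_of_breath"}))
# -> includes 'flu_suspected' and 'refer_to_doctor'
```

Because the engine keeps firing rules until no new conclusions appear, conclusions can chain: a derived fact can trigger further rules.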

Expert systems attracted interest and funding not only from the public sector but also from tech companies, driven again by promises and hype, as well as by fear of a fast-growing tech sector in Japan. But expert systems again failed to deliver on their promises. And Japan’s economic bubble burst; the country never came to dominate the tech industry.

In 1984, Marvin Minsky, one of the pioneers of the AI industry and a survivor of the first AI winter, warned against the threat of another bust in the industry. Three years later, the first signs of a new AI winter appeared as the market for specialized expert-system computers collapsed. By the 1990s, expert systems had become another term that, like artificial intelligence, was quasi-synonymous with unmet promises and wasted investment.

The current hype surrounding deep learning

Image source: Flickr (Penn State)

The new AI boom was triggered by advances in machine learning and deep learning in the late 2000s and early 2010s. The idea of neural networks, which had been around for decades, finally started to become practical as data and compute resources became available in abundance and at very affordable prices.

Deep learning helped computers perform tasks that were hard to codify with static rules, such as computer vision and voice recognition. Soon, deep learning and machine learning became the focus of many large tech companies and triggered another hype cycle for the AI industry. Many thought leaders started talking about the advantages of deep learning in ways that were a bit misleading, triggering both excitement and fear about the future of AI. Discussions of general AI, technology that could emulate human-level problem solving, resurfaced.
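For contrast with the hand-written rules above, here is a minimal sketch, assuming only numpy, of a network learning its own “rules” from data: a tiny two-layer network trained with backpropagation to reproduce XOR, a mapping that is awkward to capture with a single linear rule.

```python
# A toy two-layer network that learns XOR from four examples using
# backpropagation. An illustration, not a production technique.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: backpropagate the error and nudge the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

No branch of this program encodes XOR explicitly; the behavior is learned from the four examples, though convergence can depend on the random initialization.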

Soon, even the term “artificial intelligence,” which had fallen out of favor after decades of undelivered promises, started becoming popular again. Many companies started using AI, deep learning and machine learning interchangeably to serve their marketing needs. Media outlets started publishing AI articles illustrated with crystal balls, killer robots and sentient AI agents, wrapping the entire industry in a shroud of mystery. Hollywood played its part with movies like Her and Blade Runner 2049.

But the limits of deep learning have become more and more apparent as the technology has found its way into an increasing number of domains. Robots still struggle to mimic the dexterity of a human child. Despite their arrays of sensors, cameras, radars and other high-tech gear, self-driving cars still make stupid mistakes. Humans, with their limited field of view and slower reflexes, are still much better than driverless cars at understanding their surroundings and responding to unexpected situations.

Chatbots, which were supposed to engage in meaningful conversations, are only good at performing narrow tasks and quickly fall apart when given broad-ranging ones.

Many of the companies that overhyped the capabilities of their AI in processing and generating natural language have failed to deliver on their promises. Instead, they have been forced to hire humans to make up for the shortcomings of their AI, a practice that has become known as the “Wizard of Oz” effect.

Are we headed for another AI winter?


The re-emergence of excitement surrounding the AI industry has drawn criticism from some of the leading voices in the field, who have taken to social media and news outlets to debunk the myths surrounding artificial intelligence. Others warn that a third AI winter is already underway and that sentiment around deep learning and AI in general has cooled in the past couple of years. Even Geoffrey Hinton, the father of modern deep learning, has voiced suspicions about the efficiency of “backpropagation,” one of the key mechanisms behind training neural networks.

We have yet to see whether we’re headed for an AI winter in the sense that companies and scientists start distancing themselves from the field. Unlike in the previous booms and busts of the artificial intelligence industry, in this cycle AI has not remained limited to research labs and niche industries. Today, deep learning is an integral and critical part of many of the applications we use in our daily lives. And although it has not delivered on many of its promises, it is solving real-world problems.

Perhaps, instead of a winter, it’s more precise to say that the AI industry is headed for a reality check. For one thing, we know that deep learning and neural networks are nowhere near capable of replicating the functions of the human brain. We know that they will not replace radiologists, teachers, doctors and the many other professionals whose work relies on human interaction and socializing. But we do know that human labor will be disrupted and that we’ll need to reskill workers and professionals for the jobs of the future.

We’ve also learned that AI and human intelligence perform best when their capabilities are combined, not when they compete. That’s why many industry leaders prefer to use the term “augmented intelligence” to describe AI, which implies that AI is meant to complement human capabilities instead of replacing them.

There will be adjustments to the expectations and sentiments—and perhaps funding—regarding deep learning and the artificial intelligence industry. Other innovations and technologies might complement, enhance or outmaneuver deep learning. But what we’ve achieved so far is here to stay. That, you can count on.
