What is the future of artificial intelligence?

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Where will humans fit in a world where robots outsmart them? This is the focus of a heated debate between thought leaders and tech billionaires.

Some believe we’re steadily moving toward an AI apocalypse, where humans are either obliterated or enslaved by robots, and that we must act quickly to prevent it. Others will tell you that artificial intelligence will always be mankind’s subservient best friend, even when it outwits its creators, and that we should move ahead with developing AI at full speed.

Every industrial revolution has had its dystopian and its utopian vision of the future. And there’s no arguing that we are at the doorstep, or in the midst, of the biggest technological revolution in mankind’s history, where everything is being connected to the internet, data is abundant and AI algorithms are permeating every domain.

What will eventually unfold is hard to tell. The future always holds surprises. But here are the best- and worst-case scenarios.

WALL-E: AI utopia

The AI utopia camp has some very strong advocates, including DeepMind’s Demis Hassabis and Google’s Peter Norvig. Norvig addresses one of the concepts that sits at the heart of the AI debate: artificial general and super intelligence, the point at which AI’s capacity to reason and decide rivals or surpasses that of human beings.

This is a development that can be both exciting and frightening. Some think it will happen before the turn of the century. Others, like Norvig, believe we shouldn’t even be discussing it.

“We know how to build real intelligence—my wife and I did it twice, although she did a lot more of the work,” he said in an interview with Forbes. “We don’t need to duplicate humans. That’s why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own.”

Here’s a hypothetical account of how the AI utopia might develop:

As AI expands into every industry, more and more tasks that previously required human effort and thinking will become semi- or fully automated. Thanks to AI-powered tools, humans will become more productive and be able to perform a multitude of complicated tasks with less effort and education.

As AI becomes smarter and more efficient, the role of humans will become less and less significant, and they will eventually be fully replaced by AI algorithms. As humans are removed, costs will gradually drop, and (hopefully) prices with them. For instance, the cost of maintaining a self-driving taxi is much less than the wages paid to a cab driver.

At some point, when we achieve a fully automated and self-sustaining economy with little or no human effort involved, costs will vanish altogether. Then the value and meaning of money will change forever, because people won’t need to work to make a living.

What will humans do then? The Pixar animation WALL-E paints a picture of the AI utopia, where humans are pampered by robots. That might be a bit extreme. Demis Hassabis, DeepMind’s cofounder, believes AI itself will answer that question. “We could one day attain a better understanding of what makes us unique, including shedding light on such enduring mysteries of the mind as dreaming, creativity and perhaps one day even consciousness,” Hassabis wrote in an op-ed published in the Financial Times.

But the road to the AI utopia is rocky. Advances in AI will cause major disruption in the employment landscape, a displacement that will challenge the very foundations on which the global economy runs. Until the day money becomes a thing of the past, the needs of the people who lose their jobs, the cab and truck drivers, the McDonald’s employees, the accountants, the lawyers (and maybe the president?), etc., will have to be addressed, lest we head for the disaster scenario.

The Matrix and Terminator: AI dystopia

The AI dystopia camp also has some very strong supporters, even in the tech industry, including the likes of Elon Musk, Bill Gates and Stephen Hawking.

At a recent gathering of U.S. governors, Musk stressed that governments need to regulate AI before it spins out of control. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said.

Speaking at Cambridge University last year, Hawking warned that while AI can be pivotal to solving problems such as disease and poverty, it can also be the greatest disaster in human history if it is not properly managed.

There are varying predictions of how the AI dystopia will develop, but here is a scenario that you’ll probably read about a lot:

For the time being, advances in AI still rely on human scientists and engineers. But at some point, humans will crack the code of artificial general intelligence, the self-conscious AI that will be able to develop and advance itself without human help.

AGI will then develop artificial superintelligence, AI that is orders of magnitude more capable than the human brain. At that point, humans will probably become a nuisance to AI, and will either be turned into its slaves, as in The Matrix, or its enemy, as in Terminator.

In fact, Musk is already planning for the day when humans will no longer be able to live among robots.

That’s a bit over the top, but there are more realistic ways that AI can lead to apocalypse. One is the wedge that AI is driving between social classes. As AI algorithms generate more revenue, conquer more jobs and put more people out of work, a limited number of people, those running the giant tech corporations, will grow richer while the lives of the rest spiral down into poverty. This will eventually lead to the doomsday scenario the super-rich fear: the day the pitchforks come knocking at their doors.

Another would be the political weaponization of artificial intelligence, which is much more dangerous than its military uses. We’re heading toward an era where AI algorithms and prescriptive analytics can affect your every action and decision. We’re already seeing how the algorithms that determine what is displayed in social media feeds and search results affect public opinion. What if a malicious actor, such as a despotic regime, decided to employ “big data nudging” and alter all these algorithms to its advantage? It would then be able to manipulate the masses without needing to use force, and without provoking rebellion. What would it mean for democracy?
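To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of how a feed-ranking algorithm could be quietly nudged. The post fields, scoring formula and weights are all invented for this example; real platforms use far more complex models, and nothing here describes any actual system.

```python
# Toy illustration of "big data nudging": a hypothetical feed ranker
# whose hidden bias term can be tuned to tilt what users see.
# All fields and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted likes/shares, 0..1
    stance: float      # -1.0 (against the actor's agenda) .. +1.0 (for it)

def rank_feed(posts: list[Post], agenda_bias: float = 0.0) -> list[Post]:
    """Sort posts by predicted engagement plus an optional bias term.

    With agenda_bias=0.0 this is a "neutral" engagement ranker; a
    malicious operator only has to raise the bias to reorder feeds.
    """
    def score(p: Post) -> float:
        return p.engagement + agenda_bias * p.stance
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("Cat video", engagement=0.9, stance=0.0),
    Post("Critical investigative report", engagement=0.6, stance=-0.8),
    Post("Regime-friendly op-ed", engagement=0.5, stance=0.9),
]

print([p.text for p in rank_feed(posts)])                   # neutral ordering
print([p.text for p in rank_feed(posts, agenda_bias=0.5)])  # nudged ordering
```

The point of the sketch is that no user-visible interface changes: the same feed, ranked with a slightly different hidden objective, surfaces different content, which is exactly why this kind of manipulation is hard to detect from the outside.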

The list of dystopian scenarios goes on.

Ironically, there are too many ways that artificial intelligence can go wrong and too few ways that it can go right. My personal opinion: the future will be very different from what we expect, but I’m leaning toward the positive side and hoping for the best. How do you think things will unfold?

2 COMMENTS

  1. I read David Deutsch’s piece. Let’s see:

    Deutsch > no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality

    That was true in 2012 when this was written. It is no longer true.

    Deutsch > The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

    That was true until a few years ago.

    Deutsch > Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

    That’s true. So the well-worn ideas had to be scrapped, assumptions discarded, and research had to dig deeper.

    Deutsch > Despite this long record of failure, AGI must be possible.

    I agree.

    Deutsch > And that is because of a deep property of the laws of physics, namely the universality of computation.

    No, sorry. That is one of the old, common but false ideas, and one of the reasons for the lack of progress.

    Deutsch > So, Babbage designed a mechanical calculator, which he called the Difference Engine. Here was a cognitive task that only humans had been able to perform.

    That was computational, not cognitive.

    Deutsch > So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice.

    Actually, this is false. You can’t do it with an algorithm at all.

    Deutsch > Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile.

    Well, sort of. The problem with this claim is that it isn’t a philosophical breakthrough at all. It’s science.

    Deutsch > Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that …

    Is that you really have no idea what you are talking about. You criticize others for making assumptions. These are your assumptions.

    Deutsch > So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough.

    It isn’t plausible at all. There have been several scientific breakthroughs in the right direction since this article was written, and AGI is still not possible.

    Deutsch > But it will have to be one of the best ideas ever.

    It takes more than one.
