What is Narrow, General and Super Artificial Intelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

If you’re living on this planet, you probably hear a lot about Artificial Intelligence these days. It’s conquering every industry and domain, performs tasks more efficiently than humans, will put people out of work, and may one day force humans into slavery. You might have also heard about narrow, general and super artificial intelligence, or about machine learning, deep learning, reinforcement learning, supervised and unsupervised learning, neural networks, Bayesian networks and a whole lot of other confusing terms.

That’s a lot of jargon to cover in one post, and we’ll leave the learning stuff for another day. In this post, we’ll try to define what AI is (and maybe what it isn’t) and what the three main known flavors of AI are.

What is Artificial Intelligence?

It’s hard to define. Google the term and you’ll get a lot of different (and sometimes conflicting) results. Wikipedia offers the shortest and most ambiguous definition: “intelligence exhibited by machines.” But it then gives a more understandable one: machines that mimic cognitive functions such as problem solving and learning.

The truth is that AI is hard to define because intelligence is hard to define in the first place. But what we all agree on is that Artificial Intelligence is not natural intelligence, such as what you find in human beings and some species of animals. It’s man-made. Yet it can reason and make decisions that take various factors into account, much as the human brain does.

RELATED: What is the difference between artificial intelligence, machine learning and deep learning?

Let’s also make it clear that the term “robot” is not a synonym for AI, even though it is sometimes used that way (including by myself). Artificial Intelligence refers to the software that manifests intelligence, whereas a robot implies a physical element, a shell that carries out the decisions made by the AI engine behind it. Not every AI needs a robot to carry out its functions, and conversely, not every robot needs true AI to power its functionality.

As we move forward and technology progresses and evolves, our definition of AI changes. In fact, John McCarthy, who coined the term “Artificial Intelligence” in 1956, stressed that “as soon as it works, no one calls it AI anymore.” Today, many of the rules- and logic-based systems that were previously considered Artificial Intelligence are no longer classified as AI. In contrast, systems that analyze and find patterns in data (machine learning) are becoming the dominant form of AI.
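To make “analyze and find patterns in data” a bit more concrete, here is a minimal sketch in plain Python: a least-squares fit that learns a linear rule from example points rather than having the rule hand-coded. The data points are invented for illustration.

```python
# A minimal sketch of "finding patterns in data": ordinary least squares
# fits a straight line to noisy observations. The numbers are made up.

def fit_line(xs, ys):
    """Return the slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, around their means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical noisy observations of a roughly y = 2x + 1 relationship
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]

slope, intercept = fit_line(xs, ys)
print(f"learned rule: y ≈ {slope:.2f}x + {intercept:.2f}")  # ≈ 2.01x + 1.04
```

No one told the program the rule was “about 2x + 1”; it recovered that pattern from the examples. Real machine learning systems do the same thing with far more dimensions and far more data, but the principle is the one sketched here.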

But regardless of the mechanisms working behind the scenes, AI can be broken down into three main categories.

Artificial Narrow Intelligence

Narrow AI is the only form of Artificial Intelligence that humanity has achieved so far. This is AI that is good at performing a single task, such as playing chess or Go, making purchase suggestions, sales predictions or weather forecasts. Computer vision and natural language processing are, at their current stage, narrow AI. Speech and image recognition are narrow AI, even if their advances seem fascinating. Even Google’s translation engine, sophisticated as it is, is a form of narrow Artificial Intelligence.

Self-driving car technology is still considered a type of narrow AI, or more precisely, a coordination of several narrow AIs.

RELATED: Albeit smart, AI algorithms have distinct challenges to overcome

In essence, narrow AI works within a very limited context and can’t take on tasks beyond its field. So you can’t expect the same engine that transcribes audio and video files to, say, order pizza for you. That’s the task of another AI.

Narrow AI is sometimes also referred to as “weak AI.” However, that doesn’t mean narrow AI is inefficient. On the contrary, it is very good at routine jobs, both physical and cognitive. It’s narrow AI that is threatening to replace (or rather displace) many human jobs. And it’s narrow AI that can ferret out patterns and correlations from data that would take humans eons to find.
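That kind of pattern-ferreting can be sketched in a few lines: scan every pair of columns in a dataset and flag the ones that move together. The dataset and the 0.9 threshold below are invented for illustration, but the same brute-force scan, scaled up, is how correlations surface in data far too large for a human to eyeball.

```python
# Toy "correlation mining": flag strongly correlated column pairs.
# The dataset is fabricated for illustration.
import math
from itertools import combinations

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

data = {
    "temperature":     [20, 22, 25, 27, 30],
    "ice_cream_sales": [110, 130, 160, 175, 200],
    "lottery_numbers": [4, 38, 11, 27, 9],
}

# Scan every pair of columns and report the strongly correlated ones
for (name_a, col_a), (name_b, col_b) in combinations(data.items(), 2):
    r = pearson(col_a, col_b)
    if abs(r) > 0.9:
        print(f"{name_a} ~ {name_b}: r = {r:.2f}")
```

Here only temperature and ice cream sales clear the threshold; the lottery column, as you’d hope, correlates with nothing. Note that this also illustrates narrow AI’s limits: the program finds correlations, but deciding whether a correlation means anything is still up to the human.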

But it’s still not human-level AI. That’s the stuff of Artificial General Intelligence.

Artificial General Intelligence

General AI, also known as human-level AI or strong AI, is the type of Artificial Intelligence that can understand and reason about its environment as a human would. General AI has always been elusive. We’ve been saying for decades that it’s just around the corner.

RELATED: The biggest AI developments of 2016

But the more we delve into it, the more we realize that it’s hard to achieve, and the more we come to appreciate the miracle that is the human brain. It’s really hard to define what a human-level artificial intelligence would be. Just look at how you perceive things and juggle multiple unrelated thoughts and memories when making a decision. That’s very hard for computers to achieve.

Humans might not be able to process data as fast as computers, but they can think abstractly and plan, solving problems at a general level without going into the details. They can innovate, coming up with thoughts and ideas that have no precedent. Think about the invention of the telephone, ships and telescopes, or concepts such as mail, social media, gaming and virtual reality. It’s very hard to teach a computer to invent something that isn’t there.

Some say we’ll see general AI before the turn of the century. Others, such as Google’s Peter Norvig and IBM’s Rob High, believe we don’t even need human-level AI.

Artificial Super Intelligence

According to University of Oxford scholar and AI expert Nick Bostrom, when AI becomes much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills, we’ve achieved Artificial Super Intelligence.

ASI is even more vague than AGI at this point.

By some accounts, the distance between AGI and ASI is very short. It’ll happen in mere months, weeks, or maybe the blink of an eye, and will continue at the speed of light. What happens then, no one knows for sure. Some scientists, such as Stephen Hawking, see the development of full artificial intelligence as the potential end of humanity. Others, such as Google’s Demis Hassabis, believe the smarter AI gets, the better humans will become at saving the environment, curing diseases, exploring the universe, and understanding themselves.

RELATED: Will general AI lead us to apocalypse or utopia?

As for me, for the moment, I’m pestered by these smart online ads that seem to be shadowing my every move on the internet, and these YouTube suggestions that are all worth watching. Narrow AI is getting on my nerves. I guess General AI will drive me crazy, if I live to see it.

5 COMMENTS

  1. Maybe we need a new way for ai to discern self recognition, as it seems to be the frontal bias used by most zoos and top researchers in the prediction game of self recognition/awareness. There is nothing more self aware by any logic if you can get even ai to recognize the constrictions of reality and to have ai recognize it is in fact fallible and mortal like any of us. I can’t imagine a better predictor than those simple basics. Problem is just because you can get a chimp to recognize an image of itself in a mirror, doesn’t mean an ai given the same experiment actually perceives it in similar sense. This is the logic problem in my view.

    I honestly don’t think we will ever see human-level intelligence in AI; we are just not intelligent enough ourselves to get there yet. Japanese fembots will most likely be the only future past the 2050s for some time though.

  2. […] Narrow artificial intelligence is now prevalent, which means programs are better than humans at performing specific tasks. Perhaps the most famous example is IBM’s Deep Blue defeating Garry Kasparov, the world champion of chess at the time — in 1997. Today, complex algorithms outperform humans at driving and analyzing lab results, among many other things. […]

  3. I somehow doubt we will see AGI in our lifetimes… In order to develop it we might just need to increase our own intellect first through what Elon Musk calls Human Intelligence and Neuralink. Before then, we won’t be smart enough to develop it, and/or it will just take centuries…
