“I actually don’t like the word AI.” It’s odd hearing those words from the CEO of a company that sits atop one of the biggest artificial intelligence platforms in the world. But that is what Ginni Rometty, the chief executive of IBM, said during an interview with CNN’s Fareed Zakaria on the sidelines of the 2017 World Economic Forum in Davos.
“AI said replacement of people, it carries some baggage with it,” Rometty said. “And that is not what we’re talking about.”
In line with Rometty’s remarks, artificial intelligence has over the decades garnered a negative reputation as a force that will eventually destroy humanity. Part of that has to do with the nomenclature, which tacitly defines AI as something that is on par with the human mind.
That idea couldn’t be further from the truth. Only general artificial intelligence can match human intelligence, and by most estimates, general AI is at least decades away. Powered by deep learning and machine learning, current AI technologies and applications have produced impressive results, and they can often outperform humans in specific tasks such as classifying images or transcribing speech. But the limits and challenges of deep learning prevent it from making the trivial abstract decisions that even the simplest human mind can make.
That’s why we call it narrow AI, or better yet, “These are technologies to augment human intelligence,” as Rometty said during her interview. “By and large we see a world where this is a partnership between man and machine and this is in fact going to make us better and allow us to do what the human condition is best able to do.”
Why do we need augmented intelligence?
First, we must understand that we’re at a point in time when technological advances are having a very deep impact on our lives. Rometty points to the rise of cloud computing, data, and mobility as the three trends driving this change. (I would personally add the internet of things (IoT) as another relevant trend in this regard.)
Cloud computing makes it possible to access powerful compute and storage resources from many devices, and mobility makes it possible to take those computing resources anywhere with you. Meanwhile, IoT makes it possible to connect the cloud to a wide range of devices. These trends have collectively led to an explosion of valuable data becoming available in various domains.
“You have so much information… so much that if you don’t do something, your brain, us as humans, can[’t] deal cognitively with all of that,” Rometty said. As an example, 8,000 medical papers are being published every day, an insane amount of information for any single doctor to read or review. And medical data is doubling every 60 days.
“You need a system that can understand all this data, could reason over it, and could learn, which means they do become more powerful with time,” Rometty said. This is the basic description of how machine learning and deep learning algorithms work. Instead of embedding static logic (if…else statements) into an application, you provide the algorithm with samples and data from a specific, non-deterministic domain and let it come up with its own rules. The samples can be anything from simple pictures to social media posts, news articles and other unstructured data. The data goes through “neural networks,” software structures that roughly mimic the human brain, and is transformed into an intricate set of rules that maps inputs to outputs.
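The contrast between hand-coded logic and learned rules can be sketched with a toy example. The single-neuron “perceptron” below is a deliberately minimal illustration of the idea described above (the dataset, learning rate, and epoch count are all arbitrary choices for the sketch, not anything specific to IBM’s systems): rather than writing an if…else rule, we let the model derive its own weights from labeled examples.

```python
# A minimal sketch of "learning rules from data" instead of hard-coding
# if/else logic: a single perceptron trained on a toy labeled dataset.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights that map two-number inputs to 0/1 outputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # how wrong was the prediction?
            w[0] += lr * err * x1     # nudge the weights so the same
            w[1] += lr * err * x2     # mistake is less likely next time
            b += lr * err
    return w, b

# Toy domain: the model discovers the logical-AND rule from examples alone.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Real deep learning stacks many such units into layers and trains them on millions of examples, but the principle is the same: the rules live in learned weights, not in code written by a programmer.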
“I think these systems have the opportunity to [take on] some of what have been the world’s most unsolvable problems, to find solutions to them,” Rometty said. For instance, deep learning applications can analyze unstructured text documents and glean valuable insights and information almost in the same way that a human would, but with superior speed.
Examples of augmented intelligence applications
IBM has already applied the concept of augmented intelligence in various fields. An example is oncology, where IBM’s experts fed Watson, the company’s AI platform, with millions of textbooks, journals and papers, and trained it with some of the best cancer centers in the world. This is called “supervised learning,” in which subject matter experts become involved in the training of an AI algorithm. In the case of Watson’s healthcare platform, experts first had to teach it the language of medicine and the technical terms involved in the diagnosis and treatment of cancer.
The experts then challenge the AI application by asking it questions and correcting the answers when it makes mistakes. For instance, you can provide a trained model with a set of X-rays or medical records that have been previously checked by trained experts, ask it to make a diagnosis, and then correct its mistakes. Over time, the neural networks behind the system become adept at making the right decisions and coming up with relevant and useful suggestions.
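The challenge-and-correct loop described above can be sketched in a few lines. Everything here is illustrative, not Watson’s actual architecture: the “model” is a simple nearest-neighbor lookup, and the feature vectors and labels are made up. The point is the pattern: the system answers, the expert corrects, and the correction is folded back into what the system knows.

```python
# A toy sketch of expert-in-the-loop training: the model answers, the
# expert corrects wrong answers, and each correction becomes new training
# data, so the system improves over time.

def nearest_label(known, features):
    """Return the label of the closest known case (1-nearest-neighbor)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known, key=lambda case: dist(case[0], features))[1]

# Seed knowledge: (feature vector, expert-confirmed diagnosis)
known_cases = [((0.9, 0.8), "malignant"), ((0.1, 0.2), "benign")]

def challenge(features, expert_label):
    """Ask the model; if it errs, record the expert's correction."""
    answer = nearest_label(known_cases, features)
    if answer != expert_label:
        known_cases.append((features, expert_label))  # learn from the mistake
    return answer
```

After a wrong answer is corrected, similar future cases are resolved by the corrected example rather than the original seed data, which is the essence of the “becomes more adept over time” behavior the article describes.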
“You get to a point where you could diagnose, and you could offer treatments,” Rometty said. This means that doctors will have a digital assistant that can take care of the tedious and repetitive parts of their work with accuracy and speed and make them much better at their work.
In the case of healthcare, this means that doctors will be able to treat many more patients. For instance, in India, there’s one doctor for every 1,600 cancer patients. Even in the U.S., only 15 percent of people with cancer receive treatment at a dedicated cancer center. The other 85 percent have to make do with whatever their local hospitals offer.
There’s a similar pattern in other fields, such as education, where thanks to technological advances, there’s a lot of information available on how learners interact with courses and knowledge. But teachers won’t be able to put the information to productive use without the help of algorithms that can analyze all the data and provide insights on where their students are struggling and how they can help them overcome those challenges.
In cybersecurity, AI algorithms can analyze a company’s data and find anomalies, such as a change in how an employee logs into a system or accesses data. This can help security analysts pinpoint data breaches and insider threats before the damage is done, amid the sea of data that every company generates daily.
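One simple form of the anomaly detection described above is a statistical baseline: learn an employee’s normal behavior, then flag events that deviate sharply from it. The sketch below uses login hours and a z-score threshold; both the data and the threshold are invented for illustration, and production systems are far more sophisticated (many signals, learned models, drift handling).

```python
# A minimal sketch of behavioral anomaly detection: flag a login whose
# hour deviates sharply from an employee's established pattern.
from statistics import mean, stdev

def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Flag a login hour more than `threshold` std devs from the norm."""
    mu = mean(history_hours)
    sigma = stdev(history_hours)
    return abs(new_hour - mu) / sigma > threshold

# An employee who normally logs in around 9 a.m.
history = [9, 9, 10, 8, 9, 9, 10, 8, 9, 9]
```

A 3 a.m. login stands out against this baseline, while a 10 a.m. login does not; an analyst would then investigate the flagged event rather than sifting through every login record.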
Another interesting domain is music, arts and other creative activities, where AI can help artists come up with new ideas and learn from how people react to their works of art. IBM collaborated with award-winning producer and composer Alex Da Kid to create a song that took into account insights Watson’s AI had gleaned from years of top ten chart data and five years’ worth of social media conversations. When the song was published, it topped the charts on Spotify and iTunes in record time.
But if computers can do all this, what will the humans do? “It’s why it’s augmented intelligence,” Rometty said, underlining that in every case, the person cooperating with the AI plays a pivotal role in the outcome. The combination of man and machine creates something that neither can achieve alone.
“What is important is you find a way to insert these abilities to help in someone’s workflow,” Rometty said. “That’s the real opportunity.”
The ethics of augmented intelligence
In the same interview, Zakaria challenged Rometty on the ethical implications of AI algorithms. “You can essentially predict people’s behavior, perhaps to an extent that they don’t even recognize it themselves,” he said. “Because they’re looking at 5 million cases and not just me, [computers] will notice if I listen to certain songs on Spotify and if I buy certain things on Amazon, I’m likely to vote for candidate X in the elections and my next purchase is likely to be Y.”
Isn’t it slightly scary that a computer can predict what you’re going to do? Zakaria asked. That question is perhaps even more relevant in the wake of the Facebook/Cambridge Analytica scandal that came to light not long afterward, in which user data and AI algorithms were used to manipulate people by showing them targeted advertisements on the social media platform.
“That is true. You can do that,” Rometty said. But having a choice is what matters. “We’re going to enter a world that has to be in your hands. Whether or not you want to live and work in that way. Some people will trade that off for convenience.” For instance, you can give away your location to get directions or find restaurants near you. But that has to be a conscious decision that you make, not one made for you by the company running the service.
The principles of augmented intelligence
“When you introduce powerful technologies into this world, you have a responsibility that they are introduced in the right way,” Rometty said toward the end of the interview, in which she laid out three main principles for the cognitive era, aka the principles for AI.
First, any technology implementing AI should be clear about its purpose, and this purpose should be in the service of mankind. “They are in service of humans, they are here to extend what you and I can do and to extend the human capability,” Rometty said. (In all fairness, the same technologies can be used to extend the capabilities of evil actors, so the humans using them will be a defining factor as well.) Rometty made it clear that her company doesn’t advocate for self-aware AI, and in this regard, she joins other experts and visionaries who believe we need to focus on augmented intelligence (or narrow AI).
The second principle is transparency. Everyone should know when they’re using artificial intelligence. You should also know how these systems were trained and who trained them and what data was used to train them. “All data and algorithms should not be concentrated in one company,” Rometty said. “This is not a good thing.” So if a company has decades’ worth of knowledge, it should not be forced to turn it over to some AI platform provider to be able to run it through algorithms. “When the algorithms are trained, that insight belongs to you,” Rometty said. Interestingly, a year later, also speaking at the World Economic Forum, Dr. Yuval Noah Harari underlined the need to avoid concentrating data in the control of a few entities. (This is a view that I also support.)
The third principle is around skills. While the argument that robots will drive humans into unemployment is a bit exaggerated, there’s no denying that AI and automation are disrupting human labor. That’s why, Rometty underlined, we need to build skills around the world to use this technology effectively and safely, put it to the right service, and make sure the right jobs are created.
“We’re just at the beginnings of this era,” Rometty said.
Watch the full interview here: