The biggest artificial intelligence developments of 2018

Google CEO Sundar Pichai introduces Duplex AI at I/O 2018 developer conference (Source: YouTube)

Last year, when I was rounding up the biggest artificial intelligence developments of 2017, I had a very hard time choosing which stories were most worth covering. In that regard, 2018 was no different: the year saw even more AI papers published and more innovation in the space than 2017 did.

But beyond innovation and technological advances, 2018 was perhaps a year of reckoning for the ethical implications of advances in AI technologies. Unlike previous cycles of AI’s rise and fall, in which the industry receded into its periodic winter without making any notable impact on everyday lives, today’s AI technologies have become pivotal to many of the things we do. And we need to think about what their negative impacts can be.

Without further ado, here are some of the most noteworthy AI stories of 2018. Looking forward to more exciting AI stories in 2019.

Google’s fascinating—and creepy—AI


In May, at Google’s annual I/O developer conference, CEO Sundar Pichai unveiled Duplex, an addition to Google Assistant that can make phone calls on behalf of a user and perform tasks such as booking restaurant tables and hair salon appointments.

Duplex immediately drew awe and cheers from the conference crowd because it sounded like a real human, and as the demos showed, the service workers on the other end of the line didn’t know they were talking to an AI agent.

But immediately after the conference, tech outlets filled with stories and commentary about the ethical implications of AI that is hard to distinguish from a human. There were fears that such tools could serve questionable purposes, such as impersonating people or automating spam calls and phishing scams. Many also rightly criticized Google for not disclosing to Duplex’s interlocutors that they were speaking with a bot (Google later remedied that).

As we have discussed in these pages, it takes more than an automated, human-sounding voice to engage in conversation at the level of human intelligence. Many of the concerns about Duplex’s supposedly sinister capabilities are overblown, as its very limited and conservative release has since shown.

Controversy surrounding the use of facial recognition

Source: Depositphotos

Facial recognition technology is one of the areas that has benefited immensely from advances in deep learning and neural networks. More and more companies, organizations and government agencies are able to leverage AI-powered facial recognition technologies for different tasks. For obvious reasons, advocates of privacy and digital rights are worried about the implications of unfettered use of facial recognition by law enforcement.

One of the companies that came under the most scrutiny was Amazon, which sells a commercial facial recognition service called Rekognition. In May, the American Civil Liberties Union disclosed documents showing that Amazon was marketing Rekognition for government surveillance, including tasks such as “person tracking” and identifying “persons of interest,” which the ACLU interpreted as “undocumented immigrants” and “black activists.” The ACLU further revealed that law enforcement agencies in at least three cities were using Rekognition to surveil citizens.

“People should be free to walk down the street without being watched by the government. By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo,” ACLU warned.

In July, the ACLU again raised concerns about Rekognition’s failures. In another report, the organization showed that Rekognition had falsely matched the photos of 28 members of Congress with mugshots of people arrested for crimes. “The false matches were disproportionately of people of color,” the ACLU observed.

A spokesperson for Amazon told various outlets that the errors stemmed from a misconfiguration: the researchers had run their test at Rekognition’s default 80 percent confidence threshold rather than the much higher threshold Amazon recommends for law enforcement use.
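To see why that configuration detail matters, here is a toy Python sketch of how the match-confidence threshold in a face-matching pipeline determines what counts as a “hit.” The scores and names below are invented for illustration; this is not Rekognition’s API.

# Illustrative only: invented similarity scores, not real Rekognition output.
candidates = [
    {"name": "mugshot_1041", "similarity": 97.2},   # strong match
    {"name": "mugshot_0388", "similarity": 83.5},   # weak match
    {"name": "mugshot_2210", "similarity": 80.9},   # weak match
]

def matches(candidates, threshold):
    """Keep only candidates whose similarity clears the threshold."""
    return [c["name"] for c in candidates if c["similarity"] >= threshold]

print(matches(candidates, threshold=80.0))  # permissive default: three "hits"
print(matches(candidates, threshold=95.0))  # strict setting: one hit

At a permissive threshold, weak matches surface as hits; raising the threshold discards them, at the cost of potentially missing some true matches.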

The use of Rekognition by police also caused tumult in Amazon’s own ranks. In a letter, 450 employees called on Amazon to stop selling the technology to law enforcement.

Amazon was not the only company having troubles over the use of facial recognition. In February, researchers at the MIT Media Lab released findings showing that facial recognition systems developed by IBM and Microsoft suffered from algorithmic bias and were markedly less accurate on female and darker-skinned faces. Both companies later reported that they had corrected the deficiencies in their algorithms.
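The MIT study rests on a simple auditing idea: score the system separately on each demographic subgroup instead of reporting a single overall accuracy, which can mask large gaps. A minimal Python sketch of that kind of disaggregated evaluation, with made-up results for illustration:

from collections import defaultdict

# (subgroup, model_was_correct) pairs; in a real audit these would come
# from running the classifier on a labeled benchmark. Data is invented.
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("darker_female", True), ("darker_female", False), ("darker_female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")

The overall accuracy here is 67 percent, but it splits into 100 percent for one group and 33 percent for the other, which is exactly the kind of disparity the MIT researchers measured.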

Microsoft president Brad Smith eventually addressed the controversies surrounding AI-powered facial recognition in a widely discussed blog post, in which he stressed the need to provide government and military institutions with cutting-edge technology but also called for government regulation of facial recognition technology.

Project Maven, Google’s ill-fated dabbling in military AI projects


In March, it became known that Google was partnering with the Department of Defense to develop AI for drones. The effort was part of a project codenamed Maven, whose stated mission is to “accelerate the DoD’s integration of big data and machine learning.” Google was reportedly helping the Pentagon apply artificial intelligence to detect objects in drone footage.

The U.S. military already uses drones to conduct air strikes against targets in foreign countries. The disclosure raised concerns that such efforts could pave the way for lethal autonomous weapons, in which AI algorithms designate and open fire on targets without the intervention of a human operator.

A spokesperson for Google said the company was only providing the DoD with access to its TensorFlow application programming interface (API) to help automate the detection of objects in footage from surveillance drones, a task that places a heavy burden on analysts when done manually. The spokesperson also said that Google was working on policies and safeguards to make sure its technology would not be put to ill use.
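For a sense of what that kind of automation looks like, here is a minimal Python sketch of frame-by-frame object detection with a pretrained TensorFlow model. The model handle and threshold are illustrative choices on my part, not details of Google’s Maven work.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load a pretrained SSD detector from TensorFlow Hub (illustrative model choice).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_objects(frame, score_threshold=0.5):
    """Run the detector on one video frame (H x W x 3, uint8 numpy array)."""
    batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    scores = result["detection_scores"][0].numpy()
    boxes = result["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
    classes = result["detection_classes"][0].numpy().astype(int)
    keep = scores >= score_threshold
    return list(zip(classes[keep], scores[keep], boxes[keep]))

# Example: a blank synthetic frame stands in for real drone footage.
frame = np.zeros((640, 640, 3), dtype=np.uint8)
for cls, score, box in detect_objects(frame):
    print(f"class={cls} score={score:.2f} box={box}")

Automatically flagging objects for human analysts to review is the workload reduction Google described; the concern, as the letters below show, was where such a capability could lead.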

Google’s explanations were not enough to assuage the concerns of industry experts and its own employees. Shortly after news of Google’s involvement in Project Maven became public, more than 3,000 Google employees signed an open letter to Pichai calling on the company to end its work on potentially lethal AI technology. “We cannot outsource the moral responsibility of our technologies to third parties,” the employees wrote in their letter.

The Google employees’ letter was followed by a similar petition by 90 academics in artificial intelligence, ethics, and computer science that called on Google to end its work on military AI. “While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems,” the academics wrote in their letter.

The signatories also warned that as AI-based object detection becomes reliable and trustworthy, its operators will be inclined to trust it with ever more sensitive tasks. “We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control,” they wrote.

Months later, several Google employees resigned in protest of the company’s continued involvement in developing AI products that could serve lethal purposes. “I tried to remind myself right that Google’s decisions are not my decisions. I’m not personally responsible for everything they do. But I do feel responsibility when I see something that I should escalate it,” one of the resigning employees said.

Under pressure from its employees and the AI community, Google announced in June that it would not renew its Project Maven contract when it expires in 2019. Pichai subsequently published a set of ethical principles to guide Google’s future work on AI, explicitly stating that the company would not pursue AI technologies related to weapons, surveillance or anything else that can cause harm.

Amazon abandons AI recruiting technology due to bias


In October, Reuters reported that Amazon had scrapped an AI recruiting tool because it was discriminating against women. Amazon’s hiring team had been experimenting with the tool to review resumes and shortlist candidates for job positions.

Amazon employs more than 500,000 people across the world. Reliable AI-driven hiring software could slash the cost and effort required to manage the tens of thousands of job applications that the company has to process every year.

But as the episode shows, automating tasks that require human intuition and common sense is very hard, and as we have examined in these pages, contrary to popular myth, AI algorithms inherit (and amplify) our individual and societal biases. Like most other tech companies, Amazon is dominated by white men, which means the historical data used to train its machine learning algorithms already contained those same biases. As a result, the AI algorithms developed a tendency to favor male candidates.
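A toy example makes the mechanism concrete. In the hypothetical Python sketch below, a logistic regression is trained on synthetic “historical” hiring decisions that favored men independently of skill; the model dutifully learns a negative weight on the gendered feature.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # true, gender-neutral qualification
is_woman = rng.integers(0, 2, size=n)     # a gendered signal, e.g. "women's" on a resume
# Synthetic historical labels: hiring favored men regardless of skill.
hired = (skill + (1 - is_woman) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)
print("weight on skill:      %+.2f" % model.coef_[0][0])  # positive
print("weight on is_woman:   %+.2f" % model.coef_[0][1])  # negative: bias inherited

The model is doing exactly what it was asked to do: reproduce the patterns in its training data, biases included. Reuters reported that Amazon’s tool behaved the same way, penalizing resumes that contained the word “women’s.”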

OpenAI matched up against professional Dota 2 players—and lost


Moving on from ethical issues, there were also some interesting technical developments that made headlines this year. Playing and mastering games has always been one of the main areas of focus for artificial intelligence researchers. After beating chess and Go champions, AI experts have set their sights on computer games such as StarCraft and Dota 2.

These games are much more complicated than board games for two reasons: first, players must make decisions in real time, and second, they don’t have full information about the state of the game (in Dota 2, for instance, much of the map is hidden by the fog of war).

In June, OpenAI, the non-profit AI research lab launched in 2015 with funding from Y Combinator president Sam Altman and Tesla CEO Elon Musk, introduced OpenAI Five, a team of five neural networks that trained on the equivalent of 180 years of self-played Dota 2 games per day and had proven their worth against amateur human players.

In August, OpenAI Five competed against professional players in a best-of-three series at The International, Dota 2’s premier annual tournament. It lost. But in spite of the defeat, the mere fact that deep learning algorithms were able to compete at such a high level was an achievement for the AI industry.

AI in 2019

2018 saw the maturation of the artificial intelligence industry, and 2019 will surely see more interesting developments emerge. What do you think will be the biggest highlight of the AI industry in 2019? Share your thoughts in the comments section.
