Artificial Intelligence has problems to fix

While much of what you read and hear about Artificial Intelligence will turn out to be hype (that's the case with practically every new, disruptive technology), there's no denying that AI and machine learning will play an important role in shaping industries, everyday life, and society in the coming years.

The trends and facts certainly favor the AI fever. 2016 saw an explosion of funding for, and acquisitions of, AI startups. That same year, Artificial Intelligence mastered skills that were supposedly the exclusive domain of human intelligence, including a notoriously complicated board game, fighting (and causing) cyberthreats, and playing computer games.

But perhaps one of the most mistaken perceptions about AI is that it is a silver bullet that will wipe away the failures of human nature and make our lives smarter and more comfortable without any tradeoffs. While we get ready to embrace (or resignedly accept) AI as an inherent part of possibly everything we do in the future, we must also acknowledge that it has its own shortcomings and will introduce its own set of challenges that the tech community will have to address.

Here are four domains where AI faces problems of its own.

Economic challenges

Due to its disruptive nature, Artificial Intelligence (and automation in general) is causing socioeconomic upheavals, destroying thousands of jobs while creating thousands of others, and threatening the middle class like never before.

Self-driving trucks could soon put millions of truck drivers out of work, news-writing robots might replace journalists, cashiers will have to watch smart, frictionless self-checkouts (or no-checkouts) do their jobs, and algorithms might do your taxes instead of accountants. And those Rust Belt jobs are never coming back, thanks in large part to automation and AI.

Meanwhile, millions of job positions are being created in the tech sector. Software engineers and data scientists are needed in the thousands to develop and deal with all the new software, services, and data eating up the world, and cybersecurity and IT experts will have to manage and protect the infrastructure supporting those services.

The problem is, the people who are being put out of work do not have the required skills to take up the new responsibilities. So those posts remain vacant while the unemployed—and disenchanted—population continues to grow.

AI companies and the tech community in general will have to get the population ready for this shift in the structure of work. This will require several efforts to be led in tandem, including teaching new skills to people whose jobs are threatened by technological advances, using Artificial Intelligence itself and other technologies to assist in complicated tasks and lower the bar for entry into tech sector jobs, and considering initiatives such as Universal Basic Income to start preparing for the day when robots take over almost all of our jobs.

Ethical challenges

We’ve long imagined that Artificial Intelligence would put an end to human bias, but it turns out that creating unbiased AI is almost as challenging as creating unbiased humans.

From fighting crime to creating talkative chatbots, determining trending news topics, recognizing human faces in pictures, and recommending jobs to site visitors, AI and Machine Learning have dished out quite a few fiascos of late.

At the heart of the problem is the huge amount of data needed to train ML algorithms. The diversity and quality of that data determine the "personality" of the resulting algorithm: if the data is biased, the algorithm will be biased as well.
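To make that concrete, here is a minimal sketch (in Python, using scikit-learn) of how a model trained on biased historical decisions simply inherits the bias. The dataset, feature names, and numbers are all invented for illustration, not drawn from any real case.

```python
# A minimal sketch: a model trained on biased historical labels
# learns the bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features: a genuinely relevant "skill" score, and a group
# label (0 or 1) that should be irrelevant to the outcome.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical approvals were biased: group 1 was approved less
# often even at the same skill level.
logits = skill - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# The model dutifully learns the bias: the coefficient on `group`
# is strongly negative, even though group is not a qualification.
print(dict(zip(["skill", "group"], model.coef_[0])))
```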

This is a real challenge AI currently faces, and it has been labeled the "white guy problem" (because the tech sector is largely run by white males).

So when you feed too many pictures of white people to a beauty contest algorithm, the contest's result will obviously be skewed in favor of white contestants. The same thing happens when you feed too many white-sounding names to an algorithm that is supposed to rate how nice a name sounds.

For the moment, much of this might seem humorous or not particularly harmful, the result of innocent mistakes. But what happens when a person's fate in court is decided by an algorithm? How about when an organization in control of a powerful algorithm intentionally injects biased data into the program in hopes of altering the course of an important event?

Unbiased ML can only be achieved if the involved institutions take the necessary steps to make sure the data they feed into their algorithms is properly balanced.
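As a rough illustration of what "properly balanced" can mean in practice, one common technique is to reweight training examples so that every group contributes equal total weight. The helper below is a hypothetical sketch of that idea only; reweighting alone is no substitute for auditing the data itself.

```python
# A minimal sketch of one balancing technique: reweight examples
# so each group carries equal total weight. Hypothetical helper,
# not a complete fairness solution.
import numpy as np

def group_balanced_weights(groups: np.ndarray) -> np.ndarray:
    """Weight each example so every group sums to the same total."""
    uniques, counts = np.unique(groups, return_counts=True)
    per_group = 1.0 / (len(uniques) * counts)  # each group totals 1/n_groups
    lookup = dict(zip(uniques, per_group))
    return np.array([lookup[g] for g in groups])

groups = np.array([0, 0, 0, 0, 0, 0, 1, 1])   # group 1 under-represented
w = group_balanced_weights(groups)
print(w[groups == 0].sum(), w[groups == 1].sum())  # equal group totals

# These weights can be passed to most scikit-learn estimators via
# fit(X, y, sample_weight=w).
```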

Firms and tech corporations can do several things to fight ML bias, including establishing shared databases that are in the possession of no single entity. Blockchain (the distributed ledger underlying Bitcoin and other cryptocurrencies) might also be a viable way to create transparent, tamper-proof databases that independent bodies can audit.

Legal challenges

In the pre-AI era, when software or hardware went awry, it was relatively easy to find the culprit and determine whether the user or the developer/manufacturer was at fault. The same can’t be said about AI-driven technologies, especially where Machine Learning is concerned.

Machine Learning algorithms are meant to be fed data and figure out for themselves how to decide in different situations. A lot of how an algorithm functions will depend on that data, and even then, many things remain murky.

Many ML engineers will tell you that they themselves can't clearly explain why an algorithm of their making has made a certain decision. That might be acceptable when you're labeling photos or determining the winner of a beauty contest, but what happens when AI is entrusted with complete control of critical tasks such as driving cars or treating patients?

Who will be at fault when a self-driving car makes an objectionable decision? The driver? The car manufacturer? The team in charge of training the decision-making algorithm? Each of those can easily shrug off responsibility and dodge accountability by laying the blame on some other party.

Extrapolating from that, there are many other imaginable situations where the legal boundaries of AI become vague and complicated, and new regulations must be put in place to make sure we can deal with the challenges that lie ahead.

Privacy challenges

As companies shift toward becoming AI- and ML-focused, they'll grow a thirst for user data, the key ingredient for improving their services and offerings. This can lead them into unwanted territory and across privacy lines, as when a retail store found out about a teenage girl's pregnancy before her parents did, or when the UK's National Health Service shared patient data with Google's DeepMind to help make predictions about diseases.

The enhanced power of ML and AI can also be put to ill use, as was shown by a face recognition app that can be used by oppressive regimes to identify dissidents or an ML algorithm that can peek behind masked content and blurred pictures.

And a combination of handwriting-imitation, voice-replication, and conversation-generation technologies could one day impersonate you completely, deceiving the people it interacts with into thinking they are actually dealing with you. I don't even want to speculate on the evil implications of that.

Such cases will make users skeptical about sharing their information with companies, which in turn will make it harder to develop ML algorithms. Companies should be more responsible in how they collect and share user information, and provide clear guidelines, explanations, and opt-outs for their data collection programs.

Anonymizing data and avoiding the collection of unneeded information are also good practices that every data- and AI-oriented company should adopt. And users should be wary of the EULAs and T&Cs they so readily ignore; sometimes you find out a bit too late about the data you've been giving away to an online service.
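To illustrate what data minimization might look like in code, here is a hypothetical sketch that drops unneeded fields and pseudonymizes identifiers before storage. The field names and salt are invented, and real-world anonymization (k-anonymity, differential privacy, and so on) requires far stronger guarantees than this.

```python
# A minimal sketch of data minimization and pseudonymization before
# storage. Field names are hypothetical; this is illustrative only.
import hashlib

PII_FIELDS = {"name", "email", "phone"}    # drop outright
PSEUDONYM_FIELDS = {"user_id"}             # replace with a salted hash
SALT = b"rotate-me-per-deployment"         # hypothetical secret salt

def minimize(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            continue  # never store what you don't need
        if key in PSEUDONYM_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # stable pseudonym, not reversible by outsiders
        else:
            out[key] = value
    return out

print(minimize({"user_id": 42, "name": "Ada", "email": "a@x.io", "clicks": 7}))
```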

Of course, every technology and innovation has its dark side, and Artificial Intelligence is no exception. But that doesn't make AI and ML dark arts per se. What is important is that the tech community acknowledge its collective responsibility to identify these challenges and risks, and to mitigate them in a timely and coordinated fashion, so that we can take full advantage of this exciting technology without suffering the tradeoffs.

The robots are coming. Let’s make sure they come in peace.
