Rose: AI needs greater representation from the humanities

Doug Rose, data scientist

As Artificial Intelligence evolves from myth and buzzword into a reality that permeates every aspect of our lives, there is much speculation over what the future challenges will be. Among them is the effect of Artificial Intelligence on employment, specifically how it will displace jobs and reshape the workforce.

Many of the jobs performed by humans today are under the looming threat of being taken over by robots. But we’re still a long way from general and super AI, technologies that would rival or surpass human intelligence and reasoning.

This means there are still plenty of jobs for humans to perform, even if they require different skill sets. However, some experts believe we need to view the displacement of work from a different perspective.

Among them is Doug Rose, data science author and LinkedIn Learning instructor, who says there’s too much focus on the engineering side of AI and too little on the social and human side. He shared his views in an exclusive interview with Tech Talks.

AI is more than an engineering problem

“I think we continue to live in a golden age of quantitative thinking,” Rose says, when asked about the shortcomings in current approaches to AI development. “Engineers run some of the largest and most well-respected companies. These are the biggest players in artificial intelligence. They view AI as purely an engineering problem. This will create blind spots that will make AI less useful in the short term.”

Rose refers to the Florida police’s crime prevention software as a case study. The machine learning algorithms powering the software used by the police were supposed to predict the risk that offenders would commit repeat crimes. But they ended up yielding results that were improperly biased, labeling African American offenders as high-risk criminals more often than their white counterparts.

“The engineers didn’t make any technical errors,” Rose explains. “The training set for the machine learning algorithm reflected the data from the police department and jurors in Broward County, Florida. These artificially intelligent agents were reflecting the reality of their human counterparts.”
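
The dynamic is easy to reproduce. Here is a minimal sketch, using synthetic data rather than the actual Broward County records, of how a classifier that makes no technical errors still inherits the bias baked into its training labels:

```python
# Minimal sketch (synthetic data, not the actual risk-assessment system):
# a classifier trained on historically biased labels reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: one legitimate risk signal and one proxy for a protected group.
risk_signal = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 or 1, e.g. a demographic proxy

# Historical labels encode human bias: group 1 was labeled "high risk"
# more often than its true risk warranted.
biased_labels = (risk_signal + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5)

# The model makes no "technical error" -- it faithfully learns the data,
# including the bias baked into the labels.
model = LogisticRegression().fit(
    np.column_stack([risk_signal, group]), biased_labels.astype(int)
)
print(model.coef_)  # the group feature gets a large positive weight
```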

The Florida police story is only one of many cases that underline bias, one of several social, political and economic problems that artificial intelligence faces down the road.

“Will it be a Machine Learning expert who tries to ‘correct’ for bias?” Rose asks, then answers, “I would argue that they’re not the best person for the job.”

A number of scientists and thought leaders have warned against the adverse effects that AI might have. Others have tried to tackle them in diverse ways, such as pushing for more transparency in algorithms or establishing industry partnerships.

But Rose believes we must start thinking outside the box and look toward other domains. “These questions will be a much bigger part of artificial intelligence than what’s currently being discussed,” he explains. “It’s all focused on engineering and not on the consequences of hard-coding human behavior. That’s why you need greater representation from the humanities. They can help translate human behavior into something more useful. Much of our behavior shouldn’t be blindly quantified.”

What happens when human sciences are left out

Rose reflected on this issue in a recent article in which he examined Microsoft’s failed Tay chatbot experiment, the teenage persona that turned pro-Nazi within a day of its launch.

“I was surprised that the engineers were surprised by this outcome,” Rose says, when I ask him what lesson was to be learned from the episode. “Tay.AI was designed to parrot back what people were saying on Twitter. She did this as a way to seem intelligent. It’s a bit like what you see in elementary school. Kids will repeat the stuff that the cool kids say in an effort to seem acceptable.”

However, Rose points out, in contrast to chatbots, even at this age kids have a pretty good background in the humanities: they understand culture and have a sense of acceptable norms. “Tay didn’t have this so it only took a few hours for her to become a Nazi.”

Again, the problem was that the engineers were solely focused on the capability. “They wanted to create a bot that could analyze massive data sets in real time,” Rose says. “Then they would take the ‘trending’ parts of that data set and rebroadcast them with some natural language processing.”
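
That design goal is straightforward to caricature in code. The sketch below, fed a hypothetical tweet stream, implements the rebroadcast-the-trend loop Rose describes; note that nothing in it encodes which phrases are culturally acceptable:

```python
# Minimal sketch of the "rebroadcast what's trending" loop Rose describes.
# The tweet stream and phrase counting are hypothetical simplifications;
# the point is that nothing here encodes cultural norms -- whatever
# trends gets amplified, benign or toxic.
from collections import Counter

def trending_phrases(tweets: list[str], top_n: int = 3) -> list[str]:
    """Return the most repeated phrases in the stream."""
    counts = Counter(tweet.lower() for tweet in tweets)
    return [phrase for phrase, _ in counts.most_common(top_n)]

def rebroadcast(tweets: list[str]) -> list[str]:
    # No notion of acceptability: the bot simply parrots what is popular.
    return [f"Everyone's saying: {phrase}" for phrase in trending_phrases(tweets)]

# A benign stream produces benign output; a coordinated toxic stream
# produces toxic output, which is exactly what happened to Tay.
print(rebroadcast(["cats are great", "cats are great", "hello world"]))
```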

How could the disaster have been prevented? “An anthropologist or a cultural expert would’ve easily predicted the outcome of remixing and rebroadcasting Twitter trends,” Rose says. “As far as I know they didn’t have a seat at the table. It was an engineering project and so it got a heavily engineered result. There is nothing human about Tay because there was no one in the room who studied human behavior.”

Errors can become more critical down the road

The Tay chatbot didn’t cause any real damage. But AI is entering more critical domains, such as medicine and self-driving cars, where mistakes can prove disastrous, even fatal.

What are the implications? “Right now there are software engineers quantifying deeply ethical decisions,” Rose says. “Google’s self-driving cars were originally designed to collide with the smaller of two objects when presented with only two choices. Presumably this was an effort to protect the driver. After that they tweaked their algorithm to avoid hitting smaller moving objects. Again this was presumably to protect pedestrians and cyclists. The reason I used ‘presumably’ with both of these is that these discussions have been internal to the company.”
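
To see why this worries Rose, it helps to picture what such a rule might look like in software. The sketch below is purely illustrative, with a hypothetical Obstacle type and size heuristic rather than anything from Google’s actual code, but it shows how an ethical judgment ends up buried in an ordinary function:

```python
# Illustrative sketch of a hard-coded collision policy like the one Rose
# describes. The Obstacle type and size heuristic are hypothetical; the
# point is that an ethical judgment is buried in an ordinary function.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    size: float      # e.g. estimated frontal area in square meters
    moving: bool

def choose_collision_target(a: Obstacle, b: Obstacle) -> Obstacle:
    """Given only two unavoidable choices, pick which obstacle to hit."""
    # Original policy: hit the smaller object (presumably protects the
    # driver). Later tweak: spare small *moving* objects (presumably
    # protects pedestrians and cyclists). Both are value judgments.
    small, large = sorted([a, b], key=lambda o: o.size)
    return large if small.moving else small

print(choose_collision_target(
    Obstacle("cyclist", size=0.8, moving=True),
    Obstacle("parked truck", size=8.0, moving=False),
).label)  # -> "parked truck"
```

Change the sort key or the moving-object exception and the car’s ethics change with it, and nothing in the code marks that line as a moral decision rather than a technical one.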

We’ve come to recognize and accept that human decisions are flawed and not absolute. But hard-coding those decisions takes on a whole new ethical weight, Rose points out. “When will our self-driving car decide that we need to ‘take one for the team?’ Or will our cars be designed to protect the driver at the expense of others in our society?”

These are not engineering decisions, and yet, because AI is seen as an engineering problem, these ethical decisions are falling into the laps of the wrong experts, Rose reasons.

How to fix the problem

How can the industry involve more people who have a background in the humanities as opposed to engineering skills? “Many organizations are going to run into this problem on their own,” Rose answers. “When they do, they’re going to look for humanities and cultural experts to help them update their AI to more closely mimic human behavior. There are also consulting companies like ReD Associates that are working to get these ideas into the boardroom.”

The current belief is that in order to have a good job in the new economy, you have to be able to provide some technical capability. This mindset has driven a big push in recent decades to get everyone into science, technology, engineering and math (STEM).

Rose believes the idea has run its course. “If you’re interested in STEM then that’s great, but if you’re not then there’s no reason to accept that you’ve missed the bus.”

Why’s that? Because the next steps in artificial intelligence will be about mimicking human behavior. “STEM will help those who are solely interested in engineering this capability,” Rose explains. “They’ll help set the table, but they’re not the ones you want cooking the meal. If you’re in a field that’s about better understanding human behavior then there’ll be a place for you when more of these AI agents find themselves perplexed.”

I ask Doug what skills would be more in demand. “Again you want to find skills that attempt to better understand human behavior,” he says, reminding me again of how different humans and machines are. “These could be language specialists, cultural experts, anthropologists and people who specialize in arts and literature.”

He compares the situation to an old joke: “An optimist will tell you the glass is half-full. The pessimist says it’s half-empty. The engineer will tell you the glass is twice the size it needs to be.”

“The engineers are making a lot of progress in building the glass,” Rose explains. “Now we need our optimists and pessimists to decide how we fill it. Remember that the AI capability in itself is not the core value proposition. The real value is upsizing and mimicking human behavior. Not just our raw data, but our aspirations, purpose and intent.”
