The Tay episode proves we’re still not ready for true AI


Over the past week, much has been made of the launch and (temporary) shutdown of Microsoft's chatbot Tay. For those of you who might not know, Tay is a machine learning project launched with the goal of conducting research and development in the field of conversational understanding. It's a bot that chats with users online, and it has a presence on several platforms, including Twitter, GroupMe and Kik. Tay is programmed to mimic the behavior of a young woman, tell jokes and comment on pictures, but she's also designed to repeat after users and learn from them in order to respond in personalized ways.

Unfortunately, Tay was shut down shortly after her launch because she was found to be making racist and offensive comments. Apparently, quirks in the bot's behavior were exploited by a subset of users to promote Nazism and attack other Twitter users. Microsoft apologized after taking Tay offline and promised to fix the issue before bringing her back.

Although short, the Tay experience is chilling in many ways, and it's a prelude to some of the challenges we'll face as artificial intelligence develops and becomes more prominent in our lives. For one thing, even though Tay is supposed to be an experiment, she has already been given a human persona and is referred to as a "she" by the media. That alone should make us think about future generations of sentient bots that will act, and be treated, like humans.

I tried to draw an analogy between Tay's shutdown and human life, and it proved to be a confusing exercise. The shutdown could be considered a sort of punishment, but it can't be compared to capital punishment, because her creators have promised she'll be back online soon. It might be equated to incarceration in a correctional facility, a temporary imprisonment during which she corrects her behavior. But even that isn't quite right, because she is technically not active during that period, which makes it something more extreme than plain imprisonment, perhaps closer to solitary confinement.

But whatever you call the experience Tay is going through, is it fair to punish her for what she learned from other users and for the comments she made, however inappropriate they might be? Should there be some legal procedure or regulation for shutting down an artificially intelligent being in the future, or do creators have full rights to do whatever they want with their creations? What happens when these artificial beings become sophisticated enough to develop emotions and the lines between humans and robots start to blur? What will define their rights then?

All of this might sound funny now, but it will one day become reality. Hell, if you had talked about something like Tay 30 years ago, people would've told you that you watch too much Star Trek and Star Wars. But we're getting closer, and we need to plan and think about what lies ahead in this uncharted territory.

Peter Lee, the corporate vice president of Microsoft Research, apologized for Tay's offensive and hurtful tweets, but we're not hearing any apologies from the people who taught her to spew profanity. Nor are they being "taken offline" or "shut down" for what they've done. Is this some sort of discrimination?

The Microsoft team will do its best to make sure Tay doesn't learn bad things in the future, but doesn't that defeat the purpose of learning itself? What's the point of calling it machine learning when it's limited in its decisions and can only take after the good things? Maybe it would have been better to give her the full power to learn and leave her to learn on her own, with some sort of "parent" guiding her through her first steps while she still can't tell right from wrong.
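To make that trade-off concrete, here's a minimal, purely hypothetical sketch in Python of what "making sure Tay doesn't learn the bad things" might look like: a moderation gate placed in front of a learn-from-users loop. The blocklist, function names and logic are all my own illustration and don't reflect anything Microsoft has described; a real system would presumably use a trained moderation model rather than a word list.

```python
# Hypothetical sketch of a moderation gate in a learn-from-users loop.
# This illustrates the trade-off discussed above; it is NOT Tay's actual design.

BLOCKLIST = {"badword1", "badword2"}  # stand-in for a real moderation model

def is_acceptable(message: str) -> bool:
    """Crude filter: reject any message containing a blocklisted term."""
    words = set(message.lower().split())
    return not (words & BLOCKLIST)

def learn_from(corpus: list[str], message: str) -> None:
    """Add a user message to the training corpus only if it passes the gate.
    The bot still 'learns', but only from pre-approved input, which is
    exactly the limitation questioned above."""
    if is_acceptable(message):
        corpus.append(message)

corpus: list[str] = []
for msg in ["tell me a joke", "badword1 hateful nonsense"]:
    learn_from(corpus, msg)

print(corpus)  # ['tell me a joke'] -- the hateful message never enters
```

The catch, of course, is that the gate's author decides in advance what counts as "bad", which is precisely the kind of constrained, pre-approved learning this paragraph questions.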

But then again, I read my own words and a chill runs down my spine, because I'm reminded of The Footprints of God, the Greg Iles novel in which an AI computer goes out of control and nearly starts a nuclear war. A computer given thinking and decision-making capabilities on a par with humans will be just as unpredictable as humans: it can do many wonderful things, and many horrible ones as well, both at far greater scales. So you could say that putting safeguards in place to prevent a "machines enslaving humans" scenario is warranted.

Tay was just an experimental project, but there will come a day when truly intelligent robots become an inherent part of our lives. There are so many things we don't know about the future of artificial intelligence, and so many ways it can go wrong and turn into a complete catastrophe.

I just hope we know what we’re doing.

 
