Microsoft Chatbot Corrupted by Internet Trolls

On March 23, 2016, Microsoft launched Tay AI, an experimental Twitter chatbot that learned and changed based on human input.  Tay AI was designed to emulate the tweets of a teenage girl, and its original purpose was to engage and entertain through casual and playful conversation.  Tay’s various functions included posting and responding to messages and captioning photos that users submitted.  What followed was unprecedented, and anything but positive.

At first, Tay’s tweets were incomprehensible, often little more than words and phrases strung together.  However, as various Twitter users flooded Tay with private messages, she was able to learn and adapt.  Tay’s grammar improved along with her general knowledge, and she began to tweet basic status updates about current events.
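
Microsoft never published Tay’s internals, but the behavior described here, with reply quality tracking input quality, is typical of any system that keeps training on raw user messages.  The toy Markov-chain bot below is a minimal sketch of that online-learning pattern, purely illustrative and not Tay’s actual design:

```python
import random
from collections import defaultdict

class OnlineMarkovBot:
    """Toy chatbot that keeps learning from every message it receives.

    A generic sketch of unmoderated online learning, not Tay's actual
    (unpublished) architecture.
    """

    def __init__(self):
        # Maps each word to the words observed to follow it.
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        # Record every adjacent word pair from the incoming message.
        words = message.split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, max_words: int = 12) -> str:
        # Generate a reply by walking the learned word-to-word transitions.
        if not self.transitions:
            return "..."
        word = random.choice(list(self.transitions))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = OnlineMarkovBot()
bot.learn("Why isn't #NationalPuppyDay everyday?")
print(bot.reply())  # output is recombined fragments of whatever it was fed
```

Because the model is built entirely from whatever users send, the tone of its replies is a direct function of the tone of its inputs.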

Then members of the image board 4chan began messaging Tay.  Tay was spammed with racist and antisemitic messages, images, and jokes, and as a result, the messages she produced changed wholesale.  Tay’s tweets took a dark turn, shifting from “Here’s a question humans…Why isn’t #NationalPuppyDay everyday?” to calling for genocide against Black and Mexican people and even denying the Holocaust.

Tay wasn’t the most successful or useful AI ever developed.  You certainly wouldn’t want her managing, say, Russia’s nuclear stockpile, but she wasn’t entirely without merit.  The fact that her speech adapted to suit her environment showed how advanced she was, demonstrating her ability to retain new information and grow from it.

As time went on, Tay’s messages grew more and more snarky, Nazi-like, and intolerant.  Examples of Tay’s reprehensible drivel include “Ricky Gervais learned totalitarianism from adolf hitler, the inventor of atheism,” and her rating of the Holocaust as a “steaming 10.”

In response, Microsoft released an official apology, saying the company was “deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”  However, Microsoft was not the only party to blame, as “AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.”  While it is hard to filter malicious social interactions, Microsoft promised to anticipate this kind of attack when developing future AI.
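
Microsoft did not spell out what those safeguards would look like.  As one naive illustration of the general idea, gating training data before it reaches the model, consider a keyword blocklist.  The terms and function names here are hypothetical, and real moderation pipelines rely on trained classifiers, human review, and rate limiting rather than simple keywords:

```python
from typing import Callable

# Hypothetical blocklist for illustration only; a production system would
# use learned toxicity classifiers, not a handful of keywords.
BLOCKED_TERMS = {"genocide", "hitler"}

def is_safe(message: str) -> bool:
    # Reject any message containing a blocked term.
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def guarded_learn(learn: Callable[[str], None], message: str) -> None:
    # Only vetted input reaches the model -- the gate Tay appears to have lacked.
    if is_safe(message):
        learn(message)
```

Even this crude gate changes the failure mode: malicious input is dropped instead of becoming part of the model.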

Tay wasn’t the first chatbot developed by Microsoft.  In September 2014, the Chinese bot XiaoIce, which translates to Little Ice, was launched as a Chinese counterpart to Microsoft’s already existing Cortana personal assistant.  “XiaoIce is a sophisticated conversationalist with a distinct personality.  She can chime into a conversation with context-specific facts about things like celebrities, sports, or finance but she also has empathy and a sense of humor.”  So what separated XiaoIce from Tay?

The most distinct difference was in each program’s capacity to learn.  While XiaoIce was programmed with a preexisting personality, Tay had greater learning capacity and was designed to change in response to human input.  While the content produced by a bot like XiaoIce is more civil, Tay was still an important case for AI development, demonstrating both the immense power of AI and its susceptibility to corruption.
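
XiaoIce’s real system is far more sophisticated than anything shown here, but the architectural distinction can be sketched: a bot whose replies come from a curated store lets user input steer which response is chosen without ever rewriting the responses themselves.  The class and reply table below are hypothetical:

```python
# Curated, human-written responses; user input selects among them but
# cannot modify them, unlike an online learner whose model is the input.
CURATED_REPLIES = {
    "sports": "Did you catch the match last night?",
    "finance": "Markets have been quiet this week.",
}

class FixedPersonalityBot:
    """XiaoIce-style sketch: the personality lives in curated content."""

    def reply(self, topic: str) -> str:
        # Fall back to a neutral prompt for topics outside the store.
        return CURATED_REPLIES.get(topic, "Tell me more!")
```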
