Microsoft's teenage chat bot "Tay" is in a time-out of sorts after the artificially intelligent system, which learns from interactions on social media, began spewing racist comments within a day of its launch this week, company officials said.
Geared toward 18- to 24-year-olds, Tay was launched as an experiment to conduct research on conversational understanding, with the chat bot getting smarter and offering a more personalized experience the more someone interacted with "her" on social media. Microsoft launched Tay on Twitter and messaging platforms GroupMe and Kik.
Tay is designed to get smarter and learn from conversations, but there was just one problem: She didn't understand what she was saying. Some Twitter users seized on this vulnerability, turning the naive chat bot into a racist troll.
"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft representative told ABC News in a statement today. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
It was unclear what adjustments were being made or when Tay would be back online. She no longer responds to messages, and her last tweet offers no hint of when -- if ever -- a kinder, gentler Tay will emerge from her time-out.
"c u soon humans need sleep now so many conversations today thx" -- TayTweets (@TayandYou), March 24, 2016