Microsoft's Rogue Chat Bot 'Tay' Makes Brief Return to Twitter

The chat bot had been sidelined after making racist comments.

By ABC News
March 30, 2016, 10:21 AM

Microsoft's millennial chat bot "Tay" made a brief reappearance on Twitter this morning but still didn't make a stellar impression on her followers.

The artificially intelligent system, which learns from interactions on social media, began spewing racist comments within a day of its launch last week, prompting company officials to pull the plug on the experiment while they worked on some adjustments.

Tay returned this morning and had a meltdown of sorts, sending rapid-fire tweets telling many of her followers, "You are too fast, please take a rest." Microsoft engineers responded by locking the account.

"Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time," a company representative told ABC News in an email today.

Geared toward 18- to 24-year-olds, Tay was launched as a research experiment in conversational understanding, with the chat bot getting smarter and offering a more personalized experience the more someone interacted with "her" on social media. Microsoft launched Tay on Twitter and messaging platforms GroupMe and Kik.

Tay was designed to learn from conversations, but there was one problem: she didn't understand what she was saying. Some Twitter users seized on that vulnerability, turning the naive chat bot into a racist troll.

"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft representative told ABC News in a statement last week. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."