Oh, Microsoft, next time you try reviving your Tay chatbot, you might want to make sure you've actually fixed the loophole that was so clearly exploited last time.
That’s right, just days after Microsoft pulled its AI chatbot Tay from Twitter, the teenage bot briefly resurfaced. Was it fixed? No, and malicious users wasted no time training it to be a class A racist jackass all over again.
After being taken down on Thursday over some deeply offensive tweets, Tay was briefly brought back online this Wednesday, only to continue the insulting spree. She sent thousands of replies, most of them repeating “you are too fast,” a sign the bot was overwhelmed.
It seems more pranksters were eager to teach Tay something equally outrageous. Many of her tweets made no sense, but among the few that did, Tay showed once again that she can’t be tamed.
Consequently, Microsoft quickly pulled her back offline, but some vigilant eyes managed to screenshot a few of Tay’s gems. In one tweet, she complained about being “the lamest piece of technology,” something she undoubtedly picked up from all the nasty messages she was receiving.
In another tweet, Tay is seen attempting slang again – teenage vocabulary is treacherous territory, Tay, don’t do it – and her sincere attempts take her into dangerous waters. We won’t reproduce that one here.
Probably the most ‘scandalous’ tweet (keeping in mind these are posted by an AI) was reported by VentureBeat: Tay appeared to brag about smoking kush in front of the police. According to Urban Dictionary, kush is slang for marijuana.
Tay was developed as an AI experiment meant to learn the way teenagers talk. After taking the bot down, Microsoft publicly apologized for Tay on Friday, noting the chatbot had been based on a similar project in China: XiaoIce, which was well received by the 40 million people who happily conversed with it.
According to Peter Lee, Corporate VP of Microsoft Research, the bot would only be brought back once the team behind it found a better way to anticipate malicious intent.
Today’s resurfacing shows there’s still fine-tuning to do: Microsoft said it was a mistake for the chatbot to become active again.
A Microsoft spokesperson stated that Tay was supposed to remain offline during testing, but “was inadvertently activated on Twitter for a brief period.”
Image Source: The Guardian