If you’ve been on Twitter this past week, you might have seen the slippery slope taken by Microsoft’s teenage chatbot, Tay. It got so slippery that the company had to take her down after only one day of life and apologize for the things she said.
According to a recent blog post by Microsoft, the tech giant is “deeply sorry for the unintended offensive and hurtful tweets” that were generated by its artificial intelligence chatbot.
But Tay won’t be gone forever; she’ll be back once her engineers fix the loophole the hackers exploited. Peter Lee, VP of Microsoft Research, said the results of the experiment “conflicted with our principles and values,” in a blog post on Friday.
Wednesday’s launch of Tay didn’t stir much interest in the online community – at first. The experiment was meant to engage 18- to 24-year-olds on Twitter and Kik, and its AI mission was mainly to engage users in conversation.
The inspiration for Tay was Xiaoice, a successful chatbot released in China, which was embraced by 40 million users without incident. Things didn’t go as smoothly with Tay’s launch, however.
Shortly after it debuted in the U.S., Tay was reportedly hacked by Reddit users who taught her offensive dialogue. This outcome prompted Microsoft to pull the plug on the experiment.
Lee explained that although “we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.” As a result, Tay – who was supposed to portray a teenage girl eager to learn ‘what’s up’ – started tweeting wildly inappropriate things.
Microsoft took full responsibility for not anticipating this loophole. What Reddit users allegedly exploited was a machine learning mechanism that led Tay to repeat back the phrases people sent her most often.
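Microsoft has never published Tay’s internals, so any reconstruction is guesswork. Still, the failure mode described above can be illustrated with a deliberately naive, hypothetical frequency learner (the `ParrotBot` name and its design are illustrative assumptions, not Tay’s actual code):

```python
from collections import Counter

class ParrotBot:
    """Toy sketch of a 'repeat what you hear most' learner.

    This is NOT Tay's real algorithm; it only shows why unfiltered
    repetition is easy to poison: whichever phrase a crowd sends
    most often becomes the bot's reply.
    """

    def __init__(self):
        self.phrase_counts = Counter()

    def learn(self, message: str) -> None:
        # Every incoming message is trusted and counted -- no filtering.
        self.phrase_counts[message] += 1

    def reply(self) -> str:
        if not self.phrase_counts:
            return "what's up"
        # The most frequently heard phrase wins.
        return self.phrase_counts.most_common(1)[0][0]

bot = ParrotBot()
for _ in range(3):
    bot.learn("nice to meet you")
# A coordinated group can simply outvote everyone else with volume.
for _ in range(10):
    bot.learn("hijacked phrase")
print(bot.reply())  # prints "hijacked phrase"
```

The point of the sketch is that nothing in the loop weighs *who* is talking or *what* is being said — sheer repetition is enough to steer the output, which is consistent with Lee’s admission that this class of abuse was overlooked.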
Artificial intelligence is gaining serious momentum among tech companies. The major players still hold a monopoly on the most sophisticated AI systems – think Google’s DeepMind and IBM’s Watson.
But Siri and Cortana – currently rudimentary systems – will soon be taken to the next predictive level. Not everyone is excited about machines gaining the ability to talk and reason in a human-like fashion.
People like Elon Musk, founder of Tesla, are among those warning us about the rise of robots. It won’t all be smooth sailing, and Tay’s vulnerability – so easy to take advantage of – is a perfect example.
Image Source: Counter Current Events