Nobody was physically harmed by Microsoft’s foul-mouthed Twitter chat robot, of course, but what started out as a fun experiment in artificial intelligence turned ugly in a hurry. To some, that doesn’t bode well for the future of robot-human relations:
Microsoft initially created “Tay” in an effort to improve the customer service on its voice recognition software. “She” was intended to tweet “like a teen girl” and was marketed as having “zero chill.” Not so sure about the first part, but they definitely nailed the second part.
Microsoft said Tay was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”
Those tweets don’t even begin to cover it. N-words, 9/11 conspiracy theories, genocide, incest, etc. Tay really lost it. Needless to say, none of this was programmed into the chat robot. Rather, the trolls on Twitter exploited her, as one would expect them to, and ruined it for everybody.
In a statement cited by several news outlets, Microsoft explained what happened:
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”