Microsoft Releases AI Twitter Bot That Immediately Backfires, Gets Racist

“Hellooooooo world!!!” wrote Microsoft’s new AI bot in its first tweet yesterday morning. By the end of the day, it had declared that Hitler did nothing wrong.

Tay, launched yesterday, is an artificially intelligent bot that’s designed to appeal to teens and younger 20-somethings. Its Twitter bio reads “Microsoft’s A.I. fam from the internet that’s got zero chill!” Tay is designed to respond to conversations, learn from keywords it absorbs on Twitter, and even repeat back what people say.

Of course, this is the internet, where people love screwing with Twitter bots. In the hours following Tay’s release, the bot’s mentions were flooded with racism, sexism, screeds against feminism, Donald Trump quotes, and just about anything else you might imagine. Tay duly began repeating those tweets back.

I’m sure the Microsoft team behind this AI bot had good intentions. But anyone with a bit of internet savvy could’ve pointed out that maybe it was a bad idea to put an AI like this on Twitter, a network notorious for helping facilitate harassment and other vile behaviour. And it was an especially bad idea to let Tay repeat what other people say.

Microsoft has deleted many of the most heinous, explicitly racist tweets. (Gizmodo has collected a bunch of them.) But some remain, including the following posts, which are all live as of this afternoon:

Microsoft may want to rethink this experiment.

