Beware the bots: the missteps of Microsoft’s Tay

If you are going to research how computers can converse with humans, the internet is not always the best place to start. This is what Microsoft learned when it launched Tay, an artificial intelligence chatbot designed to converse on social networks like Twitter and Kik. Because Tay learned from its conversations using machine learning, an apparently 'coordinated effort' by internet communities to teach it offensive material led to the bot spewing out increasingly racist and sexist remarks. Within 24 hours, Microsoft had to shut the project down.

As TechCrunch notes, organizations building cognitive apps for open social networks need sufficient anti-abuse measures in place to keep their bots from picking up anti-social traits.
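
For illustration only, here is a minimal sketch of that idea: a crude keyword gate placed between incoming messages and a bot's learning pipeline, so flagged content never becomes training data. Every name here (BLOCKED_PATTERNS, is_abusive, ingest, the learn hook) is hypothetical, and a production system would rely on a trained toxicity classifier and human review rather than a static block list.

```python
import re

# Hypothetical anti-abuse gate: messages matching a block list are
# dropped before they can reach the bot's learning pipeline.
# The patterns below are placeholders, not a real block list.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bslur_one\b", r"\bslur_two\b")
]

def is_abusive(message: str) -> bool:
    """Return True if the message matches any blocked pattern."""
    return any(p.search(message) for p in BLOCKED_PATTERNS)

def ingest(message: str, learn) -> bool:
    """Pass a message to the learning hook only if it clears the filter.

    Returns True if the message was learned, False if it was dropped.
    """
    if is_abusive(message):
        return False  # never let flagged content become training data
    learn(message)
    return True

if __name__ == "__main__":
    corpus = []
    ingest("hello there!", corpus.append)        # learned
    ingest("you are a slur_one", corpus.append)  # dropped by the gate
    print(corpus)  # ['hello there!']
```

The key design point is that the filter sits in front of the learning step, not just in front of the bot's output: filtering replies alone would have left Tay's underlying model polluted even if the worst tweets never appeared.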

See the official response from Microsoft.

