News

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." ...
Microsoft apparently became aware of the problem with Tay’s racism and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the ...
Less than a day after launching Tay on Twitter, Microsoft has deleted all the chatbot’s messages, including tweets praising Hitler and genocide, and tweets spouting hatred for African Americans.
We already know that algorithms have a tendency toward bias, but the case of Microsoft's Tay chatbot demonstrates just how serious the problem can be. Tay operated through Twitter, engaging with ...
Only days after being launched to the public, Meta Platforms Inc.’s new AI chatbot has been claiming that Donald Trump won the 2020 US presidential election, and repeating anti-Semitic ...
I saw how an “evil” AI chatbot finds vulnerabilities. It’s as scary as you think. The good guys are trailing behind, too. By Alaina Yee. Senior Editor, PCWorld May 2, 2025 7:14 am PDT.
Within 24 hours, Tay was spouting racist, misogynistic, Nazi-loving views she learned from Twitter users, and she was pulled off the platform. Obviously, technology was less sophisticated back then.
Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. But in doing so, it made clear that Tay's views were a result of nurture ...