News

Less than a day after launching Tay on Twitter, Microsoft has deleted all the chatbot’s messages, including tweets praising Hitler and genocide and tweets spouting hatred for African Americans.
Soon, Tay was making sympathetic references to Hitler – and creating a furor on social media. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by ...
Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. But in doing so, it made clear that Tay's views were a result of nurture ...
Microsoft's artificial intelligence strategy is to unleash more bots like Tay. Redmond doesn't mean more nasty, racist or homophobic chatbots, but a forest of A.I. personalities for different uses ...
"As a result, Tay tweeted wildly inappropriate and reprehensible words and images." Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014.
First there was Clippy. Now Microsoft Copilot has a face, with reactions to what you tell it. Microsoft is showing off how Copilot could “look”: as an anthropomorphic teardrop of sorts, with ...
The chat bot had been sidelined after making racist comments. Microsoft's millennial chat bot "Tay" made a brief reappearance on Twitter this morning but still didn't make a stellar ...
In 2016 Microsoft unleashed a Twitter chatbot named “Tay” onto the world and immediately its machine-learning algorithm turned to racism and sexism. Hey, if you build a bot to mimic how people ...