News

Less than a day after launching Tay on Twitter, Microsoft has deleted all the chatbot’s messages, including tweets praising Hitler and genocide and tweets spouting hatred for African Americans.
The chatbot had been sidelined after making racist comments. Microsoft's millennial chatbot "Tay" made a brief reappearance on Twitter this morning but still didn't make a stellar ...
Microsoft's artificial intelligence strategy is to unleash more bots like Tay. Redmond doesn't mean more nasty, racist or homophobic chatbots, but a forest of A.I. personalities for different uses ...
Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. But in doing so, it made clear that Tay's views were a result of nurture ...
This feels familiar. Grok resembles Microsoft's 2016 hate-speech-spouting Tay chatbot, also trained on Twitter data and set loose on Twitter before being shut down. But there's a crucial difference.
Some have compared it to Microsoft's disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks.
"As a result, Tay tweeted wildly inappropriate and reprehensible words and images." Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014.
On Wednesday, Microsoft gave birth to Tay, an artificial-intelligence chatbot meant to speak in the language of 18- to 24-year-olds over Kik, GroupMe and Twitter.