News
Google rolls out Gemini AI chatbot and assistant. A college student in Michigan received a threatening response during a chat with Google's AI chatbot Gemini.
A Google-made artificial intelligence program verbally abused a student seeking help with her homework, ultimately telling her to "Please die." AI, yi, yi. ...
In April 2024, a group of Google DeepMind researchers published a study warning that understudied persuasive generative AI systems, including human-like AI companions, could cause significant harm ...
A claim that Google's artificial intelligence (AI) chatbot, Gemini, told a student to "please die" during a chat session circulated online in November 2024. One popular post on X shared the claim ...
Google chatbot Gemini told a user "please die" during a conversation about challenges aging adults face, violating the company's policies on harmful messages. Fox Business.
As Garcia's lawyers tell it, rather than take on a safety risk "under its own name," Google "encouraged" the engineers to keep going. This supposedly prompted De Freitas and Shazeer's exits in ...
Google reportedly faces a fresh Justice Department probe over whether it violated antitrust law through its partnership with artificial intelligence chatbot firm Character.AI.
Google’s Gemini chatbot can now remember things like info about your life, work, and personal preferences. As flagged by posters on X (and Google’s official account), a “memory” feature ...
A version of this article appears in print in Section B, Page 6 of the New York edition with the headline: Google Introduces A.I. Chatbot, Signaling Big Changes to Search.