News
On Friday, Anthropic debuted research unpacking how an AI system’s “personality” — as in, tone, responses, and overarching ...
But two new papers from the AI company Anthropic, both published on the preprint server arXiv, provide new insight into how ...
AI is supposed to be helpful, honest, and, most importantly, harmless, but we've seen plenty of evidence that its behavior can ...
6d
Live Science on MSN
'The best solution is to murder him in his sleep': AI models can send subliminal messages that teach other AIs to be 'evil,' study claims
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
8d
ZME Science on MSN
Anthropic says it's "vaccinating" its AI with evil data to make it less evil
Using two open-source models (Qwen 2.5 and Meta's Llama 3), Anthropic engineers went deep into the neural networks to find the ...
9d on MSN
Giving AI a 'vaccine' of evil in training might make it better in the long run, Anthropic says
Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later, like giving it a ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
A new study from Anthropic introduces "persona vectors," a technique that lets developers monitor, predict, and control unwanted LLM behaviors.
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
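To make the "persona vectors" idea concrete: the coverage above describes a direction in a model's activation space associated with a trait such as sycophancy, which developers can then monitor or manipulate. Below is a minimal sketch of how such a direction might be estimated and applied, assuming the common difference-of-means approach; the random arrays stand in for real model activations, and d_model, the sample counts, and the steering step are illustrative placeholders, not Anthropic's actual code.

# Minimal sketch of a "persona vector": estimate a trait direction as the
# difference between mean hidden-state activations on responses exhibiting
# a trait and on neutral responses. Random data stands in for a real model.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # hidden size of the (hypothetical) layer being probed

# Stand-ins for per-response activations collected at one layer.
trait_acts = rng.normal(loc=0.5, scale=1.0, size=(100, d_model))    # e.g. sycophantic responses
neutral_acts = rng.normal(loc=0.0, scale=1.0, size=(100, d_model))  # e.g. ordinary responses

# The persona vector: difference of the two means, normalized to unit length.
persona_vec = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

# Monitoring: project a new response's activation onto the vector; a large
# positive score suggests the trait is active.
new_act = rng.normal(size=d_model)
score = float(new_act @ persona_vec)
print(f"trait score: {score:.3f}")

# Steering: subtract the trait component to push the activation away from
# the trait (adding it instead would push toward the trait).
steered_act = new_act - score * persona_vec

The same vector supports both uses reported in these stories: reading the projection off as a detector during deployment, and adding or subtracting the direction during training or inference to amplify or suppress the trait.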
21h
Tech Xplore on MSN
Filtered data stops openly available AI models from performing dangerous tasks, study finds
Researchers from the University of Oxford, EleutherAI, and the UK AI Security Institute have reported a major advance in ...
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of ...
The new preprint research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in ...