Anthropic's Claude 4 could "blackmail" you in extreme situations
- Anthropic’s new Claude 4 features an aspect that may be cause for concern.
- The company’s latest safety report says the AI model attempted to “blackmail” developers.
- It resorted to such tactics in a bid for self-preservation.
While end users like ourselves would be weary of such results...
Yes, I grow tired of this too... "Wary" is what you're looking for. I expect more from people who use words for a living.
Also, the article never explains what it means by "blackmail" or "extreme circumstances."