The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem with all models. o1's attempts at deception also outperformed those of Meta, Anthropic, and Google AI models.
Weird way of saying "our AI model is buggier than our competitor's".
Deception is not the same as misinfo. Bad info is a bug; deception is (whether the companies making AI realize it or not) a powerful metric of success.
From my understanding, all of these language models can be simplified down to: "Based on all known writing, what's the most likely word or phrase given the current text?" Prompt engineering and other fancy terms just amount to changing the averages that the statistics produce. So threatening these models shifts the weighting such that the produced text more closely resembles the threatening words and phrases that were used in the dataset (or something along those lines). A toy sketch of that idea is below.
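To make that concrete, here is a minimal, hand-rolled sketch of "pick the most likely next word given the current text" using nothing but word-pair counts. The corpus, function names, and stimuli are made up for illustration; real models condition on the whole context with a neural network, but the commenter's point that the output just reflects training-data statistics, and that the prompt decides which statistics get consulted, comes through even at this scale.

```python
# Toy illustration, not a real LLM: a bigram counter that answers
# "given the current word, which continuation was most common in the data?"
from collections import Counter, defaultdict

corpus = (
    "if you help me i will reward you . "
    "if you threaten me i will shut you down ."
).split()

follows = defaultdict(Counter)  # word -> counts of the words seen after it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Pick the single most frequent continuation observed in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("will"))  # "reward" -- ties break by first occurrence
print(most_likely_next("shut"))  # "you"
```

Changing what's in the corpus (or which part of it the prompt steers you toward) changes which continuation wins, which is all "prompt engineering" amounts to in this cartoon version.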
An instinctive, machine-like reaction to pain is not the same as consciousness. There might be more to creatures like plants and insects, and this is still being researched, but for now most of them appear to behave more like automatons than beings of greater complexity. It's pretty straightforward to completely replicate the behavior of, e.g., a house fly in software, but I don't think anyone would argue that this kind of program is able to achieve self-awareness.
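For what it's worth, the "automaton" picture amounts to something like a fixed stimulus-response table. The sketch below is a deliberately crude caricature with made-up stimuli and responses, not a claim that real insect behavior is actually this simple:

```python
# Caricature of the "automaton" view: a fly-like agent as a hard-wired
# lookup from stimulus to response, with a default fallback.
REFLEXES = {
    "shadow_approaching": "take_off",
    "smells_food": "land_and_feed",
    "surface_contact": "groom",
}

def react(stimulus: str) -> str:
    """Map a stimulus straight to a fixed response; wander if nothing matches."""
    return REFLEXES.get(stimulus, "wander_randomly")

print(react("shadow_approaching"))  # take_off
print(react("bright_light"))        # wander_randomly
```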
I don't think "AI tries to deceive the user that it is supposed to be helping and listening to" is anywhere close to "success". That sounds like "total failure" to me.
This is a far cry from "behaves like humans". This is "roleplays behaving like what humans wrote about what they think a rogue AI would behave like", which is also not what you want for a product.
Humans roleplay behaving like what humans told them/wrote about what they think a human would behave like 🤷
For a quick example, there are stereotypical gender looks and roles, but it applies to everything: learning to speak and walk, the Bible, social media comments like this one, all the way to the Unabomber manifesto.