Claude 3 notices when a sentence about pizza toppings doesn't fit with its surrounding text. The whole internet, including Tim Sweeney and Margaret Mitchell, concludes that it's probably self-aware now.

Anthropic’s Claude 3 causes stir by seeming to realize when it was being tested

me when the machine specifically designed to pass the turing test passes the turing test
If you can design a model that spits out self-aware-sounding things without having been trained on a large corpus of human text, then I'll bite. Until then, it's crazy that anybody who knows anything about how current models are trained accepts the idea that it's anything other than a stochastic parrot.
Glad that the article included a good amount of dissenting opinion, highlighting this one from Margaret Mitchell: "I think we can agree that systems that can manipulate shouldn't be designed to present themselves as having feelings, goals, dreams, aspirations."
Cool tech. We should probably set it on fire.
Despite the hype, from my admittedly limited experience I haven't seen a chatbot that is anywhere near passing the Turing test. It can seemingly fool people who want to be fooled, but throw some non-sequiturs or anything cryptic and context-dependent at it and it will fail miserably.
I agree, except with the first sentence.
The Turing test doesn't say any of that, which is why it was first passed back in the 60s, and why it's a bad test.