Claude 3 notices when a sentence about pizza toppings doesn't fit with its surrounding text. The whole internet, including Tim Sweeney and Margaret Mitchell, concludes that it's probably self-aware now.
me when the machine specifically designed to pass the Turing test passes the Turing test
If you can design a model that spits out self-aware-sounding things without having been trained on a large corpus of human text, then I'll bite. Until then, it's crazy that anybody who knows anything about how current models are trained accepts the idea that they're anything other than stochastic parrots.
Despite the hype, from my admittedly limited experience I haven't seen a chatbot that is anywhere near passing the Turing test. It can seemingly fool people who want to be fooled, but throw some non-sequiturs or anything cryptic and context-dependent at it and it will fail miserably.
I don't think a computer program has passed the Turing test without interpreting the rules in a very lax way and heavily stacking the deck in the bot's favor.
I'd be impressed if a machine did something hard, even if the machine was specifically designed to do that. Something like proving the Riemann hypothesis, or actually passing an honest version of the Turing test.
Yeah, I don't think the Turing test is that great a benchmark for establishing genuine artificial intelligence, but I also maintain that the current state of the art doesn't pass the Turing test to an intellectually honest standard, and it certainly didn't in the 60s.