This article buries the lede so much that many readers probably miss it completely: the important takeaway here, which is clearer in The Register's version of the story, is that ChatGPT cannot actually play chess:
“Despite being given a baseline board layout to identify pieces, ChatGPT confused rooks for bishops, missed pawn forks, and repeatedly lost track of where pieces were.”
To actually use an LLM as a chess engine without the kind of manual intervention this person did, you would need to combine it with some other software that keeps asking it for a different next move every time it suggests an invalid one. And, if you did that, it would still mostly lose, even to engines much older than Atari's Video Chess.
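The wrapper described above is essentially a retry loop: track the legal moves for the current position, ask the model for a move, and re-prompt until it names a legal one. A minimal sketch, where `ask_llm` is a stub standing in for the actual model call and the legal-move list is hardcoded for illustration (a real wrapper would use a move-generation library):

```c
#include <stddef.h>
#include <string.h>

/* Legal moves for the current position. A real wrapper would generate
 * these from the board state, not hardcode them. */
static const char *legal_moves[] = { "e4", "d4", "Nf3", "c4" };
static const size_t n_legal = sizeof legal_moves / sizeof legal_moves[0];

static int is_legal(const char *move) {
    for (size_t i = 0; i < n_legal; i++)
        if (strcmp(move, legal_moves[i]) == 0)
            return 1;
    return 0;
}

/* Stub standing in for an LLM call: here it suggests two illegal moves
 * before a legal one. A real wrapper would re-prompt the model,
 * telling it the previous suggestion was invalid. */
static const char *ask_llm(int attempt) {
    static const char *replies[] = { "Ke5", "Qh9", "e4" };
    return replies[attempt % 3];
}

/* Keep asking until the model produces a legal move, or give up. */
static const char *next_legal_move(int max_tries) {
    for (int attempt = 0; attempt < max_tries; attempt++) {
        const char *move = ask_llm(attempt);
        if (is_legal(move))
            return move;
    }
    return NULL; /* the model never produced a legal move */
}
```

A real driver would feed the returned move back into the game state and alternate turns; note that the loop only enforces legality, it does nothing to make the moves any good.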
edit: i see now that numerous people have done this; you can find many websites where you can "play chess against chatgpt" (which actually means: with chatgpt and also some other mechanism to enforce the rules). and if you know how to play chess you should easily win :)
You probably could train an AI to play chess and win, but it wouldn't be an LLM.
In fact, let's go see...
Stockfish: Open-source and regularly ranks at the top of computer chess tournaments. It uses advanced alpha-beta search and a neural network evaluation (NNUE).
Leela Chess Zero (Lc0): Inspired by DeepMind’s AlphaZero, it uses deep reinforcement learning and plays via a neural network with Monte Carlo tree search.
AlphaZero: Developed by DeepMind, it reached superhuman levels using reinforcement learning and defeated Stockfish in high-profile matches (though not under perfectly fair conditions).
Hmm. Neural networks and reinforcement learning. So non-LLM AI.
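For contrast with next-token prediction: the alpha-beta search that Stockfish builds on is plain deterministic tree search. A minimal sketch over a hardcoded toy game tree (depth 3, two moves per position, made-up leaf scores — nothing chess-specific):

```c
#include <limits.h>

/* Leaf evaluations of a toy game tree: depth 3, branching factor 2.
 * Node i's children are nodes 2i+1 and 2i+2; leaves are nodes 7..14. */
static const int leaf_score[8] = { 3, 5, 6, 9, 1, 2, 0, -1 };

/* Minimax with alpha-beta pruning: alpha is the best score the
 * maximizing side can already guarantee, beta the best for the
 * minimizing side. Branches outside [alpha, beta] are skipped. */
static int alphabeta(int node, int depth, int alpha, int beta, int maximizing) {
    if (depth == 0)
        return leaf_score[node - 7];
    for (int c = 0; c < 2; c++) {
        int child = 2 * node + 1 + c;
        int v = alphabeta(child, depth - 1, alpha, beta, !maximizing);
        if (maximizing) {
            if (v > alpha) alpha = v;
        } else {
            if (v < beta) beta = v;
        }
        if (beta <= alpha)
            break; /* prune: the opponent will never allow this line */
    }
    return maximizing ? alpha : beta;
}
```

Calling `alphabeta(0, 3, INT_MIN, INT_MAX, 1)` searches the whole tree from the root and returns 5 for these scores: the same answer full minimax gives, just with some branches skipped. Real engines add move ordering, iterative deepening, and (in Stockfish's case) an NNUE evaluation in place of the leaf table.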
you can play chess against something based on chatgpt, and if you're any good at chess you can win
You don't even have to be good. You can just flat out lie to ChatGPT because fiction and fact are intertwined in language.
"You can't put me in check because your queen can only move 1d6 squares in a single turn."
Well... yeah. That's not what LLMs do. That's like saying "A leafblower got absolutely wrecked by a 1998 Dodge Viper in a beginner's drag race". It's only impressive if you don't understand what a leafblower is.
People write code with LLMs. A programming language is just a language specialised for precise logic. That's what "AI" is advertised to be good at. How can you do that and not the other?
It's not very good at it though, if you've ever used it to code. It automates and eases a lot of mundane tasks, but still requires a LOT of supervision and domain knowledge to keep it from going off the rails or hallucinating code that's either full of bugs or will never work. It's not a "prompt and forget" thing, not by a long shot. It's just an easier way to steal code it picked up from Stack Overflow and GitHub.
As a human, I know to check how much data is going into a fixed-size buffer somewhere and bail out if it exceeds the limit. The LLM will have no qualms about putting buffer overflow vulnerabilities all over your shit, because it doesn't care; it only wants to fulfill the prompt and get something that looks like it works.
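The check being described is the classic one: test the input length against the buffer size before writing, and always leave room for the terminator. A minimal sketch (the function name and buffer size are invented for illustration):

```c
#include <stddef.h>
#include <string.h>

/* Unsafe version, the kind of thing that gets generated without a
 * second thought: strcpy(dst, src) writes past the end of dst
 * whenever src is longer than the buffer. */

/* Checked version: refuse input that doesn't fit, always terminate. */
static int copy_checked(char *dst, size_t dst_size, const char *src) {
    size_t len = strlen(src);
    if (len >= dst_size)
        return -1;              /* caller must handle oversized input */
    memcpy(dst, src, len + 1);  /* +1 copies the terminating NUL */
    return 0;
}
```

The point isn't this particular helper; it's that the bounds check exists because someone asked "what if the input is bigger than the buffer?", which is exactly the question a next-token predictor has no reason to ask.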