Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025
V0ldek @ V0ldek@awful.systems · Posts 9 · Comments 992 · Joined 2 yr. ago
Does AI make researchers more productive? What? Why would it? Apparently you can just say that and almost get published!
None of those words are in the Bible 2.0 (Tossed Salads And Scrumbled Eggs — Ludicity)
I mean, if you've ever toyed around with neural networks or similar ML models, you know it's basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.
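To make that concrete, here's a minimal sketch (not from the original comment; plain numpy assumed, everything else is made up for illustration): a toy two-layer network trained on XOR gets the task right, and its weight matrices are still just a soup of numbers that tells you nothing about the function it computes.

```python
# Minimal sketch: train a tiny numpy-only MLP on XOR, then dump its weights.
# The point is that even for a toy model, the raw numbers are unreadable.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# two-layer MLP: 2 -> 4 -> 1, sigmoid activations
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)       # hidden layer
    out = sigmoid(h @ W2 + b2)     # output
    # plain backprop on squared error, learning rate 0.5
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print("predictions:", out.round(2).ravel())  # should land near [0, 1, 1, 0]
print("W1:\n", W1.round(2))  # ...but good luck reading "XOR" off these numbers
print("W2:\n", W2.round(2))
```

And that's four weights times a handful of units; scale it up to hundreds of billions of parameters and "just look at the weights" stops being a sentence that means anything.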
There's a whole branch of ML about explainable or white-box models, because it turns out you need to take extra care and design the system around explainability in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.
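For contrast, a minimal white-box sketch (sklearn assumed here purely for illustration, not something from the post): a decision tree fit to the same toy problem can be dumped as explicit, human-readable rules, which is exactly the property you only get by designing the model to be explainable in the first place.

```python
# Minimal sketch of a "white-box" model: a decision tree on the XOR points
# can be printed back as explicit if/else splits a human can read.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
# export_text prints the learned splits on features "a" and "b" as plain rules,
# i.e. you can reason about the model's internals directly.
print(export_text(tree, feature_names=["a", "b"]))
```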
In other words, "engineers don't know how it works" can have two meanings - that they're hitting computers with wrenches hoping for the best with no rhyme or reason; or that they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at the output it's not really possible to figure out what specific training data it comes from or how to stop it from producing that output on a fundamental level. The former is demonstrably false and almost a strawman, I don't know who believes that, a lot of people that work on OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat even if externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify that. If OpenAI satisfyingly solved the explainability problem it'd be a major achievement everyone would be talking about.