While I also fully expect the conclusion to check out, it's worth acknowledging that the actual goal for these systems isn't to supplement skilled developers who can operate effectively without them; it's to replace those developers, either with the LLM tools themselves or with cheaper, worse developers who lean on the LLM tools more heavily.
I think it's a better framing than the TESCREALs themselves use, but it still falls into the same science fiction bucket imo. Like, the technology they're playing with is nowhere near the level of full brain emulation or mind-machine interface or whatever you would need to make the philosophical concerns even relevant. I fully agree with what Torres is saying here, but he doesn't mention that the whole affair is less about building the Torment Nexus and more about deflecting criticism away from the real and demonstrable costs and harms of how AI systems are being deployed today.
Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I'm pretty sure it's done the rounds already. Not that I'm complaining, given how much these guys bitch about science fiction and adjacent subjects.
I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one thing this whole debate exposes is that we have basically no idea what actually makes consciousness happen or how to define and identify it happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness onto anything that vaguely looks like it (an interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer, and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more there, and just scaling up your pattern-maximizer isn't going to replicate it.
In conjunction with his comments about making it antiwoke by modifying the input data rather than relying on a system prompt after filling it with everything, it's hard not to view this as part of an attempt to ideologically monitor these tutors, to make sure they won't select against versions of the model that fall within the desired range of "closeted Nazi scumbag."
"We made it more truth-seeking, as determined by our boss, the fascist megalomaniac."
Total fucking Devin move if you ask me.
Just throw the whole unit into the font, just to be safe. Or better yet, a river!
Also, the attempt to actually measure productivity instead of just saying "they felt like it helped" - of course they did!
Nah, we just need to make sure they properly baptise whatever servers it's running on.
Compare a $2,400/yr subscription with the average software developer's salary of ~$125,000/yr - the subscription runs about 2% of the salary.
Contra Blue Monday, I think we're more likely to see "AI" stick around specifically because of how useful Transformers are as a tool for other things. I feel like it might take a little while for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.
I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you're looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There's no accountability, and the whole pitch of automating away, say, radiologists is that you don't have a clinic full of radiologists who can catch those errors. Like, you can't even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they're making to investors and the world they're trying to build.
I mean, decontextualizing and obscuring the meaning of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of the specific forms and features that comprise "Business English." If anything, the fact that LLMs are similarly prone to ignoring their "conscience" and following orders when merely parsing and understanding those orders takes enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.
Or:
Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?
I've had similar thoughts about AI in other fields. The untrustworthiness and incompetence of the bot make the whole interaction even more adversarial than it naturally is.
Standard Business Idiot nonsense. They don't actually understand the work their company does, so they're extremely vulnerable to a good salesman who can put together a narrative they do understand - one that lets them feel like super important big boys doing important business things that are definitely worth what they get paid.
Something something built Ford tough.
This is doubly (triply? (N+1)ly?) ironic because this is a perfect example of when not only is it acceptable to use the passive voice, but using it makes the sentence flow more smoothly and read more clearly. The idea they're communicating here should focus on the object ("the agent") rather than the subject ("you") because the presumed audience already knows everything about the subject.
I think I liked this observation better when Charles Stross made it.
If for no other reason than that he doesn't start off by dramatically overstating the current state of the tech, isn't trying to sell anything, and, unlike ChatGPT, is actually a good writer.
Annotating Paradigm’s July 2024 poll of Democratic voters on their crypto opinions.

I don't have much to add here, but when she started writing about the specifics of what Democrats fear when they talk about being targeted for their "political views," my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns, the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.