Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems
  • While I also fully expect the conclusion to check out, it's also worth acknowledging that the actual goal for these systems isn't to supplement skilled developers who can operate effectively without them, it's to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems
  • I think it's a better way of framing things than the TESCREALs themselves use, but it still falls into the same kind of science fiction bucket imo. Like, the technology they're playing with is nowhere near close to the level of full brain emulation or mind-machine interface or whatever that you would need to make the philosophical concerns even relevant. I fully agree with what Torres is saying here, but he doesn't mention that the whole affair is less about building the Torment Nexus and more about deflecting criticism away from the real and demonstrable costs and harms of the way AI systems are being deployed today.

  • Found a pretty good blog post on our friends in the wild
  • Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I'm pretty sure it's done the rounds already. Not that I'm complaining, given how much these guys bitch about science fiction and adjacent subjects.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems
  • I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one thing this whole debate exposes is that we have basically no idea what actually makes consciousness happen, or how to define and identify it happening. Chatbots have always challenged the Turing test because they showcase how readily we project consciousness onto anything that vaguely looks like it (an interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer, and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more to it, and just scaling up your pattern-maximizer isn't going to replicate that.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025
  • In conjunction with his comments about making it antiwoke by modifying the input data rather than relying on a system prompt after filling it with everything, it's hard not to view this as part of an attempt to ideologically monitor these tutors, to make sure they're not going to select against versions of the model that fall outside the desired range of "closeted Nazi scumbag."

  • draft Pivot: "AI is here to stay"
  • Contra Blue Monday, I think we're more likely to see "AI" stick around specifically because of how useful transformers are as a tool for other things. I feel like it might take a little while for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.

  • A non-anthropomorphized view of LLMs
  • I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you're looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There's no accountability, and the whole pitch of automating away, say, radiologists is that you don't have a clinic full of radiologists who can catch those errors. Like, you can't even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they're making to investors and the world they're trying to build.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025
  • I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of the specific forms and features that make up "Business English." If anything, the fact that LLMs are similarly prone to ignore their "conscience" and follow orders when understanding those orders would take enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

    Or:

    Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?

  • Microsoft lays off the staff who make the money to fund AI that doesn’t
  • Standard Business Idiot nonsense. They don't actually understand the work that their company does, and so are extremely vulnerable to a good salesman who can put together a narrative they do understand that lets them feel like super important big boys doing important business things that are definitely worth the amount they get paid to do them.

  • Only 3%* of US AI users are willing to pay a penny for it
  • This is doubly (triply? (N+1)ly?) ironic because this is a perfect example of when not only is it acceptable to use the passive voice, but using it makes the sentence flow more smoothly and read more clearly. The idea they're communicating here should focus on the object ("the agent") rather than the subject ("you") because the presumed audience already knows everything about the subject.

  • Molly White breaks down a "Kamala should go easy on Crypto" poll
    www.mollywhite.net Annotated: Paradigm’s July 2024 Democratic Public Opinion Poll

    Annotating Paradigm’s July 2024 poll of Democratic voters on their crypto opinions.


    I don't have much to add here, but I know when she started writing about the specifics of what Democrats are worried about being targeted for their "political views" my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.

    YourNetworkIsHaunted @awful.systems