Generative AI runs on gambling addiction — just one more prompt, bro!
  • This reminds me of the "Compulsive Programmer" chapter in "Computer Power and Human Reason" (1976), and also of how I used to write code when I first started - way before LLMs were a thing and also before I studied proper engineering. That unfortunately common type of programmer follows exactly the hook-loop model, except instead of relying on an LLM to randomise the result of each loop iteration, you do it yourself by proceeding without really trying to understand the problem.

    I think this is a basic feature of programming, where a single iteration of trial and error is very fast and cheap, and where you can very easily have something that looks like it works without knowing why or even if it does. ChatGPT removes technical barriers and friction, sure, but programming was already kinda cooked. I would be interested in whether generative tools make this approach feasible in other more mature technical disciplines.

    Also, that chapter in Computer Power is well worth a read on its own, as a finely aged sneer at computering under the assumption that enough computering is a good substitute for understanding anything else.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 04 August 2024
  • Ehh, if you're calling your ML stuff AI then that's on you (and you're probably not technically serious about what you're doing anyway). Other people are pointing out that AI isn't a term that many compsci/software people would use, and neither the article (nor, afaict, the study) nor my experience suggests that ML has the same negative association as AI.

  • ChatGPT is bullshit - Ethics and Information Technology
  • The use of anthropomorphic language to describe LLMs is infuriating. I don't even think bullshit is a good term, because among other things it implies intent or agency. Maybe the LLM produces something that you could call bullshit, but to bullshit is a human thing, and I'd argue that the only reason what the LLM is producing can be called bullshit is because there's a person involved in the process.

    Probably better to think about it in terms of lossy compression. Even if that's not quite right, it's less inaccurate, and it doesn't obscure the difference between what the person brings to the table and what the LLM is actually doing.

    otherstew @awful.systems