Skip Navigation

InitialsDiceBearhttps://github.com/dicebear/dicebearhttps://creativecommons.org/publicdomain/zero/1.0/„Initials” (https://github.com/dicebear/dicebear) by „DiceBear”, licensed under „CC0 1.0” (https://creativecommons.org/publicdomain/zero/1.0/)BL
Posts
80
Comments
881
Joined
2 yr. ago

  • Text version just came up - excellent read as usual.

    A lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

    AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    While we’re talking about what rationalists actually believe, I’d be remiss not to mention one deeply unpleasant thing about the rationalist subculture: they are really, really into race and IQ theories and scientific racism. Overwhelmingly.

    Considering the whole thing's deeply fucking steeped in fascism, I'm not shocked.

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 24th August 2025

    MoreWrite @awful.systems

    A Mini-Essay on Tech's Future Prospects

  • Another day, another case of "personal responsibility" used to shift blame for systemic issues, and scapegoat the masses for problems bad actors actively imposed on them.

    Its not like we've heard that exact same song and dance a million times before, I'm sure the public hasn't gotten sick and tired of it by this point.

    Probable hot take: this shit's probably also hampering people's efforts to overcome self-serving bias, as well - taking responsibility for your own faults is hard enough in a vacuum, its likely even harder when bad actors act with impunity by shifting the blame to you.

  • I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

    In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can't run Doom).

    In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it released in 2020.

  • Ed Zitron's given his thoughts on GPT-5's dumpster fire launch:

    Personally, I can see his point - the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment with GPT-5, and the "deaths" of their AI waifus/therapists this has killed whatever dopamine delivery mechanisms they've set up for themselves.

  • Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.

    Me, two months ago

    Looks like I was on the money - Reddit's began limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit's pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon - Reddit's given them a pretty solid excuse to use.

  • You're dead right on that.

    Part of me suspects STEM in general (primarily tech, the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascists at worst.

  • I wrote yesterday about red-team cybersecurity and how the attack testing teams don’t see a lot of use for AI in their jobs. But maybe the security guys should be getting into AI. Because all these agents are a hilariously vulnerable attack surface that will reap rich rewards for a long while to come.

    Hey, look on the bright side, David - the user is no longer the weakest part of a cybersecurity system, so they won't face as many social engineering attempts on them.

    Seriously, though, I fully expect someone's gonna pull off a major breach through a chatbot sooner or later. We're probably overdue for an ILOVEYOU-level disaster.

  • SneerClub @awful.systems

    Sarah Lyons on AI doom crankery

    MoreWrite @awful.systems

    Some Quick-and-Dirty Thoughts on Technological Determinism

    SneerClub @awful.systems

    AI disagreements - Brian Merchant on our very good friends

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 10th August 2025

    MoreWrite @awful.systems

    Some Off-The-Cuff Predictions about the Next AI Winter

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025

    NotAwfulTech @awful.systems

    Godot Showcase - Dogwalk

    MoreWrite @awful.systems

    A Mini-Essay on Newgrounds' Resistance to AI Slop

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 6th July 2025

    TechTakes @awful.systems

    Rolling the ladder up behind us - Xe Iaso on the LLM bubble

    TechTakes @awful.systems

    Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 29th June 2025

    MoreWrite @awful.systems

    Some More Quick-and-Dirty Thoughts on AI's Future

    NotAwfulTech @awful.systems

    We started porting LEGO Island to... everything?

    SneerClub @awful.systems

    The Psychology Behind Tech Billionaires

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 15th June 2025

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 8th June 2025