
  • They've already extruded a fake semblance of humanity for the government corruption machine, might as well extrude some flimsy botshit to "justify" its unjustifiable existence

    A new artificially-extruded minister has given its inaugural address to Albania's parliament, defending its role as "not here to replace people, but to help them".

    You exist to replace would-be resistance with a digital Quisling, shut the fuck up

    "Let me remind you, the real danger to constitutions has never been the machines but the inhumane decisions of those in power," the clanker said.

    Your purpose is to perpetrate inhumane decisions and protect those in power from accountability, shut the fuck up

    The AI also responded to constitutional concerns, noting that the law "speaks of duties, responsibilities, transparency, without discrimination."

    "I assure you, I embody these values as rigorously as any human colleague. Perhaps even more so."

    You spew lies and perpetrate bigotry with every syllable you extrude from your "corpus" of stolen humanity, shut the fuck up

  • LLMs' ability to fake solving word problems hinges on being able to crib the answer, so using aliens from cartoons (or automatically generating random names for objects/characters) will prove highly effective until AI corps can get the answers into their training data.

    As for context breaks, those will remain highly effective against LLMs pretty much forever - successfully working around a context break requires reasoning, which LLMs are categorically incapable of doing.

    Constantly and subtly twiddling with questions (ideally through automatic means) should prove effective as well - Apple got "reasoning" text extruders to flounder and fail at simple logic puzzles through such a method. (A rough sketch of the name-randomization idea follows below.)
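
    For anyone who wants to poke at this themselves, here's a minimal Python sketch of the name-randomization idea: regenerate the entity names (and numbers) in a word problem on every run, so the exact wording can't be matched against anything memorized from training data. The problem template and the `random_name`/`make_problem` helpers are hypothetical illustrations, not anything from the linked work.

    ```python
    import random

    # Hedged sketch: build a trivial word problem with freshly invented names and
    # numbers each time, so a model can't crib a memorized answer for the exact wording.

    def random_name(length: int = 6) -> str:
        """Generate a pronounceable nonsense name by alternating consonants and vowels."""
        consonants = "bcdfghjklmnpqrstvwz"
        vowels = "aeiou"
        letters = [
            random.choice(consonants if i % 2 == 0 else vowels) for i in range(length)
        ]
        return "".join(letters).capitalize()

    def make_problem() -> tuple[str, int]:
        """Return a one-line arithmetic word problem and its expected answer."""
        who, thing = random_name(), random_name()
        a, b = random.randint(2, 9), random.randint(2, 9)
        question = f"{who} has {a} {thing}s and buys {b} more. How many {thing}s does {who} have?"
        return question, a + b

    if __name__ == "__main__":
        question, answer = make_problem()
        print(question)            # e.g. "Bidora has 4 Kewozus and buys 7 more. ..."
        print("expected:", answer)
    ```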

  • A damn good find - bringing particular attention to this passage, since it pinpoints one of the major causes of the rot:

    The following doesn’t apply to everybody in technology, but it applies to enough of them: At some point STEM education was the only thing the Olds cared about because of something something Asia, and now we have a couple of generations that are highly educated on paper and comically unaware of the complexity of the world outside of WordPress plugins.

  • OT: Baldur Bjarnason has lamented how his webdev feed has turned to complete shit:

    Between the direct and indirect support of fascism and the uncritical embrace of LLMs, the overwhelming majority of the dev sites in my feed reader have turned to an undifferentiated puddle of nonsense…

    …Two years ago these feeds (I never subscribed to any of the React grifters) were all largely posts on concrete problem-solving and, y’know, useful stuff. Useful dev discourse has collapsed into a tiny handful of blogs.

  • Also accidentally posted in an old thread:

    Hot take: If a text extruder’s winning gold medals at your contest, that’s not a sign the text extruder’s good at something, that’s a sign your contest is worthless for determining skill.

  • The purpose of artificial intelligence is to lie and deceive, so seeing it used to make "synthetic data" is infuriating, but not shocking.

    You or I might think that claiming a chatbot model simulates human psychology was obviously a weird and foolish claim. So we have a response paper: “Large Language Models Do Not Simulate Human Psychology.” [arXiv, PDF]

    That this has to be said at all pisses me off, but it's at least nice to see there's some pushback against the knowledge destruction machine.

  • (I don't know why, but part of me's saying the quantum bubble isn't gonna last long. It's probably the fact that the AI bubble is still going - when that bursts, the sheer economic devastation it'll cause will likely burst the quantum bubble as well.)

    In this paper, Gutmann is telling cryptographers not to worry too much about quantum computing. Though cryptographers have still been on the case for a couple of decades, just in case there’s a breakthrough.

    Cryptographers do tend to be paranoid about threats to encryption. Given how every single government's hellbent on breaking it or bypassing it, I can't blame them on that front.

    The AI bubble launched with a super-impressive demo called ChatGPT, and quantum computing doesn’t have anything like that. There are no products. But the physics experiments are very pretty.

    Moreover, quantum can't really break into the consumer market like AI has. AI had slopgens of all stripes and supposedly sky-high adoption (through [forcing it on everyone](https://www.bloodinthemachine.com/p/how-big-tech-is-force-feeding-us), https://awful.systems/post/5348844); quantum's gonna have none of that.

    (I don't see the general public falling for the quantum hype, either, given how badly they got burned by the AI hype.)

  • New edition of AI Killed My Job, giving a deep dive into how genAI has hurt artists. I'd like to bring particular attention to Melissa's story, which is roughly halfway through, specifically the ending:

    There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse) it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time.

  • Given how gen-AI has utterly consumed the tech industry over these past two years, I see very little reason to give the benefit of the doubt here.

    Focusing on Nvidia, they've made billions selling shovels in the AI gold rush (inflating their stock price in the process), and have put billions more into money-burning AI startups to keep the bubble going. They have a vested interest in forcing AI onto everyone and everything they can.

  • Nvidia and California College of the Arts Enter Into a Partnership

    Oh, I'm sure the artists enrolling at the CCA are gonna be so happy to hear they've been betrayed

    The collaboration with CCA is described in today’s announcement as aiming to “prepare a new generation of creatives to thrive at the intersection of art, design and emerging technologies.”

    Hot take: There is no "intersection" between these three, because the "emerging technologies" in question are a techno-fascist ideology designed to destroy art for profit

  • And Copilot hallucinated all the way through the study.

    HORRIFYING: The Automatic Lying Machine Lied All The Way Through

    The evaluation did not find evidence that time savings have led to improved productivity, and control group participants had not observed productivity improvements from colleagues taking part in the M365 Copilot pilot.

    SHOCKING: The Mythical Infinite Productivity Machine Is A Fucking Myth

    At least 72% of the test subjects enjoyed themselves.

    Gambling and racism are two of the UK's specialties, and AI is very good at both of those. On this statistic, I am not shocked.

  • Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

    If there is, I haven't heard of it. To try and preemptively coin one, "artificial industry" ("AI" for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.

    Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

    I genuinely forgot the metaverse existed until I read this.

  • Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

    At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be taken sparingly, used carefully, and deleted when necessary, but as Objective Reality Units™ which are theirs to steal and theirs alone.

  • Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

    Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:

  • If AI slop is an insult to life itself, then this shit is an insult to knowledge. Any paper that actually uses "synthetic data" should be immediately retracted (and ideally destroyed altogether), but it'll probably take years before the poison is purged from the scientific record.

    Artificial intelligence is the destruction of knowledge for profit. It has no place in any scientific endeavor. (How you managed to maintain a calm, detached tone when talking about this shit, I will never know.)