Posts 100 · Comments 1,172 · Joined 2 yr. ago

  • The purpose of artificial intelligence is to lie and deceive, so seeing it used to make "synthetic data" is infuriating, but not shocking.

    You or I might think that claiming a chatbot model simulates human psychology was obviously a weird and foolish claim. So we have a response paper: “Large Language Models Do Not Simulate Human Psychology.” [arXiv, PDF]

    That this has to be said at all pisses me off, but it's at least nice to see there's some pushback against the knowledge destruction machine.

  • (I don't know why, but part of me's saying the quantum bubble isn't gonna last long. It's probably the fact the AI bubble is still going - when that bursts, the sheer economic devastation it'll cause will likely burst the quantum bubble as well.)

    In this paper, Gutmann is telling cryptographers not to worry too much about quantum computing. Though cryptographers have still been on the case for a couple of decades, just in case there’s a breakthrough.

    Cryptographers do tend to be paranoid about threats to encryption. Given how every single government's hellbent on breaking it or bypassing it, I can't blame them on that front.

    The AI bubble launched with a super-impressive demo called ChatGPT, and quantum computing doesn’t have anything like that. There are no products. But the physics experiments are very pretty.

    Moreover, quantum can't really break into the consumer market like AI has. AI had slopgens of all stripes and supposedly sky-high adoption (through [forcing it on everyone](https://www.bloodinthemachine.com/p/how-big-tech-is-force-feeding-us), [discussion](https://awful.systems/post/5348844)); quantum's gonna have none of that.

    (I don't see the general public falling for the quantum hype, either, given how badly they got burned by the AI hype.)

  • New edition of AI Killed My Job, giving a deep dive into how genAI has hurt artists. I'd like to bring particular attention to Melissa's story, which is roughly halfway through, specifically the ending:

    There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse) it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time.

  • Given how gen-AI has utterly consumed the tech industry over these past two years, I see very little reason to give the benefit of the doubt here.

    Focusing on Nvidia, they've made billions selling shovels in the AI gold rush (inflating their stock price in the process), and have put billions more into money-burning AI startups to keep the bubble going. They have a vested interest in forcing AI onto everyone and everything they can.

  • Nvidia and California College of the Arts Enter Into a Partnership

    Oh, I'm sure the artists enrolling at the CCA are gonna be so happy to hear they've been betrayed.

    The collaboration with CCA is described in today’s announcement as aiming to “prepare a new generation of creatives to thrive at the intersection of art, design and emerging technologies.”

    Hot take: There is no "intersection" between these three, because the "emerging technologies" in question are a techno-fascist ideology designed to destroy art for profit.

  • And Copilot hallucinated all the way through the study.

    HORRIFYING: The Automatic Lying Machine Lied All The Way Through

    The evaluation did not find evidence that time savings have led to improved productivity, and control group participants had not observed productivity improvements from colleagues taking part in the M365 Copilot pilot.

    SHOCKING: The Mythical Infinite Productivity Machine Is A Fucking Myth

    At least 72% of the test subjects enjoyed themselves.

    Gambling and racism are two of the UK's specialties, and AI is very good at both of those. On this statistic, I am not shocked.

  • Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

    If there is, I haven't heard of it. To try and preemptively coin one, "artificial industry" ("AI" for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.

    Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

    I genuinely forgot the metaverse existed until I read this.

  • Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

    At this point, I'm gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be gathered sparingly, used carefully, and deleted when necessary, but as Objective Reality Units™ which are theirs to steal and theirs alone.

  • Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

    Adding my own two cents, the rise of gen-AI has definitely played a role here - I'm gonna quote Baldur Bjarnason directly, since he said it better than I could:

  • If AI slop is an insult to life itself, then this shit is an insult to knowledge. Any paper that actually uses "synthetic data" should be immediately retracted (and ideally destroyed altogether), but it'll probably take years before the poison is purged from the scientific record.

    Artificial intelligence is the destruction of knowledge for profit. It has no place in any scientific endeavor. (How you managed to maintain a calm, detached tone when talking about this shit, I will never know.)

  • Saw an AI-extruded "art" "timelapse" in the wild recently - the "timelapse" in question isn't gonna fool anyone who actually cares about art, but it's Good Enough™ to pass muster with someone mindlessly scrolling, and its creation serves only to attack artists' ability to prove their work was human-made.

    This isn't the first time AI bros have pulled this shit (Exhibit A, Exhibit B), by the way.

  • Burke and Goodnough are working to rectify the report. That sounds like removing the fake stuff but not the conclusions based on it. Those were determined well ahead of time.

    In a better world, those conclusions would've been thrown out as lies and Burke and Goodnough would've been fired on the spot. We do not live in that better world, but a man can dream.

  • This isn't the first time I've heard about this - Baldur Bjarnason has talked before about how text extruders can be poisoned to alter their outputs, noting the technique's potential for manipulating search results and/or serving propaganda.

    Funnily enough, calling a poisoned LLM a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that its word-prediction attempts are being manipulated to produce specific output. That framing still ascribes more sentience to spicy autocomplete than it actually has, though.

  • Not to mention, Cursor's going to be training on a lot of highly sensitive material (personal data, copyrighted code, potential trade secrets) - the moment that shit starts to leak, all hell's gonna break loose on the legal front.

  • Now, you might object: Anysphere wouldn’t be abusing just their customers’ data. Their customers’ customers’ data may have non-disclosure agreements with teeth. Then there’s personal data covered by the GDPR and so on.

    If we're lucky, this will spook customers into running for the hills and hasten Cursor's demise. Whatever magical performance benefits they're promising aren't gonna be worth getting blamed for a data breach.