
  • Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:

    • "Communication intended for the user", referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article on…", "Up to my last training update…", and "as a large language model")
    • Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)

    Ilyas Lebleu, who contributed to the update in policy, has described this as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being sufficiently subtle to avoid speedy deletion is virtually zero.

  • Cloudflare has publicly announced the obvious about Perplexity stealing people's data to run their plagiarism, and has responded by de-listing them as a verified bot and adding heuristics specifically to block their crawling attempts.

    Personally, I'm expecting this will significantly hamper Perplexity going forward, considering Cloudflare's just cut them off from roughly a fifth of the Internet.

  • I’m sure you can think of hypothetical use cases for Google Glass and Meta AI RayBans. But these alleged non-creepshot use cases already failed to keep Google Glass alive. I predict they won’t be enough to keep Meta AI RayBans alive.

    It's not an intentional use case, but it's an easy way to identify people who I should keep far, far away from.

    It turns out normal people really do not like this stuff, and I doubt the public image of tech bros has improved between 2014 and 2025. So have fun out there with your public pariah glasses. If you just get strong words, count yourself lucky.

    On a wider note, I wouldn't be shocked if we heard of Rapist RayBan wearers getting beaten up or shot in the street - if the torching of Waymos in anti-ICE protests, the widespread vandalism of Cybertrucks, and Luigi Mangione's status as a folk hero are anything to go by, I'd say the conditions are right for cases of outright violence against anyone viewed as supporting the techbros.

    EDIT: Un-fucked the finishing sentence, and added a nod to Cybertrucks getting fucked up.

  • Ran across a pretty solid sneer: Every Reason Why I Hate AI and You Should Too.

    Found a particularly notable paragraph near the end, about the people fixating on "prompt engineering":

    In fear of being replaced by the hypothetical ‘AI-accelerated employee’, people are forgoing acquiring essential skills and deep knowledge, instead choosing to focus on “prompt engineering”. It’s somewhat ironic, because if AGI happens there will be no need for ‘prompt-engineers’. And if it doesn’t, the people with only surface level knowledge who cannot perform tasks without the help of AI will be extremely abundant, and thus extremely replaceable.

    If you want my take, I'd go further and say the people who can't perform tasks without AI will wind up borderline-unemployable once this bubble bursts - they're gonna need a highly expensive chatbot to do anything at all, they're gonna be less productive than AI-abstaining workers whilst falsely believing they're more productive, they're gonna be hated by their coworkers for using AI, and they're gonna flounder if forced to come up with a novel/creative idea.

    All in all, any promptfondlers still existing after the bubble will likely be fired swiftly and struggle to find new work, as they end up becoming significant drags to any company's bottom line.

  • I don’t need that, in fact it would be vastly superior to just “steal” from one particularly good implementation that has a compatible license you can just comply with. (And better yet to try to avoid copying the code and to find a library if at all possible). Why in the fuck even do the copyright laundering on code that is under MIT or similar license? The authors literally tell you that you can just use it.

    I'd say it's a combo of them feeling entitled to plagiarise people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).

    On a wider front, I expect this AI bubble's gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major pillar of the current FOSS ecosystem, and now that the automated plagiarism machines have kneecapped that expectation, programmers are likely gonna be much stingier with sharing their work.

  • Ran across a notable post on Bluesky recently - seems there's some alt-text drama that's managed to slip me by:

    On a wider note, I wouldn't be shocked if the AI bubble dealt some setbacks to accessibility in tech - given the post I've mentioned earlier, there are signs it's stigmatised alt-text as being an AI Bro Thing™.

  • Saatchi says you can type in a few words and the AI will generate scenes — or even a whole show. There are two test shows. One is Exit Valley, which is a copy of South Park set in Silicon Valley. Here’s an excerpt. [Vimeo]

    For anyone who decides not to click, you're not missing out - the "episode" was equivalent to one of those millions of shitty GoAnimate "grounded" animations you can find on YouTube. (In retrospect, GoAnimate/Vyond was basically AI slop before AI slop was a thing.)

    The closest that has to a use case is the guys who will do obnoxious parodies because the rights holders won’t like them. Let’s get Mickey Mouse doing something edgy!

    Considering Tay AI was deliberately derailed into becoming a Hitler-loving sex robot, and the first wave of AI slop featured deliberately offensive Pixar-styled "posters", I can absolutely see this happening. (At least until The Mouse starts threatening Showrunner with getting sued into the ground.)

  • i am not surprised that they are all this dumb: it takes an especially stupid person to decide “yes, i am fine allowing this machine to speak for me”. even more so when it’s made clear that the machine is a stochastic parrot trained on the exploitation of the global south via massive amounts of plagiarism and that it also cooks the planet

    And is also considered a virtual "KICK ME" sign in all but the most tech-brained parts of the 'Net.

  • Found an attempt to poison LLMs in the wild recently, aimed at trying to trick people into destroying their Cybertrucks:

    This is very much in the same vein as the iOS Wave hoax which aimed to trick people into microwaving their phones, and Eduardo Valdés-Hevia's highly successful attempt at engorging Google's AI summary systems.

    Poisoning AI systems in this way isn't anything new (Baldur Bjarnason warned about it back in 2023), but seeing it used this way does make me suspect we're gonna see more ChatGPT-powered hoaxes like this in the future.

  • Fear: There’s a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure.

    I can think of some more realistic ideas. Like AI-generated foraging books leading to people being poisoned, or chatbot-induced psychosis leading to suicide, or AI falsely accusing someone and sending a lynch mob after them, or people becoming utterly reliant on AI to function, leaving them vulnerable to being controlled by whoever owns whatever chatbot they're using.

    All of these require zero jumps in practicality, and as a bonus, they don't need the "exponential growth" setup LW's AI Doomsday Scenarios™ require.

    EDIT: Come to think of it, if you really wanted to make an AI Doomsday™ kinda movie, you could probably do an Idiocracy-style dystopia where the general masses are utterly reliant on AI, the villains control said masses through said AI, and the heroes have to defeat them by breaking the masses' reliance on AI.