Skip Navigation

InitialsDiceBearhttps://github.com/dicebear/dicebearhttps://creativecommons.org/publicdomain/zero/1.0/„Initials” (https://github.com/dicebear/dicebear) by „DiceBear”, licensed under „CC0 1.0” (https://creativecommons.org/publicdomain/zero/1.0/)BL
Posts
82
Comments
911
Joined
2 yr. ago

  • Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?

    Officially, you can't. Unofficially, just have one of the ferrymen tow a boat.

  • Hey, remember the thing that you said would happen?

    The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.

    EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.

  • Discovered some commentary from Baldur Bjarnason about this:

    Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing

    This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages

    (Also IMO Google translate declined substantially when they integrated more LLM-based tech)

    On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:

    • First, LLMs' poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid
    • Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.

    By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.

  • Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

    I've got time, so I'll fire off a sidenote:

    In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

    In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

  • Found a neat tangent whilst going through that thread:

    The single most common disciplinary offense on scpwiki for the past year+ has been people posting AI-generated articles, and it is EXTREMELY rare for any of those cases to involve a work that had been positively received

    On a personal note, I expect the Foundation to become a reliable source of post-'22 human-made work for the same reasons I stated Newgrounds would recently:

    • An explicit ban on AI slop, which deters AI bros and allow staff to nuke it on sight
    • A complete lack of an ad system, which prevents content farms from setting up shop
    • Dedicated quality control systems (deletion and rewrite policies, in this case) which prevent slop from gaining a foothold and drowning out human-made work
  • Tangential: I’ve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didn’t pollute the spaces they operate in.

    So, essentially AI slop, but with more microplastics. Given the 3D printer bros are much more limited in their ability to pollute their spaces (they have to pay for filament/resin, they're physically limited in where they can pollute, and they produce slop much slower than an LLM), they're hopefully easier to deal with.

  • Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.

    Ah, yes, artificially kneecap chip fabs' yields, I'm sure that will go over well with the capitalist overlords who own them

  • The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS' capacity to find and fix bugs.

    Baldur Bjanason has predicted that FOSS is at risk of being hit by "a vicious cycle leading to collapse", and security is a major part of his hypothesised cycle:

    1. Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.
    2. Many of these projects either bitrot serious bugs or get taken over by malicious actors who are highly motivated because they can’t relay on pervasive memory bugs anymore for exploits.
    3. OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.
    4. That decline in users leads to even more maintainers stepping back.
  • Potential hot take: AI is gonna kill open source

    Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I'm not sure it can recover from.

  • Reading through some of the examples at the end of the article it’s infuriating when these slop reports have opened and when the patient curl developers try to give them benefit of the doubt the reporter replies with “you have a vulnerability and I cannot explain further since I’m not an expert”

    At that point, I feel the team would be justified in telling these slop-porters to go fuck themselves and closing the report - they've made it crystal clear they're beyond saving.

    (And on a wider note, I suspect the security team is gonna be a lot less willing to give benefit of the doubt going forward, considering the slop-porters are actively punishing them for doing so)