Skip Navigation

InitialsDiceBearhttps://github.com/dicebear/dicebearhttps://creativecommons.org/publicdomain/zero/1.0/„Initials” (https://github.com/dicebear/dicebear) by „DiceBear”, licensed under „CC0 1.0” (https://creativecommons.org/publicdomain/zero/1.0/)BL
Posts
100
Comments
1,177
Joined
2 yr. ago

  • …You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer’s self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear sighted enough to see the real dangers and most important question… and then he totally failed to deliver. Not only that he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!

    As the cherry on top of this shit sundae, the bubble caused by said hype dealt devastating damage to the Internet and the world at large in spite of failing to create the unaligned AGI Yud was doomsaying about, and made people more vulnerable to falling for the plagiarism-fueled lying machines behind said bubble.

  • Now we need to make a logic puzzle involving two people and one cup. Perhaps they are trying to share a drink equitably. Each time they drink one third of remaining cup’s volume.

    Step one: Drink two-thirds of the cup's volume

    Step two: Piss one sixth of the cup's volume

    Problem solved

  • Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?

    Officially, you can't. Unofficially, just have one of the ferrymen tow a boat.

  • Hey, remember the thing that you said would happen?

    The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.

    EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.

  • Discovered some commentary from Baldur Bjarnason about this:

    Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing

    This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages

    (Also IMO Google translate declined substantially when they integrated more LLM-based tech)

    On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:

    • First, LLMs' poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid
    • Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.

    By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.

  • Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

    I've got time, so I'll fire off a sidenote:

    In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

    In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

  • Found a neat tangent whilst going through that thread:

    The single most common disciplinary offense on scpwiki for the past year+ has been people posting AI-generated articles, and it is EXTREMELY rare for any of those cases to involve a work that had been positively received

    On a personal note, I expect the Foundation to become a reliable source of post-'22 human-made work for the same reasons I stated Newgrounds would recently:

    • An explicit ban on AI slop, which deters AI bros and allow staff to nuke it on sight
    • A complete lack of an ad system, which prevents content farms from setting up shop
    • Dedicated quality control systems (deletion and rewrite policies, in this case) which prevent slop from gaining a foothold and drowning out human-made work