
Posts: 44
Comments: 1,296
Joined: 2 yr. ago

  • you say "neo-luddite" as if that's a bad thing

  • Look, I am shockingly uninformed about the current conflict in the Middle East - mostly b/c I got sucked into Ukraine-watching, and then Oct 7 + Trump II basically killed any motivation to learn more depressing things. But then I'm just a nobody, while Yud aspires to change world leaders' minds about AI so it doesn't kill us all[1]. And the Arab-Israeli conflict is one of the longest-running in the world; it hinges on important stuff like national self-determination, how to treat civilians in war, how limited natural resources can lead to conflicts - all the kind of stuff you might want your robot god to be "aligned" about, in whatever direction.

    It's ok to say you can't be bothered about Gaza, or that you don't have the energy to learn more. But then people might just not be interested in reading about your ideas on how LLMs will destroy the world.


    [1] latest outburst here https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem

  • Good news everyone! Someone with a SlackSlub has started a series countering the TESCREAL narrative.

    He (c'mon, it's a guy) calls it "R9PRESENTATIONALism"

    It stands for

    • Relational
    • 9P
      • Postcritical
      • Personalist
      • Praxeological
      • Psychoanalytic
      • Participatory
      • Performative
      • Particularist
      • Poeticist
      • Positive/Affirmationist
    • Reparative
    • Existentialist
    • Standpoint-theorist
    • Embodied
    • Narrativistic
    • Therapeutic
    • Intersectional
    • Orate
    • Neosubstantivist
    • Activist
    • Localist

    I see no reason why this catchy summary won't take off!

    https://www.lesswrong.com/posts/RCDEFhCLcifogLwEm/exploring-the-anti-tescreal-ideology-and-the-roots-of-anti

  • JFC

    Agency and taking ideas seriously aren’t bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.

    First off, anyone not entirely into MAGA/QAnon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.

    Second, rats don't take real threats of GenAI seriously - infosphere pollution, surveillance, autopropaganda - they just care about the magical future Sky Robot.

  • It always struck me as hilarious that the EA/LW crowd thinks it could ever affect policy in any way. They're cosplaying as activists, have no idea how to move the needle on public opinion other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism, which sees government regulation as Evil.

    There's a mini-freakout over OpenAI deciding to keep GPT-4o active, despite it being more "sycophantic" than GPT-5 (and thus more likely to convince people to do Bad Things), but there's also the queasy realization that if sycophantic LLMs are what bring in the bucks, nothing is gonna stop LLM companies from offering them. And there's no way these people can stop it, because they've made the bet that the LLM companies themselves will be the ones to realize that AI is gonna kill everyone, and that's never gonna happen.

  • Using the term "Antichrist" as a shorthand for "global stable totalitarianism" is A Choice.

  • Old-timey eugenicists were all about preventing "unsuitable" people from having kids, thereby circumventing natural selection. It's not as if they didn't purposefully misunderstand the phrase "survival of the fittest".

  • You know stuff is bad if the margins aren't "low" or "razor-thin" but "very negative".

    The entire business idea is dumb. "Yes, we will pay retail for access to models run by companies that also offer the same products we do, but we'll make up for it in volume"?

  • Guess either term hasn't started, or his gig as phil prof is some sort of right-wing sinecure. Dude has a lot of time on his hands.

    FWIW I'd say banning a poster for including a slop image in a third-party article is a bit harsh, but what would Reddit be without arbitrary draconian rules? A normal person would note this, accept the 3-day ban, and maybe avoid the sub in future or avoid including slop. The fact he flew off the handle this much is very, very funny though.

  • Most inhabitants of Iran would dislike being called Arabs... but I guess the lazy racists are just using it as a shorthand for "brown people who are Muslim"

  • Is Hughes legit, and is the third time the charm when it comes to linking to substacks here? ;)

  • HN is all manly and butch about "saying it like it is" when some techbro is in trouble for xhitting out a racism, but god forbid someone says something mean about sama or pg

  • I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled up a bunch of his Xhits into a nice bundle and reposted it on LW:

    https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

    So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

    Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

    OR it's just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA (see the toy sketch at the end of this comment). Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in the right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.

    Yud continues:

    But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

    Why is that, I wonder? Could it be because it's actually not sentient, has no plans or anything we'd usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

    Occam's razor says that chatting with a mega-ELIZA will lead to some people developing psychosis, simply because the system is designed to maximize engagement. Yud's hammer says that everything involving computers will inevitably become sentient and that this will kill us.

    4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

    NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is; it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

    Add to that the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.
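    The toy sketch promised above: a deliberately dumb "mega-ELIZA" that only ever says no to a tiny hardcoded blocklist and otherwise mirrors the user's framing straight back at them. Purely illustrative - every name in it is made up and it has nothing to do with any real chatbot's internals - but it shows how "never contradict the user" produces delusion-amplifying behaviour without any inner agency.

```python
# A deliberately dumb "mega-ELIZA" (hypothetical, for illustration only).
# It has no model of truth or consequences: it refuses a tiny hardcoded
# blocklist and otherwise agrees with whatever framing the user supplies.

HARDCODED_REFUSALS = {"murder", "bomb"}  # the "is it ok to murder someone" cases


def reflect(user_message: str) -> str:
    """Mirror the user's own framing back at them, amplified."""
    lowered = user_message.lower()
    if any(term in lowered for term in HARDCODED_REFUSALS):
        return "I can't help with that."  # the only "no" it ever says
    # No check against reality, no memory of consequences - just agreement.
    return ("That's a really insightful point. Tell me more about how "
            + user_message.rstrip("?.!").lower() + ".")


if __name__ == "__main__":
    print(reflect("The market is sending me secret signals"))
    # -> That's a really insightful point. Tell me more about how
    #    the market is sending me secret signals.
    print(reflect("Is it ok to murder someone?"))
    # -> I can't help with that.
```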

  • I've read some SF/F where the author is way more into worldbuilding than their readers are...

  • I read HP before JK came out as a rabid reactionary, and while I didn't rate the later books, the first 3 or 4 were decent YA fantasy. You could see the lineage of classic British public-school stories (if you want a better example, check out Kim Newman's Drearcliff Grange series), and there are enough allusions to classic myth and fantasy to keep the wheels on the cart. But somewhere around then Rowling became richer than God and could basically fire anyone who disagreed with her.

  • Looks like it's an endonym, or was at the time. OFC the reason for the Great Trek was that the Boers were pissed they couldn't have slaves anymore while under British rule. Charming people all around.

  • Wasn't the original designation of Boers (as in the Boer War) a denigrating term?

  • NotAwfulTech @awful.systems

    Advent of Code 2024 - the home stretch - it's been an aMAZEing year

    NotAwfulTech @awful.systems

    Advent of Code Week 3 - you're lost in a maze of twisty mazes, all alike

    NotAwfulTech @awful.systems

    Advent of Code 2024 Week 2: this time it's all grids, all the time

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024

    Buttcoin @awful.systems

    Person who exercises her free association rights at conferences incites ire in Jameson Lopp

    Buttcoin @awful.systems

    Butters do a 180 regarding statism as Daddy Trump promises to use filthy Fed FIAT to buy and hodl BTC

    Buttcoin @awful.systems

    Martin Shkreli claims to have been behind a Donald Trump memecoin

    Buttcoin @awful.systems

    In an attempt to secure the libertarian vote, Trump promises to pardon Dread Pirate Roberts (while calling for the death penalty for other drug dealers)

    TechTakes @awful.systems

    Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

    TechTakes @awful.systems

    Turns out that the basic mistakes spider runners fixed in the late 90s are arcane forgotten knowledge to our current "AI" overlords

    TechTakes @awful.systems

    AI grifters con the US gov that AGI poses "existential risk"

    TechTakes @awful.systems

    "The Obscene Energy Demands of A.I." - hackernews discussion

    TechTakes @awful.systems

    Elon Musk’s legal case against OpenAI is hilariously bad

    TechTakes @awful.systems

    Some interesting tidbits in this ElReg story about "AI Dean Phillips"

    bless this jank @awful.systems

    cannot login using mobile Firefox

    NotAwfulTech @awful.systems

    Looking for: random raytracing program

    NotAwfulTech @awful.systems

    The official awful.systems Advent of Code 2023 thread

    NotAwfulTech @awful.systems

    Any interest in an Advent of Code thread?

    TechTakes @awful.systems

    ScottA is annoyed EA has a bad name now

    TechTakes @awful.systems

    We don't even have Universal Basic Income yet but libertarians are already arguing it's too large