Single Pilot Operations — let’s replace airline copilots with AI!
  • Maybe Elon can install Grok as the copilot of his private jets.

  • LLMs average <5% on 2025 Math Olympiad; award each other 20x points
  • "Thought process"

    "Intuitively"

    "Figured out"

    "Thought path"

    I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 6th April 2025
  • As a fellow Usenet junkie from way back, now I'm curious which newsgroups Yarvin hung out in.

  • "The Phony Comforts of AI Optimism": Ed Zitron on lazy hypemongers and CoreWeave's brick-wall-headed trajectory
  • I always tune into Casey Newton and Kevin Roose's podcast to get my latest fix of AI hype, now that they've moved on from crypto hype and multiverse hype. Can't wait to see what the next hype cycle will bring!

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • Scott talks a bit about it in the video, but he was recently in the news as the guy who refused to sign a non-disparagement agreement when he left OpenAI, which caused them to claw back his stock options.

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • I'm fascinated by the way they're hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP's twitter link. It's like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they're trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I've ever seen Scott on video, and he definitely gives off a weird vibe.)

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • After minutes of meticulous research and quantitative analysis, I've come up with my own predictions about the future of AI.

  • LessOnline is a festival celebrating truthseeking and blogging, the totally not race science is just a bonus
  • Of course they use shitty AI slop as the background for their web page.

    Like, what the hell is it even supposed to be? A mustachioed man writing in a journal in what appears to be a French village town square? Shadowy individuals chatting around an oddly incongruous fire pit? Guitar dude and listener sitting on invisible benches? I get that AI produces this kind of garbage all the time, but did the lesswrongers even bother to evaluate it for appropriateness?

  • EAs sad that their previous rich grifters are trying to distance themselves from the movement
  • This commenter may be saying something we already knew, but it's nice to have the confirmation that Anthropic is chock full of EAs:

    (I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)

  • Renowned Tumblr folklore expert Strange Æons covers Yud's Potter Fanfic
  • Sorry, when she started taking Yud's claims to be a "renowned AI researcher" at face value, I noped out.

  • AI sales startup 11x claims customers it doesn’t have for software that doesn’t work
  • Hilarious. How much do you want to bet they vibe-coded the whole app?

  • Orange site censoring posts left and right as US descends further into fascism

    The tech bro hive mind on HN is furiously flagging (i.e., voting into invisibility) any submissions dealing with Tesla, Elon Musk or the Kafkaesque US immigration detention situation. Add "/active" to the URL to see.

    The site's moderator says it's fine because users are "tired of the repetition". Repetition of what exactly? Attempts to get through the censorship wall?

  • ‘guys, i’m under attack’ — ‘vibe coding’ in the wild
  • I'm fine with the name. It's a good signifier that shit code has been written.

  • Yud follows up Sammy Boy's AI-Generated "Metafiction"
  • LLMs producing garbage fiction? Oh Yud, he's getting close...

  • Yudkowsky: eugenics is now "the third most important project in the world." After AI doom and anime, presumably.
  • One of the most important projects in the world. Somebody should fund it.

    The Pioneer Fund (now the Human Diversity Foundation) has been funding this bullshit for years, Yud.

  • "Tracing Woodgrains" starts a eugenics-oriented education policy "think-tank"
    www.educationprogress.org Center for Educational Progress | CEP | Substack

    A think tank centered on orienting education towards a culture of excellence.

    Sneerclubbers may recall a recent encounter with "Tracing Woodgrains", né Jack Despain Zhou, the rationalist-infatuated former producer and researcher for "Blocked and Reported", a podcast featuring prominent transphobes Jesse Singal and Katie Herzog.

    It turns out he's started a new venture: a "think-tank" called the "Center for Educational Progress." What's this think-tank's focus? Introducing eugenics into educational policy. Of course they don't put it in those exact words, but that's the goal. The co-founder of the venture is Lillian Tara, former executive director of Pronatalist.org, the outfit run by creepy Harry Potter look-alikes (and a moderately frequent topic in this forum) Simone and Malcolm Collins. According to the anti-racist activist group Hope Not Hate:

    >The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with our undercover reporter, Tara referred three times to her work with the Collinses as eugenics. “I don’t care if you call me a eugenicist,” she said.

    Naturally, the CEP is concerned about IQ and wants to ensure that mentally superior (read: white) individuals don't have their hereditarily-deserved resources unfairly allocated to the poors and the stupids. They have a reading list on the Substack, which includes people like Arthur Jensen and LessWrong IQ-fetishist Gwern.

    So why are Trace and Lillian doing this now? I suppose they're striking while the iron is hot, probably hoping to get some sweet sweet Thiel-bucks as Elon and his goon-squad do their very best to gut public education.

    And more proof for the aphorism: "Scratch a rationalist, find a racist".

    Casey Newton drinks the kool-aid

    In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending "The Curve", a recent conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended at this conference, Casey said:

    >That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of super intelligent AI.

    >His view is that there is almost no scenario in which we could build a super intelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

    >People fired a bunch of questions at him. And we should say, he's a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

    >And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

    [...]

    >Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.

    >And all that stuff just turned out to be true. So, that's why they have credibility with me, right? Everything they believe, you know, we could hit some sort of limit that they didn't see coming.

    >Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

    Adderall in Higher Doses May Raise Psychosis Risk

    Excerpt:

    >A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

    Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

    OK doomer
    www.newyorker.com Among the A.I. Doomsayers

    Some people think machine intelligence will transform humanity for the better. Others fear it may destroy us. Who will decide our fate?


    The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.

    Excerpts:

    >[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.

    [...]

    >A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

    >Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

    [...]

    >“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

    Since age 12, SBF was a dedicated utilitarian, mommy says. It's not fair to imprison him for life.

    In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

    Excerpt:

    >One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit’s article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. “What on earth are you reading?” I asked. The answer, it turned out, was he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

    Yeah, that "online community" we all know and love.

    Let rationalists put GMO bacteria in your mouth

    They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

    Effective Obfuscation
    newsletter.mollywhite.net Effective obfuscation

    Silicon Valley's "effective altruism" and "effective accelerationism" only give a thin philosophical veneer to the industry's same old impulses.


    Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

    Let them Fight: EAs and neoreactionaries go at it

    Non-paywalled link: https://archive.ph/9Hihf

    In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

    But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox, home of the EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

    So have at it, Marc and Ezra. Fight. And maybe take each other out.

    Let's walk through the uncanny valley with SBF so we can collapse some wave functions together
    hachyderm.io Molly White (@molly0xfff@hachyderm.io)

    I finally got hold of the government exhibit that SBF's lawyers worried prosecutors were using just to "show that he’s some sort of crazy person" https://mollywhite.net/storage/sbf-trial/GX-39A.pdf #FTX #SBF #crypto #cryptocurrency


    Rationalist check-list:

    1. Incorrect use of analogy? Check.
    2. Pseudoscientific nonsense used to make your point seem more profound? Check.
    3. Tortured use of probability estimates? Check.
    4. Overlong description of a point that could just as easily have been made in one sentence? Check.

    This email by SBF is basically one big malapropism.

    Generative AI producing racially biased results? Stop worrying about it, say orange site tech bros. It's just reflecting reality.

    Representative take:

    >If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

    "Yudkowsky is a genius and one of the best people in history."

    [All non-sneerclub links below are archive.today links]

    Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

    For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

    >Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

    As you can see, he's really into superlatives. And Jordan Peterson:

    >Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

    At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

    >Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

    As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

    >Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

    And:

    >That leaves us with, you guessed, a metric ton of men who are no longer in families.

    Yep, I guessed about 12 men.

    "Tech Right" scribe Richard Hanania promoted white supremacy for years under a pen name
    www.huffpost.com This Man Has The Ear Of Billionaires — And A White Supremacist Past He Kept A Secret

    Hanania is championed by tech moguls and a U.S. senator, but HuffPost found he used a pen name to become an important figure in the “alt-right.”


    Excerpt:

    >Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

    He's also a big eugenics supporter:

    >“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight .com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

    (Reminds me a lot of the things Scott Siskind has written in the past.)

    Some people who have been friendly with Hanania:

    • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
    • Hamish McKenzie, CEO of Substack
    • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
    • Tyler Cowen, libertarian econ blogger and George Mason University prof
    • J.D. Vance, US Senator from Ohio
    • Steve Sailer, race (pseudo)science promoter and all-around bigot
    • Amy Wax, racist law professor at UPenn
    • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
    Grimes pens new TREACLES anthem: "I wanna be software, upload my mind"
    genius.com Grimes & Illangelo – I Wanna Be Software

    [Verse 1] / I wanna be software / Upload my mind / Take all my data / What will you find? / I wanna be software / Battery heart / Infinite options / State of the art / I wanna be

    TinyTimmyTokyo @awful.systems