  • Moldbug has a sad
  • sarcophagi would be the opposite of vegetarians

    Unrelated, slightly amusing fact: sarcophagos is still the word for carnivorous in Greek. The amusing part is that the word for vegetarian is chortophagos, which is weirdly close to being a slur, since it literally means grass eater.

    I am easily amused.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 13th April 2025
  • Mesa-optimization

    Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask Google Translate to give you the clunkiest, most pedestrian, and also wrong-part-of-speech Greek term to use in place of 'in' instead?

    Also, natural selection is totally like gradient descent, brah, even though evolutionary algorithms, which actually are modeled after natural selection, used to be their own subcategory of AI before the term just came to mean lying chatbot.
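
    For the record, here's roughly what that separate subcategory looks like; a toy sketch (objective and parameters are my own, purely illustrative), with not a gradient in sight:

    ```python
    import random

    # Minimal evolutionary algorithm: no gradients anywhere, just mutation
    # and selection acting on a population. Toy objective: minimize x^2.

    def fitness(x):
        return -(x ** 2)  # higher is better

    population = [random.uniform(-10, 10) for _ in range(20)]

    for generation in range(100):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Mutation: offspring are noisy copies of the survivors.
        offspring = [x + random.gauss(0, 0.5) for x in survivors]
        population = survivors + offspring

    print(max(population, key=fitness))  # ends up near 0
    ```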

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 13th April 2025
  • The kokotajlo/scoot thing apparently made it to the new york times.

    So this is what that was about:

    stubsack post from two months ago

    In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication, so he can pitch an op-ed by a notable ex-OpenAI researcher, to be ghost-written by him (meaning siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump’s term. Be on the lookout for that when it materializes, I guess.

    edit: also @gerikson is apparently a superforecaster

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 13th April 2025
    Reminds me of an SMBC comic that had a setup along the same lines: if male birth order correlates with homosexuality, then, family size trends being what they are, the past must have been considerably gayer on average.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 6th April 2025
  • No idea where they would land on what to mock and what to take seriously from this whole mess.

    Don't know what they're up to these days, but last time I checked I had them pegged as enlightened centrists whose style of satire is more 'having strong beliefs about stuff is cringe' than ever saying anything of even accidental substance about said things.

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • The first prompt programming libraries start to develop, along with the first bureaucracies.

    I went three layers deep in his references and his references' references to find out what the hell prompt programming is supposed to be, ended up in a gwern footnote:

    It's the ideologized version of *You're Prompting It Wrong*, which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable, unless you luck into very particular ways of asking for very specific things, is a sign that they're doing well?

    gwern wrote:

    I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
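
    Taken at face value, the "prompts are programs" framing cashes out to something like the sketch below; minimal and hypothetical (the `complete` callable and every name in it are stand-ins of my own, not any real API):

    ```python
    # Prompt treated like code: templated, version-controlled, regression-tested.

    PROMPT_VERSION = "1.2.0"  # bumped on every prompt change, like any other code

    SUMMARIZE_PROMPT = (
        "Summarize the following text in one sentence.\n\n"
        "Text: {text}\n"
        "Summary:"
    )

    def summarize(text, complete):
        # `complete` stands in for whatever LLM call you'd actually make.
        return complete(SUMMARIZE_PROMPT.format(text=text))

    def test_summarize_mentions_subject():
        # A prompt tweak that breaks this "build" fails loudly instead of silently.
        fake_complete = lambda prompt: "The dog chased the ball."
        assert "dog" in summarize("The dog chased the ball all day.", fake_complete)

    test_summarize_mentions_subject()
    ```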

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • They look like the evil twins of the Penny Arcade writers.

  • Scoots hot new AGI goss just dropped, Trump loses 3rd election to Grok in stunning upset
  • It is with great regret that I must inform you that all this comes with a three-hour podcast featuring Scoot in the flesh: 2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

  • "The Phony Comforts of AI Optimism": Ed Zitron on lazy hypemongers and CoreWeave's brick-wall-headed trajectory
  • That was a good one. Also, was he the first to break the coreweave situation? Not a bad journalistic get if that's the case.

  • How to explain our very good friends to normal humans?
  • Imagine insecure smart people yes-anding each other into believing siskind and yud are profound thinkers.

  • How to explain our very good friends to normal humans?
    Wish I'd found a non-clunky way to work "cult incubator" into that.

  • How to explain our very good friends to normal humans?
  • It's pick-me objectivism, only more overtly culty the closer you are to it irl. Imagine scientology if it was organized around AI doomerism and naive utilitarianism while posing as a get-smart-quick scheme.

    Its main function (besides getting the early adopters laid) is to provide court philosophers for the technofeudalist billionaire class, while grooming talented young techies into a wide variety of extremist thought, both old and new, mostly by fostering contempt for established epistemological authority in the same way QAnon types insist people do their own research, i.e. as a euphemism for only paying attention to ingroup-approved influencers.

    It seems to have both a sexual harassment and a suicide problem, with a lot of irresponsible scientific racism and drug abuse in the mix.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 6th April 2025
    Intelligence2 didn't seem half bad when Robert Anton Wilson was the one talking about it way back when; in retrospect, all the libertarianism was a real time bomb.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 6th April 2025
  • SMBC using the ratsphere as comics fodder, part the manyeth:

    transcription

    Retrofuturistic Looking Ghost: SCROOOOOGE! I am the ghost of christmas extreme future! Why! Why did you not find a way to indicate to humans 400 generations from now where toxic waste was storrrrrrrred! Look how Tiny Tim's cyborg descendant has to make costly RNA repaaaaaaairs!

    Byline: The Longtermist version of A Christmas Carol is way better.

    bonus

    transcription

    Scrooge: I tried, but no, no, I just don't give a shit.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 30th March 2025
  • hand-writing your original manuscript

    The revenge of That One Teacher who always rode you for having terrible handwriting.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 30th March 2025
    Why make a true crime movie when you can do a heavily editorialized 'documentary' for a fraction of the price?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 30th March 2025
  • who is this guy anyway, is he in openai/similar inner circle or is that just some random rationalist fanboy?

    His grounds for notability are that he's a dev who, back in the day, made a useful thing that went on to become incredibly widely used. Like if he'd named redis salvatoredis instead, he might have been a household name among swengs.

    Also burning only a billion more would be a steal given some of the numbers thrown around.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd March 2025
  • Not exactly, he thinks that the watermark is part of the copyrighted image and that removing it is such a transformative intervention that the result should be considered a new, non-copyrighted image.

    It takes some extra IQ to act this dumb.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd March 2025
    Windsurf is just the product name (some LLM-powered code editor), and a moat, in this context, is what you have over your competitors so they can't simply copy your business model.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd March 2025
  • https://xcancel.com/aadillpickle/status/1900013237032411316

    transcription

    tweet text:

    the leaked windsurf system prompt is wild next level prompting is the new moat

    windsurf prompt text:

    You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

  • Advent of Code 2024 - Historian goes looking for history in all the wrong places

    copy-pasting the rules from last year's thread:

    >Rules: no spoilers.

    >The other rules are made up as we go along.

    >Share code by link to a forge, home page, pastebin (Eric Wastl has one here) or code section in a comment.

    New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade
    reflectivealtruism.com Human biodiversity (Part 4: Astral Codex Ten) - Reflective altruism

    This post discusses the influence of human biodiversity theory on Astral Codex Ten and other work by Scott Alexander.


    Would've been way better if the author didn't feel the need to occasionally hand it to siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.

    It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (WSJ)

    >AI Work Assistants Need a Lot of Handholding

    > Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

    aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

    >Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

    I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

    > He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

    Please stop feeding the thing too much information, you're making it confused.
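
    (Incidentally, the "golden record" step they're describing is plain old deduplication and conflict resolution. A minimal sketch of the idea, with data and column names entirely made up:)

    ```python
    import pandas as pd

    # Hypothetical incoming records with a contradiction to resolve.
    records = pd.DataFrame({
        "employee": ["A. Smith", "A. Smith", "B. Jones"],
        "role": ["CFO", "CEO", "CTO"],
        "updated": pd.to_datetime(["2023-06-01", "2024-02-01", "2024-02-01"]),
    })

    # "Golden record": one row per entity, conflicts resolved by recency,
    # so the tool never sees A. Smith as both CFO and CEO.
    golden = (records.sort_values("updated")
                     .drop_duplicates(subset="employee", keep="last"))
    print(golden)
    ```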

    > Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

    Yeah, exactly like all the tech demos showed -- wait a minute!

    > [Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

    Never mind that that's exactly how we've been marketing it.

    Oh well, I guess you'll just have to wait for chatgpt-6.66 that will surely fix everything, while voiced by charlize theron's non-union equivalent.

    Generating (often non-con) porn is the new crypto mining

    An AI company has been generating porn with gamers' idle GPU time in exchange for Fortnite skins and Roblox gift cards

    > "some workloads may generate images, text or video of a mature nature", and that any adult content generated is wiped from a users system as soon as the workload is completed.

    > However, one of Salad's clients is CivitAi, a platform for sharing AI generated images which has previously been investigated by 404 media. It found that the service hosts image generating AI models of specific people, whose image can then be combined with pornographic AI models to generate non-consensual sexual images.

    Investigation link: https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

    SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing
    www.citationneeded.news Sam Bankman-Fried wants only six years for his "victimless" crime

    Sam Bankman-Fried maintains that his crimes were victimless and resulted in zero losses, and therefore warrant only six years of imprisonment. Prosecutors argue that 40–50 years are justified.


    For Thursday's sentencing, the US government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

    > 4. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.
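
    (For anyone who wants the coin-flip logic spelled out, it's naive expected value: with V the current value of the world and an assumed payoff multiplier k > 2 on heads,)

    ```latex
    \mathbb{E}[\text{flip}] = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot kV = \tfrac{k}{2} V > V \quad \text{for } k > 2
    ```

    So the flip "wins" in expectation, 50% chance of annihilating everything notwithstanding.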

    Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.

    Bonus: SBF's lawyers' list of assertions in asking for a shorter sentence includes this hilarious bit of reasoning:

    > They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."

    Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next

    rootclaim appears to be yet another group of people who, having stumbled upon Bayes' rule as a good-enough alternative to critical thinking, decided to try their luck at becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

    This includes a Randi-esque challenge: they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like whether the 2020 election was stolen (91% nay) or whether covid was man-made and leaked from a lab (89% yea).

    Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

    Don't worry though, they have taken the results of the debate to heart, and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend $100K... Maybe once you've reached a conclusion using the Sacred Method, changing your mind becomes difficult.

    I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than the postmortem suggests, pointing at stuff like logical inconsistencies and the inclusion of data that on closer look appears basically irrelevant to the thing they are trying to model probabilities for.
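
    (The failure mode is visible right in the odds form of Bayes' rule they lean on: every likelihood ratio you multiply in moves the posterior, and the clean product form below additionally assumes the pieces of evidence are conditionally independent:)

    ```latex
    \frac{P(H \mid E_1,\dots,E_n)}{P(\neg H \mid E_1,\dots,E_n)}
      = \frac{P(H)}{P(\neg H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}
    ```

    Feed that enough mis-estimated ratios for basically irrelevant evidence, and the errors compound multiplicatively into a very confident nonsense number.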

    There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

    ssc reddit thread

    quantian's short writeup on the birdsite, will post screens in comments

    pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

    pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

    rootclaim's postmortem blogpost; includes more links to debate material and the judges' opinions.

    edit: added additional details to the pdf descriptions.

    Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.

    edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia increasingly point out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.
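
    (If I'm reading the argument right, it's the standard liability-threshold model: liability is a genetic plus an environmental component, and the disorder only appears past a threshold. Numbers below are purely illustrative:)

    ```latex
    L = G + E, \quad G \sim \mathcal{N}(0,\, h^2), \quad E \sim \mathcal{N}(0,\, 1 - h^2), \quad h^2 = 0.8
    % ~1% prevalence puts the threshold at T = \Phi^{-1}(0.99) \approx 2.33
    P(L > T \mid G = 1.34) = 1 - \Phi\!\left( \tfrac{2.33 - 1.34}{\sqrt{0.2}} \right) \approx 1.4\%
    ```

    G = 1.34 is roughly the top 7% of genetic liability, and the absolute risk still comes out tiny unless the environmental term piles on too, which is (as I read it) the point those papers are making.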

    Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

    > People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.

    Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction...

    ... while at the same time not really worth worrying about so we should be concentrating on unnamed alleged mid term risks.

    EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

    Andrew Ng wrote:

    > In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.
    >
    > Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.
    >
    > Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

    EY replied:

    > I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

    Turns out Altman is a lab-leak covid truther, calls virus 'synthetic' according to Spectator piece on AI risk.

    > Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

    Rationalist literary criticism by SBF, found on the birdsite

    original is here, but you aren't missing any context, that's the twit.

    > I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse that that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.
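
    (Spelled out, the "calculation" he's gesturing at is a base rate over literate people, using his own numbers of roughly 10^7 literate people then versus 10^9 now, and treating "greatest writer" as a uniform draw over them, which is of course the whole problem:)

    ```latex
    P(\text{greatest writer was alive in 1600}) \approx \frac{10^7}{10^9} = 1\%
    ```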

    edited to add: this seems to be an excerpt from the recently released fawning book that the Big Short/Moneyball guy wrote about him.

Architeuthis @awful.systems

    It's not always easy to distinguish between existentialism and a bad mood.
