You could argue that another moral of Parfit's hitchhiker is that being a purely selfish agent is bad, and that since humans aren't purely selfish it isn't applicable to the real world anyway. But in Yudkowsky's philosophy (and in academic decision theory), the goal is a general solution to the problem of rational choice: one that lets you take any utility function and win by its lights, regardless of which convoluted setup philosophers drop you into.
I'm impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process so coherently. This is a whole model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse, a manner of thinking that couldn't have been better designed to hide the assumptions underlying repugnant conclusions if it had been specifically designed for that purpose.
Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, DeepMind, xAI), at least four of the five have clearly been heavily influenced by ideas from LessWrong.
I'm trying, but I can't not donate any harder!
The most popular LessWrong posts, SSC posts, or books like HPMoR are usually people's first exposure to core rationality ideas and to concerns about AI existential risk.
Yes, but if I donate to Lightcone I can get a T-shirt for $1,000! A special edition T-shirt! Whereas if I donated $1,000 to Archive Of Our Own, all I'd get is... a full-sized cotton blanket, a mug, a tote bag, and a mystery gift.
Holy smokes, that's a lot of words. From their own post it sounds like they massively over-leveraged and have run out of sugar daddies, so now their convention center is doomed ($1 million a year in interest payments!); but they can't admit that, so they're desperately trying to delay the inevitable.
Also don't miss this promise from the middle:
Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. [...] Building an LLM-based editor. [...] AI prompts and tutors as a content type on LW
It's like an anti-donation message. "Hey if you donate to me I'll fill your forum with digital noise!"
Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz
"reputational cost" eh? Let's see Mr. Moskovitz's reasoning in his own words:
Spoiler: it's not just about PR risk
But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don't want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.
I've long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I'm only trying to get back into integrity with my own.
If you look at my comments here and in my post, I've elaborated on other issues quite a few times and people keep ignoring those comments and projecting "PR risk" on to everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I'm going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here] I would appreciate if people did not paraphrase me from these comments and instead used actual quotes.
again, beyond "reputational risks", which narrows the mind too much on what is going on here
“PR risk” is an unnecessarily narrow mental frame for why we’re focusing.
I guess "we're too racist and weird for even a Facebook exec" doesn't have quite the same ring to it though.
This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).
"The reason for optimism is that we can cozy up to fascists!"