There's an actual explanation in the original article about some of the wardrobe choices. It's even dumber, and it involves effective altruism.
It is a very cold home. It’s early March, and within 20 minutes of being here the tips of some of my fingers have turned white. This, they explain, is part of living their values: as effective altruists, they give everything they can spare to charity (their charities). “Any pointless indulgence, like heating the house in the winter, we try to avoid if we can find other solutions,” says Malcolm. This explains Simone’s clothing: her normal winterwear is cheap, high-quality snowsuits she buys online from Russia, but she can’t fit into them now, so she’s currently dressing in the clothes pregnant women wore in a time before central heating: a drawstring-necked chemise on top of warm underlayers, a thick black apron, and a modified corset she found on Etsy. She assures me she is not a tradwife. “I’m not dressing trad now because we’re into trad, because before I was dressing like a Russian Bond villain. We do what’s practical.”
That's the trouble with talking about thoroughly disingenuous people: you get bogged down in deciding whether they actually meant what they wrote. It's all optics.
Either that, or he let his performative contrarianism get out of hand; he did delete the post, after all.
Still, it's just like an HBD enthusiast heavy into eugenic optimisation to think that there might be something to measuring skulls. Even if it didn't pan out the first time, maybe if they'd known about IQ it would have been different; it's a shame the woke mob has made using calipers on schoolchildren a crime, etc.
To be really precise, it was about measuring the size and distribution of all sorts of skull irregularities (the proverbial 'bumps') and mapping them to various traits; it's basically palm reading for the head.
Siskind is just being his usual disingenuous self: 'everyone always uses skull shape' (to indicate that my intellectual precursors were clowns) is obviously referencing phrenology, which he then immediately motte-and-baileys into a claim about the correlation of cranial capacity and IQ.
Except for the M&B sleight of hand to work, the claim shift shouldn't happen in the same sentence; otherwise it's extremely obvious that you are claiming one thing while carrying water for the other (phrenology), which is probably why he ended up deleting the post.
Alexandros Marinos, whom I read as engaged-with-but-skeptical-of the “Rationalist” community, says:
Seeing as Marinos' whole beef with Siskind was about the latter's dismissal of ivermectin as a potent anti-covid concoction, I would hesitate to cite him as an authority on research standards.
Did the Aella moratorium from r/sneerclub carry over here?
Because if not
for the record, im currently at ~70% that we're all dead in 10-15 years from AI. i've stopped saving for retirement, and have increased my spending and the amount of long-term health risks im taking
You can like a thinker without endorsing all of their beliefs, even if their beliefs are evil. Why do people like Schmitt and Heidegger even though they were fascists? Or Foucault given his views on the age of consent? I agree that Hanania's views are relevant context, but I think it's fine to write a book review that doesn't try to analyse the author's motivations or the book's place in a wider political context.
Hanania is clearly analogous to Foucault and Heidegger, and also, is it even wrong to completely divorce a work from all context?
I think Scott was simply more interested in writing an article on arguments against civil rights law than an article on whether Hanania is engaged in an insidious project to smuggle racist ideas into the mainstream via his legal arguments, and frankly I find that kind of review more interesting too. Perhaps this is irresponsible, but at the end of the day Scott is a modestly influential blogger who just likes to write about things he finds interesting.
uwu smolbean blogger with absolutely no agenda besides the pursuit of truth and civility strikes again.
HBD is a legit line of scientific inquiry you guys, it's not just eugenics-obsessed weirdos and fascists trying to bring back birthright as the primary path to privilege.
They say at one point that, by being flaky, aloof and indifferent while rich, SBF may have accidentally discovered the rules of pickup artistry for VCs, which is not a bad take.
Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.
Sounds like Oxford increasingly did not want anything to do with them.
edit: Here's a 94 page "final report" that seems more geared towards a rationalist audience.
Wonder what this was about:
Why we failed
[...]
There also needs to be an understanding of how to communicate across organizational communities.
When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.
If I had a 1980s sitcom mom sitting next to me here, she might ask “If Scott Alexander told you to jump off a bridge, would you do that too?” To which I’d respond probably not, but I would spend some time considering the possibility that I had a fundamentally flawed understanding of the laws of gravity.
HPMOR is so obviously and unequivocally terrible that I can't help thinking I must be missing something significant about it, like how it could be scratching a very specific itch in young people on the spectrum.
As always, all bets are off if it happens to be the first long form literature someone read.
Yet AI researcher Pablo Villalobos told the Journal that he believes that GPT-5 (OpenAI's next model) will require at least five times the training data of GPT-4.
I tried finding the non-layman's version of the reasoning behind this assertion, and it appears to be a very black-box assessment, based on historical trends and some other similarly abstracted attempts at modelling dataset size vs model size.
This is EpochAI's whole thing apparently, not that there's necessarily anything wrong with that. I was just hoping for some insight into dataset size vs architecture, and maybe the gossip on what's going on with the next batch of LLMs, like how it eventually came out that gpt4.x is mostly several gpt3.xs in a trench coat.
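For what it's worth, the flavour of trend extrapolation being described fits in a few lines. A minimal sketch, with everything assumed: the 20-tokens-per-parameter rule of thumb is the Chinchilla heuristic, and the parameter counts below are made up for illustration, not actual OpenAI figures.

```python
# Sketch of a Chinchilla-style "compute-optimal" data estimate.
# All numbers are hypothetical; this just shows the shape of the
# black-box trend reasoning, not anyone's real methodology.

TOKENS_PER_PARAM = 20  # rough Chinchilla rule of thumb

def optimal_tokens(n_params: float) -> float:
    """Compute-optimal training tokens for a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params

# Made-up parameter counts, purely for illustration:
gpt4_params = 1.8e12  # rumored, unconfirmed
gpt5_params = 9.0e12  # hypothetical "5x bigger" assumption

ratio = optimal_tokens(gpt5_params) / optimal_tokens(gpt4_params)
print(f"Implied training-data multiplier: {ratio:.1f}x")
```

Under a linear tokens-per-parameter rule the data multiplier just equals the parameter multiplier, which is why these estimates feel so abstracted: the interesting part is the exponent you assume, not the arithmetic.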