Stubsack: weekly thread for sneers not worth an entire post, week ending 31st August 2025 - awful.systems
scruiser @ scruiser @awful.systems Posts 5Comments 271Joined 2 yr. ago
Sneerquence classics: Eliezer on GOFAI (half serious half sneering effort post)
Is Scott and others like him at fault for Trump... no it's the "elitist's" fault!
It's a good post. A few minor quibbles:
I think at least some of the people at launch were true believers, but strong financial incentives, plus some cynics present from the start, meant the true believers never really had a chance. It culminated in the board trying and failing to fire Sam Altman, with him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that in one of the rare cases where rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?
I wish people didn't feel the need to add all these disclaimers, or would at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web search (websites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.
One of the things I liked and didn't know about before:
That is hilarious! Kind of overkill, to be honest; I think they've really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.
One of the comments gets it:
LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with either over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems in one model).