Posts: 6 · Comments: 326 · Joined: 2 yr. ago

  • He had me in the first half; I thought he was calling out the rationalists' problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets, a concept the rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money (a sketch of how such a toy market works follows below).
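
    A minimal sketch of such a play-money market, assuming Hanson's logarithmic market scoring rule (LMSR), which is the standard mechanism behind toy markets like these. The class and variable names are illustrative, not from any real market's codebase:

        import math

        class ToyMarket:
            """Play-money prediction market using the LMSR cost function."""

            def __init__(self, outcomes, b=100.0):
                self.b = b  # liquidity parameter: higher b = prices move more slowly
                self.q = {o: 0.0 for o in outcomes}  # outstanding shares per outcome

            def _cost(self, q):
                # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
                return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

            def price(self, outcome):
                # The instantaneous price doubles as the market's implied probability.
                denom = sum(math.exp(x / self.b) for x in self.q.values())
                return math.exp(self.q[outcome] / self.b) / denom

            def buy(self, outcome, shares):
                # A trade costs the change in the cost function.
                before = self._cost(self.q)
                self.q[outcome] += shares
                return self._cost(self.q) - before

        market = ToyMarket(["yes", "no"])
        print(market.price("yes"))                              # 0.5 before any trades
        spent = market.buy("yes", 50)
        print(round(spent, 2), round(market.price("yes"), 3))   # ~28.13 fake dollars, price ~0.622

    The cost-function trick means the market maker can always quote a price, which is why a handful of forum users with fake money is enough to get one of these running.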

  • The author occasionally posts to slatestarcodex; we kind of tried to explain what was wrong with Scott Alexander, and I think she halfway got it... I also see her around the comments in sneerclub occasionally, so at least she is staying aware of things...

  • Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goals of the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!

  • Within the world-building of the story, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott's narration and framing are way too sympathetic to the murderously autistic angel that did it), but taken in the context, outside the story, of the sort of racism Scott likes to promote, yeah, it is really bad.

    We had a previous discussion of Unsong on the old site. (Kind of cringing about the fact that I liked the story at one point and only gradually noticed all the problematic content and the generally poor writing.)

  • I've seen this concept mixed with the simulation "hypothesis". The logic goes that if future simulators running a "rescue simulation" only cared (or at least cared more) about the interesting or more agentic people (e.g. rich/white/Westerner/lesswronger), they might only fully simulate those people and leave simpler nonsapient scripts/algorithms piloting the other people (e.g. poor/irrational/foreign people).

    So basically literally positing a mechanism by which they are the only real people and other people are literally NPCs.

  • Chiming in to agree that your prediction write-ups aren't particularly good. Sure, they spark discussion, but the whole forecasting/prediction game is one we've seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes.

    In general... I think your predictions are too specific and too optimistic...

  • Depends what you mean by "steelman". If you take their definition at its word, then they fail at it all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, it often actually means "entirely rebuild the opposing argument into something different" (because they don't have a basic humanities education or don't want to actually properly read leftist thought), and they can't resist doing that!

  • Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, multiple decades after Drexler's nanotech was thoroughly debunked (even as it slightly contributed to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

  • I brought this up right when it came out: https://awful.systems/post/5244605/8335074

    (Not demanding credit for keeping better up to date on hate-reading the EA forums, just sharing the previous discussion.)

    Highlights from the previous discussion... I had thought Thiel was entirely making up his own wacky theology (because it was a distinctly different flavor of insanity from the typical right-wing Fundamentalist/Evangelical), but actually there is a "theologian" (I use that term loosely), René Girard, who developed the theology he is describing.

  • I keep seeing this sort of thinking on /r/singularity: people who are sure LLMs will be great once they have memory/ground-truth factual knowledge/some other feature that the promptfarmers have in fact already tried (and failed) to add via fancier prompting (i.e. RAG; see the sketch below) or fine-tuning, and that would require a massive reinvention of the entire paradigm to actually fix. That, or they describe what basically amounts to a reinvention of the concept of expert systems like Cyc.
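
    For anyone unfamiliar, here is a minimal sketch of what RAG (retrieval-augmented generation) amounts to, using a toy bag-of-words retriever. The embed/retrieve helpers and the corpus are hypothetical stand-ins, not any vendor's API:

        import math

        def embed(text):
            # Stand-in for a real embedding model: a crude bag-of-words vector.
            vec = {}
            for word in text.lower().split():
                vec[word] = vec.get(word, 0) + 1
            return vec

        def cosine(a, b):
            dot = sum(v * b.get(k, 0) for k, v in a.items())
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def retrieve(query, corpus, k=2):
            # Rank stored documents by similarity to the query; keep the top k.
            qv = embed(query)
            return sorted(corpus, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

        def rag_prompt(query, corpus):
            # The whole trick: prepend retrieved text to the question. The model
            # itself is unchanged, which is why RAG can't fix deeper failures of
            # memory or factual grounding.
            context = "\n".join(retrieve(query, corpus))
            return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

        corpus = [
            "Cyc is a decades-long symbolic AI project encoding common-sense rules.",
            "Expert systems encode domain knowledge as hand-written if-then rules.",
            "LLMs generate text by predicting the next token.",
        ]
        print(rag_prompt("What are expert systems?", corpus))

    Note that nothing here touches the model's weights; it is fancier prompting, full stop.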

  • And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

    What they actually mean is that they don't want them to be solved in favor of the dgerard type of people... like (reviewing the exposé on lesswrong)... demanding quality sources that aren't HBD pseudoscience journals or right-wing rags.

  • Given that the USA has refused more comprehensive gun laws or better funding of public mental health services even after many, many school shootings, I think you are far too optimistic about the LLM-induced mental health crisis actually leading to a ban or even just tighter liability on LLMs. My expectation is age verification plus giant disclaimers, and the crisis continuing. The inference cost will force the LLMs to be more obviously dumb and unable to keep track of context, and the lack of a technological moat will lead to LLM chatbots becoming commoditized, but I'm overall not optimistic.

    The LLM-induced skill gap will be a thing, yes... I predict companies trying to address it in the most hamfisted and belittling way possible. Like, they keep using code interviews (which are close to useless at evaluating the actual skills the employee needs), but now they want you to do the code interview with spyware installed to make sure you aren't using an LLM to help you.

  • It's a good post. A few minor quibbles:

    The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

    I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers never really had a chance, culminating in the board trying but failing to fire Sam Altman and him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company is a decisive moment in popping the GenAI bubble, then at least it was good for something?

    These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

    I wish people didn't feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people entirely rely on it. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them) which could have pretty far reaching impacts.

    One of the things I liked and didn't know about before:

    Ask Claude any basic question about biology and it will abort.

    That is hilarious! Kind of overkill, to be honest; I think they've really overrated how much it could help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.

    One of the comments gets it:

    Safety team/product team have conflicting goals

    LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems in one model).

  • Had me in the first few paragraphs…not gonna lie.

    Yeah, the first few paragraphs actually felt like they would serve as a defense of Hamas: Israel engineered a situation where any form of resistance against them would need to be violent and brutal, so Hamas is justified even if it killed 5 people to save 1.

    The more I think about his metaphor, the more frustrated I get. Israel holds disproportionate power in this entire situation; if anyone is contriving no-win situations to win temporary PR victories, it is Israel (Netanyahu's trial is literally getting stalled out by the conflict).

  • Lots of woo and mysticism already has a veneer of stolen Quantum terminology. It's too far from respectable to get the quasi-expert endorsement or easy VC money that LLM hype has gotten, but quantum hucksters fusing quantum computing nonsense with quantum mysticism can probably still con lots of people out of their money.

  • I like how Zitron does a good job of distinguishing firm overall predictions from specific scenarios (his chaos bets) which are plausible but far from certain. AI 2027 specifically conflated and confused those things in a way that gave its proponents more rhetorical room to hide and dodge.