
Posts: 6 · Comments: 326 · Joined: 2 yr. ago

  • I have three more examples of sapient marine mammals!

    • whales warning the team of an impending solar flare in Stargate Atlantis via echolocation-induced hallucinations
    • the dolphins in The Hitchhiker's Guide to the Galaxy
    • whales showing up to help in one book of Animorphs while the kids are morphed into dolphins
  • I was thinking this also; it's like the perfect parody of several lesswrong and EA memes: overly concerned with animal suffering/sapience, overly concerned with IQ stats, openly admitting to no expertise or even relevant domain knowledge but driven to pontificate anyway, and inspired by existing science fiction... I think the last one explains it, and it isn't a parody. As cinnasverses points out, cetacean intelligence shows up occasionally in sci-fi. To add to the examples: sapient whales warning the team of an impending solar flare in Stargate Atlantis via echolocation-induced hallucinations, the dolphins in The Hitchhiker's Guide to the Galaxy, and the whales showing up to help in one book of Animorphs.

  • I was trying to figure out why he hadn't turned this into an opportunity to lecture (or write a mini-fanfic) about giving the AGI more attack surface to manipulate you... I was stumped until I saw your comment. I think that's it: expressing his childhood distrust of authority trumps lecturing us on the AI-God's manipulations.

  • I have context that makes this even more cringe! "Lawfulness concerns" refers to, like, Dungeons and Dragons lawfulness. Specifically the concept of lawfulness developed in the Pathfinder fanfiction we've previously discussed (the one with deliberately bad BDSM and eugenics). Like a proper Lawful Good Paladin of Iomedae wouldn't put you in a position where you had to trust they hadn't rigged the background prompt if you went to them for spiritual counseling. (Although a Lawful Evil cleric of Asmodeus totally would rig the prompt... Lawfulness as a measuring stick of ethics/morality is a terrible idea, even accepting the premise of using Pathfinder fanfic to develop your sense of ethics.)

  • Well, this explains how KP manages to claim Scott Alexander is center-left with a straight face: she has no clue about basic leftist thought or even what the fuck leftism is! Like another comment said, she has enough sense to know the right wing is full of shitheads, and so doesn't want to squarely acknowledge how aligned with them she is.

  • you can’t have an early version that you’ll lie about being a “big step towards General Quantum Computing” or whatever

    So you might think that... but I recall that some years ago an analog computer got labeled as doing quantum annealing or something like that... oh wait, found the Wikipedia articles: https://en.wikipedia.org/wiki/Quantum_annealing and https://en.wikipedia.org/wiki/D-Wave_Systems . So to a naive listener it sounds like the same sort of thing as the quantum computers that are supposed to break cryptography (and do even less plausible things), but actually it can only handle one very specific kind of problem (a rough sketch of that problem shape is below).

    I bet you could squeeze the "quantum" label onto a variety of analog computers well short of general quantum computing, have it technically not be fraud, and still fool lots of idiot VCs!
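    (A rough sketch, not any vendor's actual API: the one problem shape an annealer like D-Wave's targets is minimizing a quadratic cost over binary variables, a QUBO. Below it's just brute-forced classically, purely to show what that single problem class looks like; all the numbers are made up.)

    ```python
    # Hypothetical toy QUBO: minimize sum_i h[i]*x[i] + sum_{i<j} J[i,j]*x[i]*x[j]
    # over binary variables x[i] in {0, 1}. This is (roughly) the entire menu an
    # annealer offers; anything else has to be squeezed into this form first.
    from itertools import product

    h = {0: 1.0, 1: -2.0, 2: 0.5}                 # made-up linear terms
    J = {(0, 1): -1.0, (1, 2): 2.0, (0, 2): 0.5}  # made-up coupling terms

    def energy(x):
        """Cost of one assignment of the binary variables."""
        return sum(h[i] * x[i] for i in h) + sum(J[i, j] * x[i] * x[j] for (i, j) in J)

    # Classical brute force over all 2^3 assignments; the annealer's whole job is
    # (approximately) this same minimization, just on much bigger instances.
    best = min(product((0, 1), repeat=3), key=energy)
    print(best, energy(best))
    ```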

  • It's a nice master post that collects all his responses and many useful articles in one place. It's all familiar if you've kept up with techtakes, Zitron's other posts, and pivot-to-ai, but I found a few articles I had previously missed.

    Related to all the "ackshually"s AI boosters like to throw out: has everyone else noticed the trend where someone makes a claim about a rumor they heard of an LLM making a genuine discovery in some science, except it's always repeated second-hand so you can't really evaluate it, and in the rare cases they do have a link to the source, it's always much less impressive than they made it sound at first...

  • j/k he’s doubling down on being a dick.

    I had kind of gotten my hopes up from the comparisons of him to sneerclub that maybe he'd be funny or incisively cutting or something, but it looks mostly like typical lesswrong pedantry, just less awkwardly straining to be charitable (to the in-group).

  • Apparently Eliezer is actually against throwing around P(doom) numbers? https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom

    The objections to using P(doom) are relatively reasonable by lesswrong standards... but this is in fact once again all Eliezer's fault. He started a community centered around 1) putting overconfident probability "estimates" on subjective, uncertain things and 2) the need to make a friendly AI-God; he really shouldn't be surprised that people combine the two. Also, he has regularly expressed his certainty that we are all going to die to Skynet in terms of ridiculously overconfident probabilities, so he shouldn't be surprised that other people followed suit.

  • Lesswrong and SSC: capable of extreme steelmanning of... checks notes... occult mysticism (including divinatory magic), Zen Buddhism-based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind's extinction...

    Not capable of even a cursory glance at their statements, much less steelmanning: sneerclub, Occupy Wall Street

  • we cant do basic things

    That's giving them too much credit! They've generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They've served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They've served as an entry point to the alt-right pipeline!

    dath ilan?

    As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also, Said would have been sent to a reeducation camp (sorry, a "quiet city") and sterilized (sorry, denied UBI if he reproduces) for not conforming to dath ilan's norms much earlier.

  • That too.

    And judging by how all the elegant, charitably written blog posts on the EA forums did jack shit to stop the second Manifest conference from having even more racists, debate really doesn't help.

  • I'm feeling an effort sneer...

    For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.

    Every time I read about a case like this my conviction grows that sneerclub's vibe-based moderation is the far superior method!

    The key component of making good sneer club criticism is to never actually say out loud what your problem is.

    We've said it multiple times; it's just a long list that's inconvenient to repeat all at once. The major things that keep coming up: the cult shit (including the promise of infinite AGI-God heaven and infinite Roko's Basilisk hell, and the high-demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (which I could actually tolerate if it didn't come with the other parts); and lately serving as crit-hype marketing for really damaging technology!

    They don't need to develop protocols of communication that produce functional outcomes

    Ahem... you just admitted to taking a hundred hours to ban someone, whereas dgerard and co. kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.

    For LessWrong to become a place that can't do much but to tear things down.

    I've seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) get highly upvoted and positively commented on at lesswrong (GeneSmith's wild genetic engineering fantasies come to mind).

  • I missed that it’s also explicitly meant as rationalist esoterica.

    It turns in that direction about 20ish pages in... and then spends hundreds of pages on it, greatly inflating what could have been a much more readable length. It does get back to actual plot events after that.

  • I hadn't heard of MAPLE before; is it tied to lesswrong? From the focus on AI it's at least adjacent, so I'll add it to the list of cults lesswrong is responsible for. All in all, we've got the Zizians, Leverage Research, and now MAPLE for proper cults, and stuff like Dragon Army and Michael Vassar's groupies for "high demand" groups. It really is a cult incubator.

  • I actually think "Project Lawful" started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren't nearly as wordy... one of them actually almost kind of pokes fun at himself and lesswrong), and then, as it took off and the plot took the direction of "his author insert gives lectures to an audience of adoring slaves", he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn't bothered writing up in the past decade.^ And that's why his next attempt at an HPMOR-level masterpiece is an awkward-to-read RP featuring tons of adult content in a DnD spinoff, and not more fanfiction suitable for optimal reception by the masses.

    ^(I think Eliezer's writing output dropped a lot in the 2010s compared to when he was writing the Sequences, and the stuff he has written over the past decade is a lot worse. The Sequences are all in bite-size chunks, readable in order, often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. His recent writings, by contrast, are tiny hot takes on Twitter and long, winding rants on lesswrong about why we are all doomed.)

  • Yeah, even if computers predicting other computers didn't require overcoming the halting problem (and thus contradicting the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.
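    (For anyone who wants the textbook version of why "perfectly predict another program" can't be a thing: it's the classic diagonalization argument. A toy sketch below, with a deliberately fake halts() stub, just to show the shape of the contradiction.)

    ```python
    # Toy sketch of the halting-problem diagonalization. Suppose a perfect
    # predictor halts(program, arg) existed; the stub below is obviously NOT
    # one, it's only here so the example runs.

    def halts(program, arg):
        # Placeholder: an always-correct version of this function cannot exist.
        return True

    def paradox(program):
        if halts(program, program):
            while True:        # loop forever exactly when the predictor says we halt
                pass
        return "halted"        # halt exactly when the predictor says we loop

    # paradox(paradox) halts if and only if halts(paradox, paradox) says it doesn't,
    # so any implementation of halts() is wrong about at least this one program.
    print(halts(paradox, paradox))
    ```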