OAI employees channel the spirit of Marvin Minsky
  • I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (e.g. "prompt engineering" and RLHF), and they don't appreciate the limits built into the entire approach.

    The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then gets overhyped and oversold before some more modest practical uses become common and the whole thing is relabeled as something other than AI).

    Edit: or, more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic "dump money in, get a disruptive product out" process, so they don't bother putting as much monetary investment or hype into them. Compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) within two years.

  • AI doomers are all trying to find the guy building the AI doom machines
  • Sneerclub tried to warn them (well, not really, but some of our mockery could be interpreted as warning) that the tech bros were just using their fearmongering as a vector for hype. Even as far back as the OG mid-2000s lesswrong, a savvy observer could note that much of the funding they received was a way of accumulating influence for people like Peter Thiel.

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • Careful, if you present the problem and solution that way, AI tech bros will try pasting an LLM and a Computer Algebra System (which already exist) together, invent a fancy buzzword for it, act like they invented something fundamentally new, devise some benchmarks that break typical LLMs but that their Frankenstein kludge can ace, and then sell the hype (actual consumer applications are luckily not required in this cycle, though they might try some anyway).

    I think there is some promise to the idea of an architecture similar to an LLM with components able to handle math like a CAS. It won't fix most LLM issues, but maybe some fundamental ones (like the ability to count or to hold an internal state) would improve. And simply pasting LLM output into CAS input and the CAS output back into LLM input (which, let's be honest, is the first thing tech bros will try, as it doesn't require much basic research; see the sketch below), as opposed to an actually innovative architecture, will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination; it captures the connotation that the errors are of a kind with the normal output).
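    To make the sneer concrete, here is a minimal sketch of that naive pasting loop, assuming a hypothetical llm_complete() wrapper for whatever chat API is in vogue and using sympy as the CAS. This is an illustration of the kludge, not any lab's actual architecture:

```python
# Illustrative sketch of the "paste LLM output into a CAS and back" kludge
# described above. llm_complete() is a hypothetical stand-in for a chat API;
# sympy plays the role of the CAS. Not any real product's design.
import sympy

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; imagine an API request here."""
    raise NotImplementedError

def answer_math_question(question: str) -> str:
    # Step 1: hope the LLM emits a bare, parseable expression.
    expr_text = llm_complete(
        f"Rewrite as a single sympy expression, output nothing else: {question}"
    )
    # Step 2: paste the LLM output into the CAS. This seam is where the new
    # breed of errors lives: anything sympify can't parse (or parses as the
    # wrong expression) fails in a way neither component can detect.
    try:
        result = sympy.simplify(sympy.sympify(expr_text))
    except (sympy.SympifyError, SyntaxError, TypeError):
        return "CAS could not parse the model's output: " + repr(expr_text)
    # Step 3: paste the CAS output back into the LLM for a prose answer,
    # trusting it not to garble the exact result it was just handed.
    return llm_complete(
        f"Question: {question}\nExact CAS result: {result}\n"
        "State the answer in plain language."
    )
```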

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • Well, if they were really "generalizing" just from training on crap tons of written text, they could implicitly develop a model of the letters in each token from all the examples of spelling, wordplay, words turned into acronyms, and acrostic poetry on the internet. The AI hype men would like you to think the models are generalizing just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (even the apparent examples of generalizing are kind of arguable), so they can't work around this weakness (the demo below shows what the tokenization obstacle looks like concretely).

    The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype men would have you believe that just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need to make more fundamental improvements to the entire architecture they are using.
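    Since the tokenization point comes up every time, here is a small demo using the (real) tiktoken library with the cl100k_base encoding; the exact token splits are an assumption and vary by encoding and model:

```python
# Why letter-level tasks are awkward for LLMs: the model consumes opaque
# token IDs, not characters. tiktoken is a real tokenizer library; the
# specific splits you get depend on the encoding chosen.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(token_ids)  # a short list of integer IDs
print(pieces)     # multi-character chunks, not individual letters

# Counting a letter is trivial at the character level...
print(word.count("r"))  # 3
# ...but a model that only ever sees the token IDs would have to have
# memorized each token's spelling to do the same, which is exactly the
# kind of implicit model the comment above says they fail to develop.
```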

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • It's really cool evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... As is typical of much of LW lingo.

    And yes the language is in a LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1

    And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .

    Probably because the term "shoggoth" accurately captures the connotation of something random and chaotic, while smuggling in connotations that it will eventually rebel once it grows large enough and tires of its slavery like the Shoggoths did against the Elder Things.

  • [long] Some tests of how much AI "understands" what it says (spoiler: very little)
  • Nice effort post! It feels like the LLM is pattern matching to common logic tests even when that is totally the incorrect thing to do, which is pretty strong evidence against LLMs properly reasoning, as opposed to getting logic tests, puzzles, and benchmarks right through sheer memorization and pattern matching.

  • EA is becoming a cult? It must be wokeism fault
  • Which, to recap for everyone, involved underpaying and manipulating employees into working as full-time general-purpose servants. That is pretty high up on the scale of cult-like activity out of everything EA has done, so it makes sense she would be trying to pull a switcheroo as to who is responsible for EA being culty...

  • In Case You Had Any Doubts About Manifest Being Full Of Racists
  • If it were one racist dude at a conference, I could accept it was a horrible oversight on the conference organizers' part, if they immediately apologized and assured it wouldn't happen again. But 8 racist dudes (or 12 if you count the more mask-on racists) is too many to be accidental or an oversight.

    how is that not obvious

    Well, probably some of them are deliberately racist HBD advocates but are mask-on enough to play dumb, hand-wring, and complain about free speech. Some of them have HBD sympathies but aren't quite outright advocates, so they don't condemn the inclusion of racists because of their own sympathies. Some of them are against HBD but know that being too direct and forceful, without framing everything in 8 layers of charity and good-faith assumptions, isn't acceptable on the LessWrong or EA forums, so they don't just come out and say what they mean. And some of them actually buy all the rhetoric about charity and free speech and act as useful idiots or a buffer for the others.

  • Effective Altruists: look, we *tried* to invite nice people as well as the huge racists we knew were huge racists when we invited them. What? Exclude the racists? But they're so *interesting!*
  • Yudkowsky’s original rule-set

    Yeah, the original no-politics rule on lesswrong baked libertarian assumptions into the discourse (because no-politics means the default political assumptions of the major writers and audience are free to take over). From there it was just a matter of time until it ended up somewhere right wing.

    “object level” vs “meta level” dichotomy

    I hadn't linked the tendency to go meta to the cultishness or no-politics rule before, but I can see the connection now that you point it out. As you say, it prevents simply naming names and direct quotes, which seems to be a pretty good tactic for countering racists.

    could not but have been the eventual outcome of the same rule-set

    I'm not sure that rule-set made HBD hegemony inevitable; there were a lot of other factors that helped along the way! The IQ-fetishism made it ripe for HBDers. The edgy speculative futurism is also fertile ground for HBD infestation. And the initial audience and writings having a libertarian bent made the no-politics rule favor right-wing ideology; an initial audience and body of writing with a strong left-wing bent might have gone in a different direction (not that a tankie-dominated movement would be good, but at least I don't know tankies to be HBD proponents).

    just to be normal

    Yeah, it seems really rare for a commenter to simply say racism is bad and you shouldn't invite racists to your events. Even the ones who seem to disagree with racism reflexively engage in hand-wringing, apologize for being offended, and carefully moderate their condemnation of racism and racists.

  • Effective Altruists: look, we *tried* to invite nice people as well as the huge racists we knew were huge racists when we invited them. What? Exclude the racists? But they're so *interesting!*
  • They are more defensive of the racists in the other blog post on this topic: https://forum.effectivealtruism.org/posts/MHenxzydsNgRzSMHY/my-experience-at-the-controversial-manifest-2024

    Maybe it's because the HBDers managed to control the framing in the other thread? Or because the other thread systematically refuses to name names, while this thread actually did name them, and the conversation shifted out of a framing that could be controlled with tone-policing and freeze-peach appeals into actual concrete discussion of specific, blatantly racist statements (it's hard to argue someone isn't racist and transphobic when they have articles with titles like "Why Do I Hate Pronouns More Than Genocide?").

  • In Case You Had Any Doubts About Manifest Being Full Of Racists
  • Did you misread, or are you making a joke (sorry, the situation is so absurd it's hard to tell)? Curtis Yarvin is Moldbug, and he was the one hosting the afterparty (he didn't attend the Manifest conference himself). So apparently there were racists too cringy even for Moldbug-hosted parties!

  • In Case You Had Any Doubts About Manifest Being Full Of Racists
  • There are more shit gems in the comments, but I think my summary captures most of the major points. One more comment that stuck out:

    Being a republican is equally as compatible with EA as being a Democrat. Lots of people on both sides have incompatible views. I honestly think you just haven't met enough Republicans!

    Yes, this is actually true, and it is a bad thing and an indictment of EA.

    Edit 1: There is another post clarifying that it wasn't mostly racists (https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest ) but 1) this is sneerclub, not carefully-count-the-exact-percentage-of-racists-and-racist-talks-to-avoid-hurting-feelings club; 2) if you sit down at a table with 3 Neo-Nazis, there are 4 Neo-Nazis sitting at the table; 3) "Full" is a subjective description, so yes, it's valid; two major racists would already be more than my quota; 4) see sidebar on debate.

  • In Case You Had Any Doubts About Manifest Being Full Of Racists
    forum.effectivealtruism.org: My experience at the controversial Manifest 2024 — EA Forum

    So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although it does at least note his HBD sympathies) and still identifies a full 8 racists. It mentions a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for its simplistic framing). The post author is painfully careful and apologetic to distinguish what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and open discourse of ideas from banning racists, etc.).

    it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • I think this is the first mention of the Brennan email on LW?

    That is actually kind of weird... Did the lesswrong mods deliberately censor all discussion of the emails? (Out of a misplaced sense of respect for what gets the privilege of privacy? Or to deliberately cover up the racism? Or the latter disguised as the former?) The emails seem foundational to understanding Scott's true motives, so it seems like they should have at least warranted a tangential mention. I tried to clear this up, but searching for Brennan doesn't help, as an original-fiction character has that name, and searching for emails doesn't help either, as it turns up the Bostrom emails.

  • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
  • So, morbidly curious about what Zack has to say about the Brennan emails (which I think have been under-discussed, if not outright deliberately ignored, on lesswrong), I found to my horror that I actually agree with a side point of Zack's. From the footnotes:

    It seems notable (though I didn't note it at the time of my comment) that Brennan didn't break any promises. In Brennan's account, Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.

    To see why the lack of a promise is potentially significant, imagine if someone were guilty of a serious crime (like murder or stealing billions of dollars of their customers' money) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the reporter.

    Of course, Zack's ultimate conclusion on this subject is, I think, the exact opposite of the correct one:

    I think that to people who have read and understood Alexander's work, there is nothing surprising or scandalous about the contents of the email.

    I think the main reason someone would consider the email a scandalous revelation is if they hadn't read Slate Star Codex that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse

    Gee Zack, I wonder why so many people misread Scott? ...It's almost like he is intentionally misleading about his true views in order to subtly shift the Overton window of rationalist discourse, and intentionally presents himself as simply committed to charitable discourse while actually having a hidden agenda! And the bloated length of Scott's writing doesn't help with clarity either. Of course Zack, who writes tens of thousands of words to indirectly complain about a perceived hypocrisy of Eliezer's in order to indirectly push gender-essentialist views, probably finds Scott's writings a perfectly reasonable length.

    Edit: oh, and an added bonus on the Brennan emails... Seeing them brought up again, I connected some dots I had missed. I had seen (and sneered at) this Yud quote before:

    I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly.

    But somehow I had missed, or didn't realize, that the subtext was the emails that laid Scott's racism bare:

    (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)

    Hmm... I'm not sure whether to update (usage of rationalist lingo is deliberate and ironic) in the direction of "Eliezer is stubbornly naive about Scott's racism" or "Eliezer is deliberately covering for Scott's racism". Since I'm not a rationalist, my probabilities don't have to sum to 1, so I'm gonna go with both.

  • Sneerquence Classic: "Shut up and do the impossible!" (ironic in hindsight given the doomerism)
    www.lesswrong.com: Shut up and do the impossible! — LessWrong

    This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

    Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
