  • I highly suspect the voice analysis thing was just to confirm what they already knew, otherwise it would have been like looking for a needle in a haystack.

    People on Twitter have been speculating that someone who knew him simply ratted him out.

  • The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

    For some context: prior to the release of chatGPT, I didn't realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn't make the association, and I didn't really know about anything OpenAI did prior to GPT-2 or so.

    So, prior to chatGPT, the only "rationalist" AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.

    The rationalists seemed to be all talk and no action, so really I was surprised that a rationalist-affiliated organization had any marketable software product at all, "AI" or not.

    And FWIW, I was taught a different definition of AI when I was in college, but it seems like it's one of those terms that gets defined in different ways by different people.

  • I suppose the goalpost shifting is my fault: the original comment was about Sutskever, but I shifted to talking about OpenAI in general, in part because I don't really know to what extent Sutskever is individually responsible for OpenAI's tech.

    also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.

    I think people are missing the irony in that comment.

  • The accomplishment I'm referring to is creating GPT/DALL-E. Yes, it's overhyped, unreliable, arguably unethical, and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try out the damn thing for myself, I find that I'm still impressed with it as a technical feat. At the very, very least I think it's a plausible competitor to Google Translate for the languages I've tried, and I have to admit I've found it to be actually useful when writing regular expressions and a few other minor programming tasks.

    In all my years of sneering at Yud and his minions I didn't think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.

    With regards to "mocking the promptfans and calling them names": I do think that ridicule can be a powerful weapon, but I don't think it will work well if we overestimate the actual shortcomings of the technology. And frankly sneerclub as it exists today is more about entertainment than actually serving as a counter to the rationalist movement.

  • I hate to say it, but even sneerclub can get a bit biased and tribal sometimes. He who fights with monsters, and so on.

    I suspect watching the rationalists as they bloviate and hype themselves up and repeatedly fail for years on end has lulled people into thinking that they can't do anything right, but I think that's clearly not the case anymore. Despite all the cringe and questionable ethics, OpenAI has made a real and important accomplishment.

    They're in the big leagues now. We should not underestimate the enemy.

  • After lurking on rationalist Discord servers, I discovered that the rationalist existential-risk-themed Burning Man camp was also called "Black Lotus", which is apparently a reference to an overpowered MtG card. (Side note: theming your Burning Man camp on existential risk sounds like a bad trip waiting to happen.)

    Anyway, I think the Black Lotus camp might have been the one that Brent Dill tried to bring an underage student to, as referenced below:

    https://rationality.org/resources/updates/2019/cfars-mistakes-regarding-brent

    https://medium.com/@mittenscautious/warning-2-153ed9f5f1f3

  • In a world of moral totalitarianism, sometimes freedom looks like a short story about sex tourism in the Philippines.

    I literally lol'd. Beyond parody.

    the answer is to get Peter Thiel to try to magic up Dimes Square out of nothing, isn’t it?

    On an actually serious note: when I look back at the multiple years I spent on sneerclub and otherwise following the rationalists, I increasingly feel that I had been tilting at windmills. I spent most of that time making fun of them instead of looking into their finances, and in doing so I missed the big picture and simply hadn't realized how integral Peter Thiel was in propping them up and building a network to support them.

    Thiel funds or funded MIRI, EA groups, Curtis Yarvin, FTX and OpenAI.