
Posts
18
Comments
587
Joined
2 yr. ago

  • That he cites as if it were a philosophy paper, to non-rationalists.

  • Yep, from what I can tell second-hand, dath ilan worldbuilding definitely skews towards "it doesn't count as totalitarianism if the enforced orthodoxy is in line with my Obviously Objectively Correct and Overdetermined opinions".

  • I can smell the 'rape (play) is the best kind of sex actually' from over here.

  • OG Dune actually had some complex and layered stuff to say about AI before the background lore was retconned to dollar store WH40K by the current handlers of the IP.

    There was no superintelligence, thinking machines were gatekept by specialists who formed entrenched elites, overreliance on them was causing widespread intellectual stagnation, and people were becoming content with letting unknowable algorithms decide matters of life and death.

    The Butlerian Jihad was first and foremost a cultural revolution.

  • I'm still not sure if they actually grasp the totalitarian implications of going ham on tech companies and research this way. He sure doesn't get called out about his 'solutions' that imply that some sort of world government has to happen that will also crown him Grand Central Planner of All Technology.

    It's possible they just believe the eight [specific consumer electronic goods] per household is doable, and at worst no more authoritarian than the tenured elites thumbing their noses at HBD research.

  • If you're having to hide your AIs in faraday cages in case they get uppity, why are you even doing this, you are already way past the point of diminishing returns. There is no use case for keeping around an AI that actively doesn't want anything to do with you, at that point either you consider that part of the tech tree a dead end or you start some sort of digital personhood conversation.

    That's why Yud (and anthropic) is so big on AIs deceiving you about their 'real' capabilities. For all of MIRI's talk about the robopocalypse being a foregone conclusion, the path to get there sure is narrow and contrived, even on their own terms.

  • Who needs time travel when you have Timeless Updateless Functional Decision Theory, Yud's magnum opus and an arcane attempt at a game theoretic framework that boasts 100% success at preventing blackmail from pandimensional superintelligent entities that exist now in the future.

    It for sure helped the Zizians become well integrated members of society (warning: lesswrong link).

  • I for one don't mind if my reddit crap poisons future LLMs.

  • To be fair to Mr. Gay, he went in with the noblest of intentions: to get a chance to ask Thiel how in the hell he doesn't see that if anyone around here is the antichrist, it's him.

  • He's kind of past his prime, I think; the humor has become alternately a bit too esoteric or a bit too obvious, and kind of stale in general. Nothing particularly objectionable about the author comes to mind otherwise.

  • I think it's more like you'll have a rat commissar deciding which papers get published and which get memory-holed while diverting funds from cancer research and epidemiology to research on which designer mouth bacteria can boost their intern's polygenic score by 0.023%

  • "not on squeaking terms"

    by the way, I first saw this in the Stubsack

  • Genetic engineering and/or eugenics is the long-term solution. Short-term, you are supposed to ban GPU sales, bomb non-compliant datacenters, and have all the important countries sign an AI non-proliferation treaty that will almost certainly involve handing over the reins of human scientific progress to rationalist-approved committees.

    Yud seems explicit that the point of all this is to buy enough time to create our metahuman overlords.

  • Honestly, it gets dumber. In rat lore the AGI escaping restraints and self improving unto godhood is considered a foregone conclusion, the genetically augmented smartbrains are supposed to solve ethics before that has a chance to happen so we can hardcode a don't-kill-all-humans moral value module to the superintelligence ancestor.

    This is usually referred to as producing an aligned AI.

  • Apparently genetically engineering ~300 IQ people (or breeding them, if you have time) is the consensus solution on how to subvert the acausal robot god, or at least the best the vast combined intellects of siskind and yud have managed to come up with.

    So, using your influence to gradually stretch the Overton window to include neonazis and all manner of caliper-wielding lunatics in the hope that eugenics and human experimentation become cool again seems like a no-brainer, especially if you are on enough uppers to kill a family of domesticated raccoons at all times.

    On a completely unrelated note, adderall abuse can cause cardiovascular damage, including heart issues or stroke, but also mental health conditions like psychosis, depression, anxiety and more.

  • For reference, that's the guy who wants Thiel to give him $40M to put a basketball-player-sized titanium cross on the moon.

  • In collaboration with cryptocurrency outfits Coinbase, MetaMask, and the Ethereum foundation, Google also produced an extension that would integrate the cryptocurrency-oriented x402 protocol, allowing for AI-driven purchasing from crypto wallets.

    what could possibly go wrong

    In either case, the goal is to maintain an auditable trail that can be reexamined in cases of fraud.

    Which is a thing that you only need to worry about if you use these types of agents.

    Which in any case you can't, because

    The protocol is built for a future in which AI agents routinely shop for products on customers’ behalf and engage in complex real-time interactions with retailers’ AI agents.

  • Can't see how this doesn't defeat the purpose: if you have a mock data generator of sufficient fidelity, you already have a well-defined mechanism that describes the data, so shouldn't you be using that instead of training a new model to capture the characteristics of the generator that produced them?
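    A minimal sketch of the circularity being pointed at, using a hypothetical toy generator (the spec, names, and parameters here are all made up for illustration, not from the comment): any model "learned" from the mock generator's output can only recover a noisy estimate of parameters you already wrote down yourself.

    ```python
    import random
    import statistics

    # The generator's spec (mu, sigma) already IS a complete description
    # of the data -- these are the parameters we wrote down ourselves.
    MU, SIGMA = 5.0, 2.0

    def mock_data_generator(n, seed=42):
        """High-fidelity mock generator: samples straight from the known spec."""
        rng = random.Random(seed)
        return [rng.gauss(MU, SIGMA) for _ in range(n)]

    # "Learning" a model from the generator's output just estimates,
    # with sampling noise, what the spec states exactly.
    samples = mock_data_generator(10_000)
    fitted_mu = statistics.fmean(samples)
    fitted_sigma = statistics.stdev(samples)

    print(f"spec:   mu={MU}, sigma={SIGMA}")
    print(f"fitted: mu={fitted_mu:.2f}, sigma={fitted_sigma:.2f}")
    ```

    The fitted values land near the spec, but strictly worse than the spec itself, which is the point: the generator's definition was the better "model" all along.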