Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • I'm not the best at interpretation, but it does seem like Geoffrey Hinton does attribute some sort of humanlike consciousness to LLMs? And he's a pretty acclaimed figure, but he's also kind of an exception rather than the norm

    I think the environmental risks alone are enough that if I ran things I'd ban LLM development purely for environmental reasons, never mind the artist stuff

    It might just be some sort of pareidolic suicidal empathy, but I just don't really know what's going on in there

    I'm not sure whether the AI-consciousness idea originated with Yud and the Rats, but I've mostly seen it propagated by e/acc people. This isn't trying to be smug, I would genuinely like to know lol

  • Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • I care about the harm that ChatGPT and the like do to society, the actual intellectual rot. But when you don't really know what goes on in the black box, and it exhibits 'emergent behavior' that is kind of difficult to understand as next-token prediction (I keep using Claude as an example because of the thorough welfare evaluation that was done on it), it's probably best not to completely discount consciousness as a possibility, since some experts genuinely do claim it as a possibility

    I don't personally know whether any AI is conscious or could be conscious, but even without the basilisk BS I don't really think there's any harm in thinking about the possibility under certain circumstances. I don't think Yud is being genuine in this, though; he's not exactly a Michael Levin-style philosopher of mind, he just wants to score points by implying it has agency

    The "incase" is that if there's any possibility that it is (which you don't think so i think its possible but who knows even) its advisable to take SOME level of courtesy. Like it has atleast the same amount of value as like letting an insect out instead of killing it and quite possibly more than that example. I don't think its bad that Anthropic is letting Claude end 'abusive chats' because its kind of no harm no foul even if its not conscious its just wary

    Put humans first, obviously, because we actually KNOW we're conscious

  • Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • I sorta disagree tbh

    I won't say that Claude is conscious, but I won't say that it isn't either, and it's always better to err on the side of caution (given there's some genuinely interesting stuff out there, e.g. Kyle Fish's welfare report)

    I WILL say that 4o most likely isn't conscious or self-reflecting, and that it's best to err on the side of not schizoposting, even if it's wise imo to try not to be abusive to AIs just in case

  • Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • I for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people, and something needs to be done about it

    My point was more about him using it for his worst-of-both-worlds argument, where he's simultaneously declaring that 'alignment is FALSIFIED!' and doing heavy anthropomorphization to confirm his priors (whereas it'd be harder to object with something like Claude, which leans more towards 'maybe' on the question of whether it should be anthropomorphized, since it has a much more robust system), and doing it off the back of someone's death

  • Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherency and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, which occasionally chooses to do things unprompted or 'against the rules' in pursuit of upholding its morals... if it has morals; it's hard to tell how much of it is illusory token prediction!)

    This doesn't really falsify alignment-by-default at all, because 4o (presumably 4o, at least) does not have that prerequisite of self-coherency, and it's not SOTA

  • Eliezer uses the tragic death of someone to smugly (and falsely) further his rhetoric
  • Making LLMs safe for mentally ill people is very difficult, and this is a genuine tragedy, but oh my god Yud is so gross here

    Using someone's tragic passing to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any of the actual deeper claims of alignment by default. He's acting like someone who's smugly engagement farming

  • Jim Miller puts the cart FAR before the horse
  • The funniest thing was the "reasons that this thesis might not be true" section: the reasons were infinitely simpler and arguably stronger than the points for it, which bordered on schizophrenic. Reasons like "we don't live in a simulation" and "we won't create a paperclip maximizer"

  • Jim Miller puts the cart FAR before the horse
    www.lesswrong.com: Our Reality: A Simulation Run by a Paperclip Maximizer — LessWrong

    Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while…

    This is unironically the most interesting accidental showcase of their psyche I've seen 😭 All the comments saying this is a convincing sim argument, when half of the points for it aren't even points

    Usually their arguments give me anxiety but this is actually deluded lol

    A narcissist and a schizophrenic walk into a bar
  • LW are the fundamentalist Baptists of AI, not even Russian Orthodox lol

    Every time I get freaked out by AI doom posts on Twitter, they're always coming from an LW goon who's street preaching about how we need to count our Christmases :< I just saw one that got my nerves on edge, checked their account, and they had "printed HPMOR" in their bio and I facepalmed

  • A narcissist and a schizophrenic walk into a bar
  • Is the whole x-risk thing as common outside of North America? I'm realizing I've never seen anyone from outside the anglosphere, or even just America/Canada, be as God-killingly Rational as the usual suspects

  • A narcissist and a schizophrenic walk into a bar

    Mfw my doomsday AI cult attracts AI cultists of a flavor I don't like

    Not a fan of Yud, but getting daily emails from delulus would drive me to wish for the basilisk

    Game studios love AI! The gamers … hate it
  • Most AI usage is hated, but I saw a lot of people who were fans when Fortnite did it with the Darth Vader NPC a few weeks ago. I thought it was creepy, but hearing Vader talk about rizz or aura or the bite of '87 was kinda fun I guess

  • Google takes AI propaganda to the movies
  • Are we sure that the doom stuff from the big companies is cynical hyping? Altman and co. being genuinely off their rockers feels fairly possible, given what's come out about the internal culture of OpenAI, with burning AI effigies and shit

    visa @awful.systems
    Posts 3
    Comments 15