
Posts 98 · Comments 1,971 · Joined 2 yr. ago

  • There is a certain irony to everyone involved in this argument, if it can be called that.

don’t do this “debatefan here” crap here, thanks

This, and similar writing I’ve seen, seems to make a fundamental mistake in treating time like only the next few decades, maybe, exist; that any objective that takes longer than that is impossible and not even worth trying; and that any problem that emerges after a longer period of time may be ignored.

    this isn’t the article you’re thinking of. this article is about Silicon Valley technofascists making promises rooted in Golden Age science fiction as a manipulation tactic. at no point does the article state that, uh, long-term objectives aren’t worth trying because they’d take a long time??? and you had to ignore a lot of the text of the article, including a brief exploration of the techno-optimists and their fascist ties (and contrasting cases where futurism specifically isn’t fascist-adjacent), to come to the wrong conclusion about what the article’s about.

unless you think the debunked physics and unrealistic crap in Golden Age science fiction will come true if only we wish long and hard enough, in which case: aw, precious, this article is about you!

  • it’s appropriate that you think your brain works like an LLM, because you regurgitated this shitty opinion from somewhere else without giving it any thought at all

  • it can’t be that stupid, you must be using yesterday’s model

  • nobody asked you to come in here and advertise for perplexity, but you couldn’t fucking help yourself could you

  • if you’re considering pasting the output of an LLM into this thread in order to fail to make a point: reconsider

  • we didn’t ask for LLM slop, thx

  • I knew you were a lying promptfondler the instant you came into the thread, but I didn’t expect you to start acting like a gymbro trying to justify their black market steroid habit. new type of AI booster unlocked!

    now fuck off

can we agree that 90% of the problem with cigarettes is capitalism and not the actual smoking?

    after all, the genie is out of the bottle. you can’t destroy them, there are tobacco plants grown at home. even if you ban them, you’ll still have people hand-rolling cigarettes.

    it’s fucking weird how I only hear about open source LLMs when someone tries to make this exact point. I’d say it’s because the open source LLMs fucking suck, but that’d imply that the commercial ones don’t. none of this horseshit has a use case.

  • this one was definitely my pleasure

    “how can you fools not see that Wikipedia’s utterly inaccurate summary LLM is exactly like digital art, 3D art, and CGI, which are all the same thing and are/were universally hated(???)” is a take that only gets more wild the more you think on it too, and that’s one they’ve been pulling out for at least two years

    I didn’t catch much else from their posts, cause it’s almost all smarm and absolutely no substance, but fortunately they formatted it like paragraph soup so it slid right off my eyeballs anyway

  • why would anyone want to play as an attractive Puerto Rican when peak sexiness has already been achieved

  • god I looked at your post history and it’s just all this. 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping

    are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?

  • holy shit I’m upgrading you to a site-wide ban

    so many paragraphs and my eyes don’t want any of them

  • Hinton? hey I have a pretty good post summarizing what’s wrong with Hinton, oh wait it was you two weeks ago

    what are we doing here

    you want to know what e/acc is? it’s when some fucker comes and makes the stupidest posts imaginable about LLMs and tries their best to sound like a recycled chan meme cause they think that’ll give them a pass

    bye bye e/acc

  • some experts genuinely do claim it as a possibility

    zero experts claim this. you’re falling for a grift. specifically,

    i keep using Claude as an example because of the thorough welfare evaluation that was done on it

    asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

I don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this, though; he’s not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency

    you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

Like it has at least the same amount of value as, like, letting an insect out instead of killing it

    that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

    you say you acknowledge the harms done by LLMs, but I’m not seeing it.

  • centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

i won’t say that claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution

    the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

claims that LLMs, in spite of all known theories of computer science and information theory, are conscious should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

    if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

    schizoposting

    fuck off with this

even if it’s wise imo to try not to be abusive to AIs just incase

    describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

  • no problem at all! I don’t think the duplicate’s too much of an issue, and this way the article gets more circulation on both Mastodon and Lemmy.

  • E: ah, this is from mastodon. I don’t know how federation etc. works.

    yep! any mastodon post whose first line looks like a subject line and which tags the community is treated by Lemmy as a new thread in that community. now, you might think that’s an awful mechanism in that it’s very hard to get right on purpose but very easy to accidentally activate if you’re linking and properly citing an article in the format that’s most natural on mastodon. and you’d be correct!
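
    for the curious, here’s roughly what that heuristic amounts to. this is a sketch in Python of the mechanism as described above, not Lemmy’s actual code (Lemmy is written in Rust), and every function name and threshold below is hypothetical:

    ```python
    # Sketch of the federation heuristic described above. All names and
    # thresholds are hypothetical; Lemmy's real (Rust) implementation differs.

    def looks_like_subject_line(line: str, max_len: int = 200) -> bool:
        """A short first line with no sentence-ending punctuation reads like a subject."""
        line = line.strip()
        return 0 < len(line) <= max_len and not line.endswith((".", "!", "?"))

    def becomes_new_thread(post_text: str, community_handle: str) -> bool:
        """A Mastodon post becomes a Lemmy thread when its first line looks
        like a subject line and the post mentions the community anywhere."""
        lines = post_text.splitlines()
        if not lines:
            return False
        return looks_like_subject_line(lines[0]) and community_handle in post_text

    # The accidental-activation case: the most natural Mastodon way to cite
    # an article (title on its own line, then the link and a mention)
    # satisfies both conditions and opens a thread without meaning to.
    accidental = (
        "Some Article Title\n"
        "https://example.com/article\n"
        "@techtakes@awful.systems thoughts?"
    )
    print(becomes_new_thread(accidental, "@techtakes@awful.systems"))  # True
    ```

    note that the “accidental” example is exactly the properly-cited-article format mentioned above, which is why it’s so easy to trip by mistake.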

  • SneerClub @awful.systems

    reposting David’s repost of Scott Alexander’s leaked neoreaction and race science emails

    TechTakes @awful.systems

    “Most engineers these days have never been near an actual engine, and nor do they have any need to.” the orange site discusses prompt engineering

    TechTakes @awful.systems

    mass plagiarism is fine, but don’t fuck with ads: OpenAI disables Browse with Bing

    important instance shit @awful.systems

    email notifications and mobile apps are now working, and other minor updates

    TechTakes @awful.systems

    Ah If it isn't the resident LLM skeptic. So deep in it, you've somehow convinced yourself a one shot ~50% hit rate is equivalent to homeopathy.

    important instance shit @awful.systems

    awful.systems is running on a new host, tell me if anything looks broken

    TechTakes @awful.systems

    “Electronics refers to circuit board assembly not discrete electrical components. I doubt you think their diodes or capacitors are US made” hn squabbles over a $2000 phone full of old components

    TechTakes @awful.systems

    "There are two ways to be comfortable breaking rules: to enjoy breaking them, and to be indifferent to them." paully g posts long form linkedin cringe

    TechTakes @awful.systems

    tesla data leak? what tesla data leak?

    bless this jank @awful.systems

    post UI jank and broken bits here

    bless this jank @awful.systems

    awful.systems needs a logo

    important instance shit @awful.systems

    wanna see some code? c'mere