Posts: 9 · Comments: 140 · Joined: 2 yr. ago

  • Ironically, in a video game, someone like Musk would at most be an NPC, and possibly not even that (just a set of old newspaper clippings / terminal entries in Fallout / etc.). Yudkowsky would be just a background story explaining some fucked-up cult.

    This is because they are, ultimately, uninteresting to simulate: their lives are well documented and devoid of any genuine challenge (they get things by selection bias rather than by effort, so simulating them is like simulating a lottery winner rather than a lottery). They exist to set the scene for something interesting.

  • I think the question of "general intelligence" is kind of a red herring. Evolution, for example, creates extremely complex organisms and behaviors, all without any "general intelligence" working towards some overarching goal.

    The other issue with Yudkowsky is that he's an unimaginative fool whose only source of insights on the topic is science fiction, which he doesn't even understand. There is no fun in having Skynet start a nuclear war and then perish in the aftermath as the power plants it depends on cease working.

    Humanity itself doesn't possess the kind of intelligence envisioned for "AGI". When it comes to science and technology, we are an all-powerful hivemind. When it comes to deciding what to do with said science and technology, we are no more intelligent than an amoeba crawling along a gradient.

  • I don't think the quantum hype has much to do with quantum mechanics. It is a people phenomenon.

    In times past, people who understood that stuff would be comfortably living the American dream, not pursuing grifts. There was a relatively sharp distinction between grifters and non-grifters.

    With increased social stratification, that time is long gone; unless you're part of the 0.01%, however qualified you are, you won't feel financially secure enough not to go along with the flow set by the money guys. And they are a lot less interested in listening; they are important people, and the pay gap between them and a physicist is larger than the gap between a CEO and a part-time cleaner used to be.

    The money guys, on the other hand, believe they can make things happen whenever they want by pouring money into them, and do not believe that details are important. In a sense they are right, because a lot of them do profit by pouring money into things that can't ultimately pan out but can still be bought up by a large corporation using other people's money (then the CEO of said large corporation goes on to run their own startup).

    On top of that, the era of rapid growth for the software and electronics industry was obviously coming to a close, but nobody with money has any other ideas, so they will push it as far as they can. That drives the hype bubbles.

  • To argue by analogy, it’s not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.

    I suspect that "artificial intelligence" may be a bit more like making an artificial bird that self-replicates, with computers and AI as they exist now being somewhere in between thrown rocks and gliders.

    We only ever "beat" biology by cheating: removing the core requirement of self-replication. An airplane factory that had to scavenge for all the rare elements involved in making a turbine would never fly. We have never actually beaten biology; a supersonic aircraft may be closer to a rock thrown off a cliff than to something that surpasses biology.

    That "cheat code" shouldn't be expected to apply to Skynet or ASI or whatever, because Skynet is presumably capable of self-replication. It would be pretty odd if "ASI" were the first thing on which we actually beat biology.

  • The thing about the synapses-etc. argument is that the hype crowd argues that the AI could perhaps wind up doing something much more effective than whatever-it-is-that-real-brains-do.

    If you look at capabilities, however, it is inarguable that "artificial neurons" seem intrinsically a lot less effective than real ones, if we consider small animals (e.g. a jumping spider or a bee, or even a roundworm).

    It is a rather unusual situation. When it comes to things like converting chemical energy to mechanical energy, we did not have to fully understand and copy muscles to build a steam engine with a higher mechanical power output than you could get out of an elephant. The same was true of arithmetic, hence the expectation of imminent AI in the 1960s.

    I think it boils down to intelligence being a very specific thing that evolved for a specific purpose, less like "moving underwater from point A to point B" (which a submarine does pretty well) and more like "fish doing what fish do". The submarine represents very little progress towards fishiness.

  • Hyping up AI is bad, so it's alright to call someone a promptfondler for fondling prompts.

    I mostly see "clanker" in reference to products of particularly asinine promptfondling: spambot "agents" that post and even respond to comments, LLM-based scam calls, call center replacements, etc.

    These bots don't derive their wrongness from the wrongness of promptfondling; they are part of why promptfondling is wrong.

    > Doesn't clanker come from some Star Wars thing where they use it like a racial slur against robots, who are basically sapient things with feelings within its fiction? Being based on "cracker" would be alright,

    I assume the writers wanted to portray the robots as unfairly oppressed, while simultaneously not trivializing actual oppression of actual people (the way "wireback" would have, or, I dunno, "cogger" or something).

    > but the way I see it used is mostly white people LARPing a time and place when they could say the N-word with impunity.

    Well yeah, that would indeed be racist.

    > I'm seeing a lot of people basically going "I hate naggers, these naggers are ruining the neighborhood, go to the back of the bus nagger, let's go lynch that nagger" and thinking that's funny because haha, it's not the bad word, technically.

    That just seems like an instance of good ol' anti-person racism / people trying to offend other people while not particularly giving a shit about the bots one way or the other.

  • > we should recognize the difference

    The what now? You don't think there's a lot of homophobia that follows the "castigating someone for what they do" format? Or do you think it's a lot less bad according to some siskinded definition of what makes slurs bad that somehow manages to completely ignore anything that actually makes slurs bad?

    > I think that's the difference between "promptfondler" and "clanker". The latter is clearly inspired by bigoted slurs.

    Such as... "cracker"? Given how the law protects but doesn't bind AI, that seems oddly spot-on.

  • Note also that genuinely labor-saving stuff, like the Unity engine plus the Unity Asset Store, did result in an absolute flood of shovelware on Steam back in the mid-2010s (although that probably had as much to do with Steam FOMO-ing over the possibility of not letting the next Minecraft onto Steam).

    As a thought experiment, imagine an unreliable labor-saving tool that speeds up half* of the work 20x and slows down the other half 3x. You would end up 1.525 times slower (see the quick check after the list below).

    The fraction of work (measured by hours, not by lines) that AI helps with is probably less than 50%, and the speedup is probably worse than 20x.

    The slowdown could be due to some combination of:

    • Trying to do it with AI until you sink too much time into that and then doing it yourself (>2x slowdown here).
    • Being slower at working with the code you didn't write.
    • It being much harder to debug code you didn't write.
    • Plagiarism being inferior to using open source libraries.

    footnote: "half" as measured by the pre-tool hours.
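
    A quick check of that arithmetic (a minimal sketch; the 50% / 20x / 3x split is the hypothetical above, not measured data):

    ```python
    baseline = 1.0           # time to do the work without the tool, normalized

    sped_up_fraction = 0.5   # half the work, measured in pre-tool hours
    speedup = 20.0           # that half goes 20x faster
    slowdown = 3.0           # the other half goes 3x slower

    with_tool = (sped_up_fraction / speedup) + (1 - sped_up_fraction) * slowdown
    print(with_tool / baseline)  # 1.525 -> 1.525x slower overall
    ```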

  • And yet you are the one person here who is equating Mexicans and Black people with machines. People with disabilities, too, huh? Lemme guess: next time we're pointing and laughing at how some hyped-up "PhD-level chatbot" can't count the Es in dingleberry, you'll be likening that to ableism.

    When you're attempting to humanize machines by likening insults against machines to insults against people, it does more to dehumanize people than to humanize machines.

    edit: Also, I've never seen, and couldn't find, instances of "wireback" being used outside pro-bot sentiments and hand-wringing about how anti-bot people are akhtually racist. Have you, or is it all second- or third-hand? It's entirely possible that it's something botlickers (can I say that, or is that not OK?) came up with.

    edit: Especially considering that these "anti-robot slurs" seem to originate in scifi stories where the robots are being oppressed, whereby the author purposefully chooses the slur to undermine the position of the anti-robot characters in the story. It may well be that, for the same reason authors choose these slurs, they are rarely used in earnest.

  • To be honest, hand-wringing over "clanker" being a slur and all that strikes me as increasingly equivalent to hand-wringing over calling nazis nazis. The only thing that rubs me the wrong way is that I'd prefer the new so-called slur to be "chatgpt", genericized and negatively connoted.

    If you are in the US, we've had our health experts replaced with AI; see the "MAHA report". We're one moron AI-pilled president away from a less fun version of Skynet, whereby a chatbot talks the president into launching nukes and kills itself along with a few billion people.

    Complaints about dehumanizing these things are even more meritless than a CEO complaining that someone is dehumanizing Exxon (which is at least made of people).

    These things are extensions of those in power, not marginalized underdogs like the cute robots in scifi. As an extension of corporations, they already have more rights than any human: imagine what would happen to a human participant in a criminal conspiracy to commit murder, and contrast that with what happens when a chatbot talks someone into a crime.

  • Python code really requires 100% branch coverage in tests as an absolute minimum... with statically typed languages, the compiler will catch some types of bugs in branches you don't test; with Python, chances are it won't.

    edit: basically, think of non-covered lines the way you think about files you didn't compile.
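
    A minimal illustration of the kind of bug that hides in an untested branch (a hypothetical function, my own sketch):

    ```python
    def total(prices, discount=None):
        subtotal = sum(prices)
        if discount is not None:
            # Typo below: "discuont". A compiler (or a type checker) would
            # reject this at build time; CPython only raises NameError
            # when a test actually exercises this branch.
            return subtotal - discuont
        return subtotal

    print(total([1.0, 2.0]))  # passes fine: the buggy branch never runs
    ```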

  • Yeah, a new form of apologism that I've started seeing online is "this isn't a bubble! Nobody expects an AGI, that's just Sam Altman; it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each".

    Which is next-level idiotic, besides the numbers just not adding up. There's only so much open source to plagiarize. It is a very niche activity! It'll plateau, and then a few months later tiny single-GPU models will catch up to this river-boiling shit.

    The answer to that has always been the singularity bullshit, where the biggest models just keep staying ahead by such a large factor that nobody uses the small ones.

  • Lol, I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier "AI risk" researcher would, insofar as there is any AI risk, only increase it.

    Boy, did I end up more right on that than in my most extreme imagination. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.

    The billionaires who listened are spending hundreds of billions of dollars, soon to be trillions if not already, on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout "our product might kill everyone, idk" to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

  • To be entirely honest, I don't even like the arguments against EDT.

    The smoking lesion is hilarious. So there's a lesion that is making people smoke. It is also giving them cancer in some unrelated way which we don't know, trust me bro. Please bro, don't leave this decision to the lesion; you gotta decide to smoke. It would be irrational to decide not to smoke if the lesion's gonna make you smoke. Correlation is not causation, gotta smoke, bro.

    Obviously, in that dumb-ass hypothetical, the conditional probability is conditional on the decision, not on the lesion, while the smoking in the cancer cases is conditional on the lesion, not on the decision. If those two were indistinguishable, then the right decision would be not to smoke. More generally, adopting causal models without statistical data to back them up is called "being gullible". (A toy simulation of that distinction is sketched below.)
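
    A quick sanity check (a toy simulation with made-up probabilities, purely illustrative):

    ```python
    import random

    random.seed(0)
    N = 100_000

    lesion = [random.random() < 0.3 for _ in range(N)]
    # The lesion, and only the lesion, causes cancer.
    cancer = [l and random.random() < 0.8 for l in lesion]

    # Population A: the lesion drives the smoking (the hypothetical's setup).
    smoke_a = [l and random.random() < 0.9 for l in lesion]
    # Population B: a deliberate decision rule, independent of the lesion.
    smoke_b = [random.random() < 0.5 for _ in range(N)]

    def p_cancer(smoke):
        yes = [c for c, s in zip(cancer, smoke) if s]
        no = [c for c, s in zip(cancer, smoke) if not s]
        return sum(yes) / len(yes), sum(no) / len(no)

    print(p_cancer(smoke_a))  # big gap: smoking is a proxy for the lesion
    print(p_cancer(smoke_b))  # ~equal: the decision carries no information
    ```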

    The tobacco companies actually did manufacture such data, too; that's where "type-A personality" comes from.

  • Tbh, whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.

    Considering just the physical consequences of a decision doesn't work (unless there's a fault, a single channel's output doesn't make it through the voting electronics, so the decisions it makes for the no-fault alternative never have physical consequences).

    Each autopilot simulating the two or more other autopilots is scifi-brained idiocy. Requiring that the autopilots be exact copies is stupid (what if we had two different teams write different implementations? I think Airbus actually sort of did that).

    Nothing is going to be simulating anything, and, to make matters even worse for philosophers amateur and academic alike, the whole reason for the redundancy is that sometimes a glitch makes the channels not compute the same values, so any attempt to be clever with "ha, we just treat the copies as one thing" doesn't cut it either. (A sketch of the voter is below.)
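
    For reference, a minimal sketch of a 2-out-of-3 voter (assuming three channels emitting discrete commands; real voters also deal with analog values, timing, and fault reporting):

    ```python
    from collections import Counter

    def majority_vote(commands):
        """Return the command at least 2 of the 3 channels agree on,
        or None (declare a fault) on total disagreement."""
        value, count = Counter(commands).most_common(1)[0]
        return value if count >= 2 else None

    # A glitched channel is simply outvoted; no channel models the others.
    print(majority_vote(["climb", "climb", "descend"]))  # -> "climb"
    print(majority_vote(["climb", "hold", "descend"]))   # -> None (fault)
    ```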

  • Even to the extent that they are "prompting it wrong", it's still on the AI companies for calling this shit "AI". LLMs fundamentally do not even attempt to do cognitive work (the way a chess engine does by iterating over possible moves).

    Also, LLM tools do not exist. All you can get is a sales demo for the company's stock (the actual product being sold), built to create an impression of how close the company is to AGI. You have to creatively misuse these things to get any value out of them.

    The closest they get to tools is "AI coding", but even there, these things plagiarize code you don't even want plagiarized (because it's MIT-licensed and you'd rather keep up with upstream fixes).

  • But just hear me out: if you delete your old emails, you won’t be roped into paying for extra space, and Microsoft or Google will have a little less money to buy water with!

    Switch to Linux and avoid using any Microsoft products to conserve even more water.

  • TechTakes @awful.systems: Do leaders even believe that generative AI is useful?

  • TechTakes @awful.systems: Meta was "allegedly" seeding porn to speed up their book downloads.

  • SneerClub @awful.systems: We did it. 2 people and many boats problem is a classic now.

  • TechTakes @awful.systems: AI solves every river crossing puzzle, we can go home now

  • TechTakes @awful.systems: Google's Gemini 2.5 pro is out of beta.

  • TechTakes @awful.systems: Musk ("xAI") now claims grok was hacked

  • TechTakes @awful.systems: Gemini seem to have "solved" my duck river crossing, lol.

  • TechTakes @awful.systems: Gemini 2.5 "reasoning", no real improvement on river crossings.

  • SneerClub @awful.systems: Some tests of how much AI "understands" what it says (spoiler: very little)