
Posts 1 · Comments 1,041 · Joined 1 yr. ago

  • Counterpoint: to what extent are hyperkludges actually a unique thing, versus an aspect of how technologies and tools are integrated into a human context? Like, one of the original examples is the TCP/IP stack, but as anyone who has had to wrangle multiple vendors can attest, a lot of the value in that standardization necessarily comes from the network effects: the fact that it's an accepted standard. The web couldn't function if you had a bespoke protocol stack hand-made to elegantly handle the specific problems of a given application, not just because of the difficulty of building that much software (i.e. network effects on the design and construction side) but because of how unwieldy and impractical it would be to get any of those applications in front of people. The fit of those tools for a given application is secondary to how much more cleanly the entire ecosystem can operate because they are more limited in number.

    The OP also talks about how embedded the history of a given problem is in the solution, which feels like the central explanation for this trend. In that sense a hyperkludge isn't so much a unique pattern that some things fall into as a way of indicating a particularly noteworthy whorl in the fractal infinikludge that is all human endeavors.

  • I've watched a few of those "I taught an AI to play tag" videos from some time back, and while it's interesting to see what kinds of degenerate strategies the computer finds (trying to find a way out of bounds being a consistent favorite after enough iterations), it's always a case of "wow, I screwed up in designing the environment or rewards" and not "dang, look how smart the computer is!"

    As always with this nonsense, the problem is that the machine is too dumb to be trusted rather than too smart and powerful. Like, identifying patterns that people would miss is arguably the biggest strength of machine learning in general, but that's not the same as those patterns being meaningful or useful. A toy sketch of the kind of reward bug I mean is below.
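
    To be clear, this is a made-up illustration, not code from any of those videos: a reward function that forgets to penalize leaving the arena, so the "smartest" strategy a search can find is to run out of bounds. All the names and numbers are invented.

    ```python
    import random

    ARENA = 10  # the intended play area is the 0..9 square

    def reward(runner, tagger):
        # Intended reward: Manhattan distance from the tagger.
        # The bug: positions outside the arena still score, and score best,
        # because nothing clips or penalizes out-of-bounds coordinates.
        return abs(runner[0] - tagger[0]) + abs(runner[1] - tagger[1])

    tagger = (5, 5)
    candidates = [(random.randint(-5, ARENA + 4), random.randint(-5, ARENA + 4))
                  for _ in range(10_000)]
    best = max(candidates, key=lambda pos: reward(pos, tagger))
    in_bounds = 0 <= best[0] < ARENA and 0 <= best[1] < ARENA
    print(f"highest-reward position: {best}, in bounds: {in_bounds}")
    # The "winning" position is almost always out of bounds: the learner
    # looks clever, but the reward designer screwed up.
    ```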

  • This is my biggest gripe with that nonsense. If you make it hard to do something well, you won't end up with an elite cadre of uber-coders, because there aren't enough of those people to do all the programming that people want done. Instead you'll see that much more software engineering done really goddamned badly, and despite appearances at the time, it turns out there is a maximum amount of shitty software the world can endure.

  • Surely it's better to specify those defaults in the config file and have the system just fail if the necessary flags aren't present. Having worked in support, I can vouch for the amount of suffering that could be avoided if more systems actually failed when some important configuration isn't in place.
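
    For illustration, a minimal fail-fast config loader along those lines; the file name and key names here are all hypothetical:

    ```python
    import json
    import sys

    REQUIRED_KEYS = ["listen_port", "data_dir", "max_connections"]

    def load_config(path):
        with open(path) as f:
            config = json.load(f)
        missing = [key for key in REQUIRED_KEYS if key not in config]
        if missing:
            # Fail loudly at startup instead of limping along on defaults
            # nobody chose on purpose; support will thank you.
            sys.exit(f"refusing to start, missing config keys: {missing}")
        return config

    if __name__ == "__main__":
        load_config("service.json")
    ```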

  • While I think I get OP's point, I'm also reminded of our thread a few months back where I advised being polite to the machines just to build the habit of being respectful in the role of the person making a request.

    If nothing else you can't guarantee that your request won't be deemed tricky enough to deliver to a wildly underpaid person somewhere in the global south.

  • SomeBODY once told me

    The world was gonna roll me

    I'm only a stochastic parrot

    She was looking kinda dumb

    Drawing those extra thumbs

    And insisting that the L was on your head

    Well, the slop starts coming and it don't stop coming

    Steal all the books so you hit the ground running

    Didn't make sense but I still got funds

    Stole so much art but it still looks dumb

    So much to steal, not much for free

    So what's wrong with my copyright cheat

    You'll never know where your power flowed

    Just wait on my uranium glow!

    Hey now, you're a slop star

    Regulators get played

    Hey now, you're a great mark

    But Sam Altman got paid

    All that matters is growth

    And that journalists all get rolled

  • Actually, wait, it's even worse, because I'm terrible at reading logarithmic scales. It's roughly halfway between $1,000 and $10,000 on their log scale, which, if I do the math while actually awake, works out closer to $3,000.
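
    The midpoint on a log axis is the geometric mean rather than the arithmetic one, which you can sanity-check in a couple of lines of Python:

    ```python
    import math

    # Halfway between $1,000 and $10,000 on a log axis is 10^3.5:
    log_midpoint = 10 ** ((math.log10(1_000) + math.log10(10_000)) / 2)
    print(f"${log_midpoint:,.0f}")  # $3,162, i.e. roughly $3,000

    # versus the naive linear reading:
    print(f"${(1_000 + 10_000) / 2:,.0f}")  # $5,500
    ```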

  • Nobody outside the company has been able to confirm whether the impressive benchmark performance of OpenAI's o3 model represents a significant leap in actual utility or just a significant gap in the value of those benchmarks. However, they have released information showing that the most ostensibly powerful model costs orders of magnitude more to run. The lede is in that first graph, which shows that whatever the performance gain, o3 costs over $10 per request, with the headline-grabbing version costing $1,500 per request.

    I hope they've been able to identify a market willing to pay out the ass for performance that, even if it somehow isn't overhyped, is roughly equivalent to an average college graduate.

  • You could argue that another moral of Parfit's hitchhiker is that being a purely selfish agent is bad, and humans aren't purely selfish so it's not applicable to the real world anyway, but in Yudkowsky's philosophy—and decision theory academia—you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.

    I'm impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It's a manner of thinking that couldn't have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.

  • Oh no, I'm in this sketch and I don't like it. Or at least, I would be. The secret is to acknowledge your lack of background knowledge or basic grounding in what you're talking about and then blunder forward based on vibes and values, trusting that if you're too far off base on the details you'll piss off someone (sorry skillissuer) enough to correct you.

  • I think that's something to keep an eye on. The existence of the AI doom cult does not preclude there being good-faith regulations that can significantly reduce these people's ability and incentives to do harm. Indeed, the technology is so expensive and ineffective that if we can find a "reasonable compromise" plan to curb the most blatant kinds of abuse and exploitation, we could easily see the whole misbegotten enterprise wither on the vine.

  • "...according to my machine learning model we actually have a strong fit in favor of shooting at CEOs. There's a 66% chance that each shot will either jam or fail to hit anything fatal, which creates a strong Bayesian prior in favor, or at least merits collecting further data to scale our models"

    "What do you mean I've defined the problem in order to get the desired result? Machine learning process said we're good. Why do you hate the future?"

  • It's definitely linked to the problem we have with LLMs where they detect the context surrounding a common puzzle rather than actually doing any logical analysis. In the image case I'd be very curious to see the control experiment where you ask "which of these two lines is bigger?" and then feed it a photograph of a dog rather than two lines of any length (a sketch of that control is below). I'm reminded of how it was (is?) easy to trick ChatGPT into nonsensical solutions to any situation involving crossing a river because it pattern-matched to the chicken/fox/grain puzzle rather than considering the actual facts being presented.

    Also, now that I type it out, I think there's a framing issue with that entire illusion, since the question presumes that one of the two is bigger. But that's neither here nor there.
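
    If anyone wants to actually run that control, here's a rough sketch against a vision-capable chat API. The model name and image URL are placeholders, not a real experiment:

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Which of these two lines is bigger?"},
                # Control condition: the image contains no lines at all.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/dog-photo.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
    # If the model confidently picks a line, it's answering from the
    # question's framing, not from anything in the image.
    ```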

  • I think his criticism of the economics and business sense is pretty reasonable, even though he is definitely being pretty credulous about the capabilities of the underlying tech. One of the fun side effects of the diminishing returns in raw scaling is that the competitors are rapidly catching up with the capabilities of ChatGPT, which is going to be bad news for Saltman and the gang. What goes unaddressed is the bigger underlying problem: these systems don't actually do what they're being advertised for, and they burn an unsustainable and unconscionable amount of money (and actual resources, in case anyone forgot) to do it. That's going to be the difference between OpenAI merely falling apart as it's overtaken by another company with better monetization and the entire tech sector facing a recession, and I'm pretty sure the latter is more likely.

  • Unfortunately, actually working in bio/med didn't go well despite training for it aggressively and working her ass off, given that she graduated at the perfect time to compete for entry-level positions with recently laid-off people with 5+ years of experience. Between that and chronic illness in the family, with all the associated experience with the failings of our medical system, I'm actually pretty sympathetic to the biohackers from a purely ideological perspective, but these people are just begging for a disaster.

    Beyond reading through and enthusiastically agreeing with everything you had, she did say that if you're working on anything consumable or injectable, using 3D printed parts at all is going to be a red flag. Your first two pieces of equipment should be an autoclave and a fume hood, at which point you're better off working with all glass, for durability and not-melting reasons. Making your work space actually sterile and sufficiently free of contaminants to do any of this in the first place is also going to be a pain, require a lot of tape and curtains and the like, and probably not work as well as you'd want.

    Also, even working in a proper university lab with a fume hood and climate controls, you still get sufficiently different results from run to run that the mk1 eyeball is utterly insufficient for identification. You'll learn worse than nothing.

  • I read through this with my wife, who actually did a whole course of organic chemistry, pre-med, etc. Her reaction to your criticisms was largely "of course you would have to do that, and that'd be a pain in the ass, but it's definitely doable." And I feel like that's probably true, but at the same time, as a reasonably smart dude, this is the first time I've heard the majority of these words.

    It feels like they're reacting to the same tendency in tech culture that I've complained about before, where specialized knowledge and domain expertise are treated like arcane revelations from beyond. It's not that your average person is incapable of understanding or going through the relevant processes; Cs, as the saying goes, get degrees, and I'm sure many of the labs actually doing this work at scale are staffed largely by those C students. But it's also worth acknowledging that there is a level of investment in time and resources required to actually do it, and this kind of "you have nothing to lose but your pharmacological chains" rhetoric is dramatically underselling how difficult those processes are and how bad the consequences can be if you screw them up. Anyone who wants to try should first read The Iliad, the Oedipus cycle, and at least one other epic-sized meditation on hubris. And then, once you've forged ahead, read the relevant books on actually following the process, for the love of God.