Stubsack: weekly thread for sneers not worth an entire post, week ending 29th December 2024
  • Past a certain point it's easier for people to cut the genAI out of the picture and just actually draw the shit they're being asked for rather than badger the machine into creating something satisfactory, at which point irony will be well and truly dead.

  • OpenAI's Latest Model Shows AGI Is Inevitable. Now What?
  • So the guys who have been burning almost as much VC money as they have water and electricity in the name of building AGI have announced that they're totally gonna do it this time? Just a few more training runs man I swear this time we're totally gonna turn everyone into paperclips just let me have a few more runs.

  • Now called The Astrofortress
  • Gotta be cheaper than buying new planes which would also have new engines. Generally there needs to be a pretty substantial increase in capability before it's worth retiring an existing platform, especially in a logistics role where you don't get as much benefit from the bleeding edge because nobody's supposed to be shooting at you in the first place.

  • Now called The Astrofortress
  • I think the missing piece here is that the B-52 isn't just a pretty good cargo hauler, it's a pretty good cargo hauler that we don't need to buy a whole new airframe to get. Think of it less as "we're commissioning these B-52s" and more as "hey look, we found a way to use all these B-52s we already had," except these just keep working forever.

  • Australia’s under-16 social media ban to use hand-waving to verify ages with AI
  • I mean, doesn't somebody still need to validate that those keys only get to people over 18? Either you have a decentralized authority that's more easily corrupted or subverted or else you have the same privacy concerns at certificate issuance rather than at time of site access.

  • Australia’s under-16 social media ban to use hand-waving to verify ages with AI
  • Why don't they just hire a wizard to cast an anti-tiktok spell over all of Australia instead? It would be just as workable and I know a guy who swears he can do it for cheaper than whatever server costs they're gonna try and push.

  • Anthropic and Apollo astounded to find that a chatbot will lie to you if you tell it to lie to you
  • Okay apparently it was my turn to subject myself to this nonsense and it's pretty obvious what the problem is. As far as citations go I'm gonna go ahead and fall back to "watching how a human toddler learns about the world" which is something I'm sure most AI researchers probably don't have experience with as it does usually involve interacting with a woman at some point.

    In the real examples that he provides, the system isn't "picking up the wrong goal" as an agent somehow. Instead it's seeing the wrong pattern: learning "I get a pat on the head for getting to the bottom-right-est corner of the level" rather than "I get a pat on the head when I touch the coin." These are totally equivalent in the training data, so it's not surprising that it goes with the simpler option that doesn't require recognizing "coin" as anything relevant. This failure state is entirely within the realm of existing machine learning techniques and models, because identifying patterns in large amounts of data is the kind of thing they're known to be very good at. But there isn't any kind of instrumental goal-setting happening here so much as the system recognizing that it should reproduce games where it moves in certain ways. (There's a toy sketch of the corner/coin ambiguity at the end of this comment.)

    This is also a failure state that's common in humans learning about the world, so it's easy to see why people think we're on the right track. We had to teach my little one the difference between "Daddy doesn't like music" and "Daddy doesn't like having the Blaze and the Monster Machines theme song shout/sung at him when I'm trying to talk to Mama." The difference comes in the fact that even as a toddler there's enough metacognition and actual thought going on that you can help guide them in the right direction, rather than needing to feed them a whole mess of additional examples and rebuild the underlying pattern.

    And the extension of this kind of pattern misrecognition into sci-fi end of the world nonsense is still unwarranted anthropomorphism. Like, we're trying to use evidence that it's too dumb to learn the rules of a video game as evidence that it's going to start engaging in advanced metacognition and secrecy.
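
    To make the corner/coin thing concrete, here's a toy version of the ambiguity. This is my own sketch, not code from the paper; the gridworld, the two hard-coded "policies," and the reward function are all invented for illustration:

    ```python
    # Two "learned goals" that are indistinguishable on the training data:
    # "go to the bottom-right corner" vs. "go to the coin." They only come
    # apart when the coin stops spawning in the corner at test time.

    GRID = 5  # 5x5 gridworld; (4, 4) is the bottom-right corner

    def corner_policy(coin):
        """The simpler pattern: always head for the bottom-right corner."""
        return (GRID - 1, GRID - 1)

    def coin_policy(coin):
        """The intended pattern: head for wherever the coin actually is."""
        return coin

    def reward(final_pos, coin):
        return 1 if final_pos == coin else 0

    # Training distribution: the coin always spawns in the corner,
    # so both policies get a pat on the head every single time.
    train_coin = (GRID - 1, GRID - 1)
    for policy in (corner_policy, coin_policy):
        print(policy.__name__, "train reward:", reward(policy(train_coin), train_coin))

    # Test distribution: move the coin, and the corner-seeker scores zero.
    # Same dumb pattern-matching failure, no secret agenda required.
    test_coin = (0, 3)
    for policy in (corner_policy, coin_policy):
        print(policy.__name__, "test reward:", reward(policy(test_coin), test_coin))
    ```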

  • UK government wants to give AI companies free access to train on your creative works
  • That's the goal. The reality is that it doesn't actually reproduce the skills it imitates well enough to give capital access to them, but it does a good enough job imitating them that they're willing to give it a chance.

  • FireWall as a Service?
  • I mean a lot of the services that companies are using are cloud-hosted, meaning that especially if you have branch offices or a lot of remote workers a normal firewall in the datacenter introduces an unnecessary bottleneck. Putting the logical edge of your organization's network in the cloud too makes sense from a performance perspective in that case, and then turning the actual firewalls into SaaS seems much less absurd.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd December 2024
  • Brief overlapping thoughts between parenting and AI nonsense, presented without editing.

    The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of "thinking" (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.

    Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include (rough numbers at the end of this comment). Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn't actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least I imagine the costs of doing so would work out to be higher than raising an actual human being and training them in the desired tasks.

    Human children are also not immune to "hallucinations" in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason a way to interact with the world based off of very little actual information about it. This feeds into both versions of the above problem, since the difference between reality and lies about reality cannot be meaningfully discerned from text alone and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a "Happy Birthday" sticker doesn't immediately make it their (or anyone else's) birthday.

    Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped funhouse-mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.

    IDK. Felt thoughtful, might try to organize it on morewrite later.
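
    Since I keep gesturing at information density, here's the kind of back-of-envelope arithmetic I mean. Every number below is my own rough assumption (a brisk reading speed, one commonly cited ballpark for retina throughput), so take it as an order-of-magnitude sketch rather than a measurement:

    ```python
    # Rough per-second comparison: information rate of reading English text
    # vs. raw sensory intake. All figures are assumed ballparks, not data.

    reading_wpm = 250            # brisk adult reading speed (assumption)
    bits_per_word = 5 * 8        # ~5 chars/word at 1 byte/char, pre-compression
    text_bits_per_sec = reading_wpm / 60 * bits_per_word

    retina_bits_per_sec = 10e6   # ~10 Mbit/s, one ballpark estimate for vision
                                 # alone; hearing, touch, etc. all add more

    print(f"text:   {text_bits_per_sec:,.0f} bits/s")    # ~167 bits/s
    print(f"vision: {retina_bits_per_sec:,.0f} bits/s")  # 10,000,000 bits/s
    print(f"ratio:  ~{retina_bits_per_sec / text_bits_per_sec:,.0f}x")  # ~60,000x
    ```

    Even granting text a generous compression advantage, the kid is drinking from a pipe four-plus orders of magnitude wider, and that's before counting any sense beyond vision.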

  • Do knee X-rays show if you drink beer? Medical AI versus algorithmic shortcutting
  • This is what the AI-is-useful-actually argument obscures. There are parts of this technology that can do legitimately cool things! Machine learning identifying patterns in massive volumes of data that would otherwise be impractical to analyze is really cool and has a lot of utility. But once you start calling it "Medical AI" then people start acting like they can turn their human brains off. "AI" as a marketing term is not a tool that can help human experts focus their own analysis or enable otherwise-unfeasible kinds of statistical analysis. Will Smith didn't get into gunfights with humanoid iMacs because they were identifying types of bread too effectively. The whole point is that it's supposed to completely replace the role of a person in the relevant situations.

  • "Sam Altman is one of the dullest, most incurious and least creative people to walk this earth."
  • I'm glad I'm not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of a wannabe like we keep getting makes sense (I too would prefer if the levers of power were wielded by someone halfway competent who listens to and cares about the people around them), but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and also some pretty strong reasons why the road there led through Hitler and Wilson.

    Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel like I should clarify that not only does the current crop of dolts not have it, but there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it's the "because I'm your Dad and I said so" for adults. Learning things is hard and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.

  • Molly White breaks down a "Kamala should go easy on Crypto" poll
    Annotated: Paradigm’s July 2024 Democratic Public Opinion Poll (www.mollywhite.net)

    I don't have much to add here, but I know when she started writing about the specifics of what Democrats are worried about being targeted for their "political views" my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
