Posts
1
Comments
1,060
Joined
1 yr. ago

  • Is this National Design Studio actually part of the federal government, though? Or is this a further collapsing of the distinction between state and enterprise? Because honestly I could totally buy members of this administration looking for ways to use copyright law to go after people who make parodies or otherwise use US iconography without toeing the party line. I'm doing my damnedest not to go full tinfoil hat with this shit, but it's proving so hard.

  • Having now read it (I have regrets), I think it's even worse than you suggested. He's not trying to argue that women are attracted to dangerous men in order to prevent the danger from happening to them. He assumes that, based on the "everyday experience" of how he feels when dealing with "high-status" men, and then tries to use that as an extension of and evidence for his base-level theory of how the brain does consciousness. (I'm not going to make the obvious joke about alternative reasons why he has the same feeling around certain men that he does around women he finds attractive.) In order to get there he has to assume that culture and learning play no role in what people find attractive, which is just absurd on its face and renders the whole argument not worth engaging with.

  • I’m assuming that certain pop-culture stereotypes, for example the idea that women tend to feel attraction towards taller men (other things equal), are indicative of timeless human universals, as opposed to being specific to my own culture

    lol. lmao.

    I wrote this post quickly and without thoroughly studying what people have historically written on this topic.

    What a coincidence! I read this post quickly and without thoroughly considering much of anything.

    I acknowledge that I haven’t provided any direct evidence here [...] But the former is at least an elegant story that fits in with other things I believe.

    This comes shockingly close to self-awareness.

  • I feel like this is some friggin' Kissinger "power is an aphrodisiac" nonsense. Which is hilarious because while yes Kissinger spent more time out on the town with beautiful women than you would expect for a Ben Stein-esque war criminal, when journalists at the time talked to those women they pretty consistently said that they enjoyed feeling like he respected them and wanted to talk about the world and listened to what they had to say. But that would be anathema to Rationalism, I guess.

  • Goddammit now I actually have to credit Gwern for something unambiguously positive in directing me to this story.

    I found myself appreciating it a lot even just on a relatively surface level. I must confess to having no experience with Proust or some of the other references it makes, but it sent my mind back to my own time in school and struck me with a very particular kind of social vertigo, thinking about all the people I vaguely knew but haven't spoken to or about since we were classmates. Like, people talk about the feeling that everyone around you is a full person with their own inner life and all that, and it feels similar to think how many people, especially in childhood, live their lives almost parallel to ours, intersecting only in passing.

    Also, given how many rationalists seem utterly convinced that many if not most people are just NPCs who don't meaningfully exist when "off screen," I'm not surprised that they're excited to have this mess of an interpretation that sidesteps that whole concept.

    Ed: Also, the illusion sucks.

  • You know, this whole conversation reminds me of a discussion of moderation policy from a gaming blog I used to read somewhat religiously. I think the difference in priorities is pretty significant. In Shamus' policy, the moderator's primary obligation is to the community as a whole: protect it from assholes and shitweasels. These people will try to use hard-and-fast rules against you to thwart your efforts, and so are best dealt with by a swift boot. If they want to try again they're welcome to set up a new account or whatever, and if they actually behave themselves then all the better. I feel like this does a far better job of creating a welcoming and inclusive community even when discussing contentious issues like the early stages of gamergate or the PC vs console wars. It also doesn't require David to drive himself fucking insane trying to build an ironclad legal case in favor of banning any particular Nazi, including nearly a decade of investigation and "light touch" moderation.

    Also in grabbing that link I found out that Shamus apparently died back in 2022. RIP and thanks for helping keep me from falling into the gamergate or Rationalist pipelines to fascism.

  • Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What's going to happen when they retire and there's nobody with that knowledge and experience to take their place? I'll have sold off my shares by then, I'm sure.

  • The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn't sound like it's actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you're editing chatbot output you're still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.

    In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.

  • I get the idea they're going for: that coding ability is a leading indicator of progress towards AGI. But even if you ignore how nonsensical the overall graph is, the argument itself still begs the question of how much actual ability it has to write code, rather than to spit out code-shaped blocks of text that happen to compile.

  • NANDA claims that agentic AI — or the thing of that name that they’re selling — will definitely learn real good without training completely afresh.

    Given their web3 roots, I feel like we should point out that blockchain storage systems are famously cheap and efficient to update and modify, so this claim actually seems perfectly reasonable to me /s.

    Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.

  • Oxford Economist in the NYT says that AI is going to kill cities if they don't prepare for change. (Original, paywalled)

    I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But at the same time, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, which are if anything more important than technological change alone.

    In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being "sprawling, unionized compounds" (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can't unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, one that had allowed them to support themselves and their families at a certain quality of life, was still gone.

    This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters then it becomes largely irrelevant whether LLM-based "AI" products and services can actually perform as advertised. Rather than being the central cause of this disruption it becomes the excuse, and so it just has to be good enough to create the narrative. It doesn't need to actually be able to write code like a junior developer in order to change the senior developer's job to focus on editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that it's not going to "snap back" when the AI bubble pops because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.

  • Buttcoin @awful.systems

    Molly White breaks down a "Kamala should go easy on Crypto" poll