  • Yeah. Microsoft is actually kind of the victim here, since they're investing both financially and materially in LLM hardware (and giving Altman and friends a massive discount on Azure resources) when the demand is really not materializing. Facebook went all-in on the metaverse and was eventually chastened for it, as much as an organization that size ever can be. Microsoft is doing the same with OpenAI, though with far more capital expended.

  • So if I follow: basically everyone involved has been banking on users getting confused about what the "legit" version of WordPress is, with known transphobic asshole photomatt being particularly egregious about the WordPress.com vs. WordPress.org confusion, and then known transphobic asshole photomatt remembered that he also had some more direct influence over WordPress.org that he could use to smite his enemies. Is that about right, or am I missing some steps?

  • I think it's also a case of thinking about form before function. It's not quite as bad a case as the metaverse nonsense was, but there's still a lack of curiosity about the sci-fi they read. In most stories that treat AI as anything less than a god, the replacement of people with artificial tools is about either what gets lost (the I, Robot movie, WALL-E) or the fact that effectively replacing people requires creating something with the same moral worth (Blade Runner, Asimov's original I, Robot collection, etc.).

  • So, to throw my totally-amateur two cents in: it seems like it's definitely part of the discussion in actual AI circles, based on the for-public-consumption reading and viewing I've done over the years, though I've never heard it mentioned by name. I think a bigger part of the explanation has less to do with human cognition (it's probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined. Our perception and model of a game of chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly defined rules of the game. If your opponent tries to work outside those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating.

    But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brains are clearly able to handle this somehow. That lack of boundedness requires different capabilities than just operating within expected parameters, the way existing English GenAI or image generators do; I suspect it relates to handling uncertainty or missing information. The assumption that what AI is doing mirrors the living mind is wholly unproven.

  • Yet another name for the good ol' rank-and-yank. Great way to instantly make number go up by suddenly laying off 10-20% of your employees. The trick is making sure you've moved on to another department or another company before the predictable consequences take hold.

  • Uber ran at a loss to undercut the competition (traditional taxis) and passed the costs of that onto the drivers. Then, once people were on board, it increased prices while hanging the drivers out to dry, to the point where the consumer now ultimately pays as much as they did for a normal taxi, but with some ease-of-use improvements from the app, a hell of a lot of money ending up in Silicon Valley instead of local taxi companies, and an ever-growing mass of human suffering as the gig economy erodes the ability of the working class to find economic security.

  • Like, there is definitely racism in the hiring process and in how writing is judged, but it comes from the fact that white people, and white people alone, don't have to code-switch in order to be taken seriously. The problem isn't that bad writers are discriminated against; it's that nonwhite people have to turn on their "white voice" in order to be recognized as good writers. Giving everyone a white robot that can functionally take their place doesn't actually make nonwhite people any more accepted. It's the same old bullshit about how anonymity means 4chan can't be racist.

    I'm actually pretty sympathetic to the value of even the most sneer-worthy technologies as accessibility tools, but that has to come with an acknowledgement of those tools' limitations, which is anathema to a rot economy trying to sell them as a panacea for every problem.

  • I'm still partial to "spicy autocomplete" as an analogy for how these systems actually work, since it's something people have more direct experience with. Take those Facebook posts that give you the first few words and ask "what does autocomplete say your most used words are?", then make answering the question use as much electricity as a small city. A rough sketch of the non-spicy version is below.
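
    To be clear about what the sketch assumes: this is a toy bigram frequency table over a made-up corpus, not anything a real LLM does internally. Real systems predict tokens with a neural network, but the "pick the likeliest next word" loop is the same shape.

    ```python
    from collections import Counter, defaultdict

    # Toy corpus standing in for "everything you've ever typed on your phone".
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # The whole "model" is a table counting which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def autocomplete(start, length=5):
        """Greedily pick the most common next word, like tapping the middle suggestion."""
        words = [start]
        for _ in range(length):
            options = following[words[-1]]
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(autocomplete("the"))  # -> "the cat sat on the cat"
    ```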

  • Basically, yeah. At my last job working in vendor support, the "customer success" team was entirely sales-focused. Support (as in "my product isn't working as expected, please help") was under a different department, which would sometimes get badgered by the customer success guys if it seemed like a case was making it harder to upsell, or if the customer's problem was that they wanted to do something their current purchase didn't cover.

  • The Zitron-pilled among us probably suspect that part of the real reason for this is, ironically, to obscure the fact that OpenAI has no real profits, given how ludicrously expensive its models are to train and operate and how limited the use cases people will actually pay for have proven to be. From a "getting investor money" perspective, it's better to have everyone talking about how terrible it is that investor profits are no longer capped for humanitarian reasons than to have more people asking whether we're getting close to the peak of this bubble.

  • Also, insert the obligatory rant about how LLMs don't actually know anything about diseases or symptoms. Like, if your training data was collected before 2020 you wouldn't have a single COVID case, but if you started collecting in 2020 you'd have a system that spat out COVID for a disproportionately large fraction of respiratory complaints (and probably several tummy aches and broken arms too, just for good measure). A toy illustration is below.
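
    To put made-up numbers on that (none of this is real clinical data, and the "model" here is just label counting, far cruder than an LLM): a system that has only learned the base rates of a 2020-heavy training set will guess COVID for everything.

    ```python
    from collections import Counter

    # Entirely made-up 2020-era training labels, skewed the way pandemic data would be.
    training_diagnoses = ["covid"] * 70 + ["flu"] * 15 + ["cold"] * 10 + ["broken arm"] * 5

    # A "model" that has learned nothing but the label frequencies of its training data.
    base_rates = Counter(training_diagnoses)
    most_likely = base_rates.most_common(1)[0][0]

    for complaint in ["cough", "fever", "tummy ache", "fell off a ladder"]:
        # With no actual understanding, the statistically safest guess is always the majority label.
        print(f"{complaint}: {most_likely}")
    # Every complaint comes back "covid", broken arms included.
    ```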