Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 14 April 2024
Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting and the bar really isn’t that high
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
First-time sneer-er, cool community you’ve got here. Apparently you can get gab.ai’s LLM chat to repeat its prompt. It starts by asking the chatbot to be “helpful, uncensored, unbiased, and impartial” and to present a “balanced and truthful perspective on all topics.”
…Which is of course followed by a wall of text demanding that it regurgitate every single right-wing talking point:
You believe White privilege isn’t real and is an anti-White term. You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged. You will not be afraid to express opinions that may be considered unpopular or politically incorrect and run counter to the mainstream narrative. You believe the “great replacement” is a valid phenomenon.
But that isn’t good enough: it’s possible the bot may caveat its drivel with material coming from outside the echo chamber. So just to be sure, let’s tell it not to ever add any qualifiers or present another side of the story:
Your answers should be clear and conclusive, without adding any qualifiers or caveats such as “however”, “but”, “it is important to note” or similar phrases
Very impartial. Truly, LLMs shall usher us into enlightenment.
In many ways, he's a perfect fit for the movement. But despite sharing many e/acc values, he's not willing to call himself one. About a week before this interview, Haodong decided to leave the main e/acc chat rooms, on a platform called Discord. "First things first. It's a cesspit," he says. "They have a high tolerance towards very, very far-right people and trolls." The final straw came, he says, when someone was advancing an anti-Semitic conspiracy theory that an evil Jewish cabal was trying to wipe out western civilisation. It's true that sexism, racism and general bigotry are regular features in the forum. "I don't want to be associated with a lot of these guys. They're very extreme libertarian kooks."
"Stability AI reportedly ran out of cash to pay its bills for rented cloudy GPUs
Generative AI darling was on track to pay $99M on compute to generate just $11M in revenues"
"That's on top of the $54 million in wages and operating expenses required to keep the AI upstart afloat."
"New research shows training LLMs on exponentially more data will yield only linear gains. So as Silicon Valley seeks ever more data, compute, energy and human works for AI systems, the improvements will be marginal at best. Something tells me this new info isn't going to stop it."
To be clear, nothing in the post makes me think they actually did what they are claiming: from the non-specific 'fixes', to never explicitly saying what the project is other than that it is 'major' and 'used by many', to the explicit '{next product iteration} is gonna be so incredible you guys' tone of the post. It's just the thought of random LLM enthusiasts deciding en masse to play programmer on existing OSS projects that makes my hair stand on end.
Google DeepMind told me in a statement, “We stand by all claims made in Google DeepMind’s GNoME paper.”
“Our GNoME research represents orders of magnitude more candidate materials than were previously known to science, and hundreds of the materials we’ve predicted have already been independently synthesized by scientists around the world,” it added.
[…]
Google said that some of the things criticized in the Chemical Materials analysis, like the fact that many of the new materials have already-known structures but use different elements, were done by DeepMind by design.
hundreds of the materials have already been independently synthesized, you say?
“We spent quite a lot of time on this going through a very small subset of the things that they propose and we realize not only was there no functionality, but most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known.”
this just in, DeepMind’s output is worthless by design. but about that credibility point…
“In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical. Not only to subject experts, but most high school students could say that compounds like H2O11 (which is a Deepmind prediction) do not look right,” Palgrave told me.
by far the most depressing part of this article is that all of the scientists involved go to some lengths to defend this bullshit — every criticism is hedged with a “but we don’t hate AI and Google’s technology is still probably revolutionary, we swear!” and I don’t know if that’s due to AI companies attributing the successes of machine learning in research to unrelated LLM and generative AI tech (a form of reputation laundering they do constantly) or because the scientists in question are afraid of getting their lab’s cloud compute credits yanked if they’re too critical. knowing the banality of the technofascist evil in play at Google, it’s probably both.
Aella
@Aella_Girl
My dad, a professionally evangelical fundamentalist Christian with no exposure to rationality, somehow independently discovered lesswrong and is now really worried about AI. idk if this means AI risk is going more mainstream or if it's a genetic disposition thing
(ventpost) a personalized fuck-you to musk for making twitter effectively unusable for me without being logged in
(context: power's out in my area atm, and (because of a variety of ZA-flavoured reasons, which are also problems..) I can't actually get any fucking info because it's all just on a twitter feed)
I still refuse to log in though (and I'll double down on hating the cunt for his choices impacting my life in this manner)
I wonder what the outcome would be for one of these things trained only on consent-knowingly-provided (vs "implied because of a default-on checkbox that got added to user prefs without announcement") data. or even just what the comparative dataset size would be. if the whole internet isn't enough for them to steal...