Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 13 October 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
speaking of the Godot engine, here’s a layered sneer from the Cruelty Squad developer (via Mastodon):
image description
a post from Consumer Softproducts, the studio behind Cruelty Squad:
weve read the room and have now successfully removed AI from cruelty squad. each enemy is now controlled in almost real time by an employee in a low labor cost country
i wouldn't want to sound like I'm running down Hinton's work on neural networks; it's the foundational tool of much of what's called "AI", certainly of ML
but uh, it's comp sci which is applied mathematics
Don't know how much this fits the community, as you use a lot of terms I'm not immediately familiar with (is there a "welcome guide" of some sort somewhere I missed?).
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
I just... don't have words for how bad this is going to go. How much work this will inevitably be. At least we'll get a real world example of just how many guardrails are actually needed to make LLM text "work" for this sort of use case, where neutrality, truth, and cited sources are important (at least on paper).
I hope some people watch this closely, I'm sure there's going to be some gold in this mess.
the mozilla PR campaign to convince everyone that advertising is the lifeblood of commerce and that this is perfectly fine and good (and that everyone should just accept their viewpoint) continues
We need to stare it straight in the eyes and try to fix it
try, you say? and what's your plan for when you fail, but you've lost all your values in service of the attempt?
For this, we owe our community an apology for not engaging and communicating our vision effectively. Mozilla is only Mozilla if we share our thinking, engage people along the way, and incorporate that feedback into our efforts to help reform the ecosystem.
are you fucking kidding me? "we can only be who we are if we maybe sorta listen to you while we keep doing what we wanted to do"? seriously?
I could only find one positive response in the replies, and that one is getting torn to shreds as well:
I did also find a quote-tweet calling the current AI bubble an "anti-art period of time", which has been doing pretty damn well:
Against my better judgment, I'm whipping out another sidenote:
With the general flood of AI slop on the Internet (a slop-nami, as I've taken to calling it), and the quasi-realistic style most of it takes, I expect we're gonna see photorealistic art/visuals take a major decline in popularity/cultural cachet, with an attendant boom in abstract/surreal/stylised visuals.
On the popularity front, any artist producing something photorealistic will struggle to avoid blending in with the slop-nami, whilst more overtly stylised pieces stand out all the more starkly.
On the "cultural cachet" front, I can see photorealistic visuals coming to be seen as "techno-kitsch", a form of "anti-art" which suggests a lack of artistic vision/direction on its creators' part, if not a total lack of artistic merit.
They're basically admitting they didn't pay an influencer to spread misinformation about public wifi in order to sell VPN products; they just stole her likeness, used her photo, and attributed a completely made-up quote to her.
But it was a joke guys! We did a satire! I’m totally certain I know what satire is!
Another upcoming train wreck to add to your busy schedule: O’Reilly (the tech book publisher) is apparently going to be doing ai-translated versions of past works. Not everyone is entirely happy about this. I wonder how much human oversight will be involved in the process.
This is gut instinct, but I'm starting to get the feeling this AI bubble's gonna destroy the concept of artificial intelligence as we know it.
Mainly because of the slop-nami and the AI industry's repeated failures to solve hallucinations - both of those, I feel, have built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence™), no matter how many server farms you build or oceans of water you boil.
Additionally, I suspect that working on/with AI, or supporting it in any capacity, is becoming increasingly viewed as a major red flag - a "tech asshole signifier" to quote Baldur Bjarnason for the bajillionth time.
Eagan Tilghman, the man behind the slaughter animation, may have been a random indie animator who made Springtrapped on a shoestring budget with zero intention of making even a cent off it, but all those mitigating circumstances didn't save the poor bastard from getting raked over the coals anyway. If that isn't a bad sign for the future of AI as a concept, I don't know what is.
A cafe run by immortality-obsessed multi-millionaire Bryan Johnson is reportedly struggling to attract customers, with students at the crypto-funded Network School in Singapore preferring the hotel’s breakfast buffet over “bunny food.”
Plenty of agreement, but also a lot of "what is reasoning, really" and "humans are dumb too, so it's not so surprising GenAIs are too!". This is sure a solid foundation for multi-billion-dollar startups, yes sirree.
And on the subject of AI: strava is adding ai analytics. The press release is pretty waffly, as it would appear that they’d decided to add ai before actually working out what they’d do with it, so, uh, it’ll help analyse the reams of fairly useless statistics that strava computes about you and, um, help celebrate your milestones?
Pulling out a specific paragraph here (bolding mine):
I was glad to see some in the press recognizing this, which shows something of a sea change is underfoot; outlets like the Washington Post, CNN, and even Inc. Magazine all published pieces sympathizing with the longshoremen besieged by automation—and advised workers worried about AI to pay attention. “Dockworkers are waging a battle against automation,” the CNN headline noted, “The rest of us may want to take notes.” That feeling that many more jobs might be vulnerable to automation by AI is perhaps opening up new pathways to solidarity, new alliances.
To add my thoughts, those feelings likely aren't just that many more jobs are at risk than people thought, but that AI is primarily, if not exclusively, threatening the jobs people want to do (art, poetry, that sorta shit), and leaving the dangerous/boring jobs mostly untouched - effectively the exact opposite of the future the general public wants AI to bring them.
Many thanks to @blakestacey and @YourNetworkIsHaunted for your guidance with the NSF grant situation. I've sent an analysis of the two weird reviews to our project manager and we have a list of personnel to escalate with if we can't get any traction at that level. Fingers crossed that we can be the pebble that gets an avalanche rolling. I'd really rather not become a character in this story (it's much more fun to hurl rotten fruit with the rest of the groundlings), but what else can we do when the bullshit comes and finds us in real life, eh?
It WAS fun to reference Emily Bender and On Bullshit in the references of a serious work document, though.
Edit: So...the email server says that all the messages are bouncing back. DKIM failure?
Edit2: Yep, you're right, our company email provider coincidentally fell over. When it rains, it pours (lol).
Edit3: PM got back and said that he's passed it along for internal review.
And now, another sidenote, because I really like them apparently:
This is gut instinct like my previous sidenote, but I suspect that this AI bubble will cause the tech industry (if not tech as a whole) to be viewed as fundamentally indifferent to artists and fundamentally lacking in art skills/creativity, if not outright hostile to artists and incapable of making (or even understanding) art.
Beyond the slop-nami flooding the Internet with soulless shit that exists directly because of tech companies like OpenAI, it's also given us shit like:
And, because this is becoming so common, another sidenote from me:
With the large-scale art theft that gen-AI has become thoroughly known for, the way the AI slop it generates has frequently competed directly with the original work (Exhibit A), the solid legal case for treating the AI industry's Biblical-scale theft as copyright infringement, and the bevy of lawsuits that can and will end in legal bloodbaths, I fully expect this bubble will end up strengthening copyright law a fair bit, as artists and megacorps alike endeavor to prevent something like this ever happening again.
Precisely how, I'm not sure, but to take a shot in the dark I suspect that fair use is probably gonna take a pounding.
Amazon asked Chun to dismiss the case in December, saying the FTC had raised no evidence of harm to consumers.
ah yes, the company that's massively monopolized nearly all markets, destroyed choice, constantly ships bad products (whose existence is incentivised by programs of its own devising), and that has directly invested in enhanced price exploitation technologies? that one? yeah, totes no harm to consumers there
@BlueMonday1984 Yeah, I don't get it. If you want to be a "hacktivist", why not go after one of the MILLIONS of organizations making the planet a worse place?
Today's entry in the wordpress saga: seizing plugins from devs. The author of this one appears to be affiliated with wpengine, which possibly signals more events like this in the future.
We have been made aware that the Advanced Custom Fields plugin on the WordPress directory has been taken over by WordPress dot org.
A plugin under active development has never been unilaterally and forcibly taken away from its creator without consent in the 21 year history of WordPress.
This week's Mystery AI Hype Theater 3000 really hit home. It's about a startup trying to sell "The AI Scientist." It even does reviews!
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”
Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.
Alex and Emily explain why so-called “fully automated, open-ended scientific discovery” can’t live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!