Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
From linkedin, not normally known as a source of anti-ai takes, so that’s a nice change. I found it via bluesky, so I can’t say anything about its provenance:
We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs... and one in particular.
The average Founder CEO.
Before you walk away in disbelief, look at what LLMs are already capable of doing today:
They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
They regurgitate material they read somewhere online without really understanding its meaning.
They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they're trying to sell you.
They are heavily influenced by the last conversations they had.
They contradict themselves, pretending they aren't.
They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
They can make pretty slides in high volumes.
They're very good at consuming resources, but not as good at turning a profit.
occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to big scare quotes "work." if you think an all powerful ai within your lifetime is likely you can reduce to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless
Still frustrated over the fact that search engines just don't work anymore. I sometimes come up with puns involving a malapropism of some phrase and I try and see if anyone's done anything with that joke, but the engines insist on "correcting" my search into the statistically more likely version of the phrase, even if I put it in quotes.
Also all the top results for most searches are blatant autoplag slop with no informational value now.
Siskind appears to be complaining about leopards gently nibbling on his face on main this week, the gist being that tariff-man is definitely the far-right's fault and it would surely be most unfair to heap any blame on CEO-worshiping reactionary libertarians who think wokeness is on par with war crimes while being super weird with women and suspicious of scientific orthodoxy (unless it's racist), and who also comprise the bulk of his readership.
The slatestarcodex is discussing the unethical research performed on changemyview. Of course, the most upvoted take is that they don't see the harm or why it should be deemed unethical. Lots of upvoted complaints about IRBs and such. It's pretty gross.
A dimly flickering light in the darkness: lobste.rs has added a new tag, "vibecoding", for submissions related to the use of "AI" in software development. The existing tag "ai" is reserved for "real" AI research and machine learning.
“A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.”
First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best.
Lots of credit given to 👼🎺 Free Market Capitalism 👼🎺, zero credit given to open web standards, open source contributions, or the fact that the codebase has a lineage going back to 1997 KDE code.
Another great piece from Brian Merchant. I've been job seeking in network engineering and IT support and had naively assumed that companies would consider the stakes of screwing up infrastructure too high to take risks with the bullshit machine, but it definitely looks like the trend he described of hiring fewer actual humans is at play here, even as the actual IT infrastructure gets bigger and more complex from people integrating this shit.
Instead of this expensive imitation of a voigt-kampff test I would suggest an alternative method of detecting if a personoid is really a human or an instrument of an evil inhuman intelligence that wishes to consume all of earth: check if their net worth is closer to a billion dollars than it is to being broke.
Aw man, Natasha Lyonne is going AI. Notably she is partnering with a company, Moonvalley, that claims to have developed an “ethical” model, Marey, trained on “clean” data, i.e. data that is owned or licensed by Moonvalley and whoever else they are partnering with.
So it's not quite a sneer, but I could use some help from the collective sneer braintrust, if you're willing to indulge me.
I work for one of those horrible places that is mainlining its own AI koolaid as hard as it can. It has also begun doing layoffs, inspired in part by the "AI can do this now instead!" delusion. Now, I am in no way in love with my job nor the sociopaths I labor for, and it's clear to me the feeling is mutual, but I am cursed with the affliction of needing to eat and pay for housing. I am also at a significant structural disadvantage in the job market compared to others, which makes things more difficult.
In an executive's recent discussions with another company's senior executive, my complicated, unglamorous and hugely underestimated small tech niche was raised as one of the areas they've swapped out for AI "with great success". I happen to know this other company has no dedicated resource for my niche and therefore is unlikely to be verifying their swap actually works, but it will have the superficial appearance of working. I know they have no dedicated resources because they are actively hiring their first staff member for this niche and said so in a recent job advertisement.
Myself and my fellow niche serfs have been asked to put together a list of questions for this other company, and the intent is clearly a thin veil to have us justify our ability to eat. We've been highlighted this time, but it's also clear other areas are receiving similar requests and pressure.
If you had to put questions to a tech executive from a company that is using AI to merely pretend to fix a tech niche (but who likely believes they are doing more than superficially fixing it, and can convince other ignorant and gullible executives of the same), what would you ask?
Laughing at "AI" boosters worrying "vibe coding" is becoming synonymous with "AI coding". Tech is vibes througout[sic]. Vibe management. Vibe strategy. Vibe design. Coding has been a garbage fire for decades and, yeah it’s a vibe-based pop culture from top to bottom and has only been getting worse
Code that does what the end user wants is already the exception. Software is managed on vibes throughout. Anybody who goes huffy because the field OVERWHELMINGLY responds to "vibe coding is using AI to create code that you don't care about" with "so all coding, gotcha!" has not been paying attention
"Vibe coding is all AI coding" feels true to most because not caring about what happens after it's pushed to the final victim is already the norm. The only change from adopting "AI" is they now have the freedom to no longer care about what happens BEFORE as well.
“Not everybody in software dev is like that! Some coders genuinely care and put in the work needed to make good software”
True, but I feel confident in saying that next to none of those are leaning hard into “AI coding”
The target market for “AI” is SPECIFICALLY people who don’t care
On a personal sidenote, I expect "vibe coding" will stick around as a pejorative after the AI bubble bursts - "AI" has already become synonymous with "zero-effort, low-quality garbage" in the public eye, so re-using "vibe code" to mean "crapping out garbage" isn't gonna be a difficult task, linguistically speaking.
plex has decided that it is no longer worth maintaining merely a partial tax on those who "want more", and has decided to continue on their path to becoming the bridgetroll:
As of April 29, 2025, we’re changing how remote streaming works for personal media libraries, and it will no longer be a free feature on Plex. Going forward, you’ll need a Plex Pass, or our newest subscription offering, Remote Watch Pass, to stream personal media remotely.
As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits. Not sure which option is best for you? Check out our plans below to learn more.
got your own server and networking and happy to stream from your home setup? no more. the extractivist rentlords are hungry.
Stumbled across a piece from history YTer The Pharaoh Nerd publicly ripping into AI slop, specifically focusing on AI-generated pseudohistory found on YouTube Shorts.
A morewronger discusses the "points system" implemented by the Ukrainian armed forces, where soldiers can spend points earned by destroying Russian targets on new drone hardware.
Lots of wittering about markets and Goodhart's law, but what struck me was
Now, this is clearly a repugnant market. Repugnant market is a market where some people would like to engage in it and other people think they shouldn’t. (Think market in human kidneys. Or prostitution. Or the market in abortions. [...])
From: The Sentry Team <noreply@sentry.io>
Subject: Update to Sentry’s List of Subprocessors
...
- Google LLC (Google Cloud Services) and OpenAI, L.L.C. are now reflected as subprocessors for all Sentry products, instead of select features only; and
- Anthropic, PBC is now added as a subprocessor.
sigh
they do still have user-specified controls (and appear to respect them), but... sigh
"You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," said Chhabria to Meta's attorneys in a San Francisco court last Thursday.
"You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person… I just don't understand how that can be fair use."
The judge himself does seem unconvinced about the material cost of Facebook's actions, however:
"It seems like you're asking me to speculate that the market for Sarah Silverman's memoir will be affected by the billions of things that Llama [Meta's AI model] will ultimately be capable of producing," said Chhabria.
trying to follow up on shillrinivasan's pet project, and it's ... sparse
that "opening ceremony" video which kicked around a couple weeks ago only had low 10s of people there, and this post (one of the few recent things mentioning it that I could find) has photos with a rather stark feature: not a single one of them showing people engaged in Doing Things. the frontpage has a different photo, and I count ~36 people there?
as a thing both parallel and tangent to usual sneerjects, this semafor article is kinda notable
I'll try gather previous dm sneers here later, but some things that stood out:
the author writes about groupchats in the most goddamn abstract way possible, as though they're immensely surprised
the subject matter acts as hard confirmation/evidence of the observed lockstep over the last few years among so many of the worst fuckers around
the author then later goes "oh yeah but no I've actually done this and been burned by it", so I'm just left thinking "skill issue" (and while I say that curtly, I will readily be among the first people to get extremely vocal about the ways a lot of this tech falls short of its purpose)
Also I find this quote interesting (emphasis mine):
He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT.
I would absolutely believe that this is the case, especially if like Sem you have a sufficiently uncommon name that the model doesn't have a lot of context and connections to hang on it to begin with.
Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.
Also, on people:
I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.