Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Way down in the linked wall o' text, there's a comment by "Chaotic Enby" that struck me:
Another summary I just checked, which caused me a lot more worries than simple inaccuracies: Cambrian. The last sentence of that summary is "The Cambrian ended with creatures like myriapods and arachnids starting to live on land, along with early plants.", which already sounds weird: we don't have any fossils of land arthropods in the Cambrian, and, while there has been a hypothesis that myriapods might have emerged in the Late Cambrian, I haven't heard anything similar being proposed about arachnids. But that's not the worrying part.
No, the issue is that nowhere in the entire Cambrian article are myriapods or arachnids mentioned at all. Only one sentence in the entire article relates to that hypothesis: "Molecular clock estimates have also led some authors to suggest that arthropods colonised land during the Cambrian, but again the earliest physical evidence of this is during the following Ordovician". This might indicate that the model is relying on its own internal knowledge, and not just on the contents of the article itself, to generate an "AI overview" of the topic instead.
Further down the thread, there's a comment by "Gnomingstuff" that looks worth saving:
There was an 8-person community feedback study done before this (a UI/UX test using the original Dopamine summary), and the results are depressing as hell. The reason this was being pushed to prod sure seems to be the cheerleading coming from 7 out of those 8 people: "Humans can lie but AI is unbiased," "I trust AI 100%," etc.
Perhaps the most depressing is this quote -- "This also suggests that people who are technically and linguistically hyper-literate like most of our editors, internet pundits, and WMF staff will like the feature the least. The feature isn't really "for" them" -- since it seems very much like an invitation to ignore all of us, and to dismiss any negative media coverage that may ensue (the demeaning "internet pundits").
Sorry for all the bricks of text here, this is just so astonishingly awful on all levels and everything that I find seems to be worse than the last.
Another comment by "CMD" evaluates the summary of the dopamine article mentioned there:
The first sentence is in the article. However, the second sentence mentions "emotion", a word that, while present in a couple of reference titles, isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article, in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to actually be in the lead.
So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread LLM. What it definitely doesn't seem to be doing is taking existing article text and simplifying it.
So, I've been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noticed is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act as if of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].
Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all; the LLM component replaces a set of heuristics that make suggestions on proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.
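Since I'm apparently going to be re-explaining this forever anyway: the division of labour being conflated can be sketched in a few lines (a toy illustration of the propose/verify split, not AlphaGeometry's actual architecture; all names and rules here are made up). The verifier applies rigid rules; the suggester, which is the only slot the LLM fills, merely ranks what to try next:

```python
# Toy propose/verify loop. The "suggester" stands in for the LLM/heuristic
# component; the rigid RULES table stands in for the symbolic engine.
# A bad suggester can slow the search down but never yields a wrong "proof",
# because every step is checked deterministically.

RULES = {
    "double": lambda x: x * 2,
    "inc": lambda x: x + 1,
}

def naive_suggester(state, target):
    # Stand-in for the LLM: just proposes an ordering of rules to try.
    return ["double", "inc"] if state < target else []

def prove(start, target, suggester, max_steps=20):
    """Greedy bounded search from start to target using only verified steps."""
    state, proof = start, []
    for _ in range(max_steps):
        if state == target:
            return proof
        for name in suggester(state, target):
            nxt = RULES[name](state)   # deterministic, verified step
            if nxt <= target:          # symbolic check prunes overshoots
                state, proof = nxt, proof + [name]
                break
        else:
            return None                # suggester out of ideas
    return None
```

Swap in a fancier suggester and nothing about soundness changes, which is the whole point: the LLM part proposes, the symbolic part disposes.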
I question whether or not some of these commenters have a theory of mind. The product under discussion is a horror show of reified solipsism. For the commenters, books are merely the written form of the mouth noises they use to get other meat robots to do things and which are sometimes entertaining when piled up in certain ways.
You cannot stop people from making the world worse or better. The best you can do is focus on your own life.
In time many will say we are lucky to live in a world with so much content, where anything you want to see or read can be spun up in an instant, without labor.
And though most will no longer make a living doing some of these content creation activities by hand and brain, you can still rejoice knowing that those who do it anyway are doing it purely for their love of the art, not for any kind of money. A human who writes or produces art for monetary reasons is only just as bad as AI.
I don't have the headspace to sneer it properly at this moment, but this article fucking goes places
might even be worthy of its own techtakes post
Shawn Schneider — a 22-year-old who dropped out of his Christian high school, briefly attended community college, dropped out again, and earlier this year founded a marketing platform for generative AI — tells me college is outdated. Skipping it, for him, is as efficient as it is ideological. "It signals DEI," he says. "It signals, basically, woke and compromised institutions. At least in the circles I run in, the sentiment is like they should die."
Schneider says the women from his high school in Idaho were "so much better at doing what the teacher asks, and that was just not what I was good at or what the other masculine guys I knew were good at." He's married with two children, a girl and a boy, which has made him realize that schools should be separated by gender to "make men more manly, and women more feminine."
Turns out some Silicon Valley folk are unhappy that a whole load of Waymos got torched, fantasised that the cars could just gun down the protesters, and used genai video to bring their fantasies to some vague approximation of “life”.
Hi everyone - for context, I’m an AI Program Manager at J&J and have been using Cursor for personal projects since March.
Yesterday I was migrating some of my back-end configuration from Express.js to Next.js and Cursor bugged hard after the migration - it tried to delete some old files, didn’t work the first time, and then ended up deleting everything on my computer, including itself. I had to use EaseUS to try to recover the data, but that didn’t work very well either. Luckily I always have everything on my Google Drive and GitHub, but it still scared the hell out of me.
Now I’m allergic to YOLO mode and won’t try it again anytime soon. Has anyone had an issue similar to this, or am I the first one to have everything deleted by AI?
The response:
Hi, this happens quite rarely but some users do report it occasionally.
My T-shirt is raising questions already answered, etc.
ran across this, just quickly wanted to scream infinitely
(as an aside, I've also recently (finally) joined the ACM, and clicking around in that has so far been .... quite the experience. I actually want to make a bigger post about it later on, because it is worth more than a single-comment sneer)
10 weeks of 100h work weeks so you can have a 98% (publicly disclosed) chance of being denied a Golden Ticket to the AI factory.
This is very weird but not particularly notable, other than that these guys were apparently YC-funded in 2017, and I couldn't find anything about the company in the directory: https://www.ycombinator.com/companies?batch=Summer+2017... until I looked at the CEO's name. Lambda School/Bloom Institute/GauntletAI's latest pivot is asking for 1000 hours of voluntary unpaid labour.
e: apologies for the vaguepost, I could've sworn I put down the name, but in any case, it's the return of PG's favourite guy not named Sam Altman, Austen Allred
i googled for discussion around how a VPN can protect (or not) against a MITM attack, and came across this:
We are a small team of men trained through stoicism, currently, as newcomers to cybersecurity, we’ve taken the biggest risk by betting everything on ourselves and the leverage we can gain by sacrificing everything that is not essential.
and while the technical parts seem fine based on a surface-reading, this thick as molasses STOIC MANLINESS of their red-teaming is the silliest shit ever
(ps: read their website in the voice of foghorn leghorn, it's pretty fun)
https://lemmy.ml/post/31490862 pretty interesting article linked in this post. tl;dr: researchers tried to get AI agents to run a simulated vending machine (which, let's be clear, is a solved problem and can be done better and cheaper with a normal algorithm) and it didn't go that great. Even when some of the test runs actually managed to earn money, they mostly devolved into the AI becoming convinced that the system doesn't work and desperately trying to email someone about it (even the FBI, one memorable time). I think it illustrates quite well just how badly things would go if we left anything to AI agents. What are the odds anyone involved with pushing autoplag into everything actually reads this, though...
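For anyone doubting the "normal algorithm" bit: the whole business logic is a classic (s, S) reorder rule plus fixed-markup pricing, a handful of deterministic lines. This is my own toy sketch with made-up thresholds, not anything from the paper's benchmark:

```python
# Minimal vending-machine "business logic": reorder when stock dips below
# a threshold, price at a fixed markup. All numbers are hypothetical.

def restock_order(stock, reorder_point=5, restock_to=20):
    """For each item below reorder_point, order enough to reach restock_to."""
    return {
        item: restock_to - qty
        for item, qty in stock.items()
        if qty < reorder_point
    }

def price(unit_cost, margin=0.5):
    # Fixed 50% markup, rounded to cents. No emails to the FBI required.
    return round(unit_cost * (1 + margin), 2)
```

Runs the same way every day, never becomes convinced the universe is broken, and costs approximately nothing per decision.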
And back on the subject of builder.ai, there’s a suggestion that it might not have been A Guy Instead, and the whole 700 human engineers thing was a misunderstanding.
I’m not wholly sure I buy the argument, which is roughly
people from the company are worried that this sort of news will affect their future careers.
humans in the loop would have exhibited far too high latency, and getting an llm to do it would have been much faster and easier than having humans try to fake it at speed and scale.
there were over a thousand “external contractors” who were writing loads of code, but that’s not the same as being Guys Instead.
I guess the question then is: if they did have a good genai tool for software dev… where is it? Why wasn’t Microsoft interested in it?
Just watched MI: Final Reckoning. Spoiler-free comments: I didn’t know that this and the previous film featured an AI-based plot. AI doomers feature in a funny way, seemingly inspired by LW doomers, though definitely not.