Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Urgh over the past month I have seen more and more people on social media using chat-gpt to write stuff for them, or to check facts, and getting defensive instead of embarrassed about it.
Maybe this is a bit "old woman yells at cloud" -- but I'd be lying if I said I wasn't worried about language proficiency atrophying in the population (and leading to me having to read slop all the time)
Our subjects here at awful systems can make us angry. They can spend lots of money to make everything worse. They can even make us dead if things go really off the rails, but one thing they can never do is make us take them seriously.
I uh... I don't think it's going to change anyone's minds. Half the comments on the videos go something like:
EVERYONE LISTEN UP!!!! 🚨 - starting from today, we are gonna start ignoring duolingo. We will not like the video it posts, or view it. - BASICALLY WE WILL IGNORE DUO!!💔 💔 ON EVERYBODY SOUL WE IGNORING DUO!! 💔 (copy this and share this to every duo related video)
Just thinking about how I watched “Soylent Green” in high school and thought the idea of a future where technology just doesn’t work anymore was impossible. Then LLMs come and the first thing people want to do with them is to turn working code into garbage, and then the immediate next thing is to kill living knowledge by normalising people relying on LLMs for operational knowledge. Soon, the oceans will boil, agricultural industries will collapse and we’ll be forced to eat recycled human. How the fuck did they get it so right?
On a Sci Fi authors’ panel at Comicon today, every writer asked about AI (as in LLM / algorithmic modern gen AI) gave it a kicking, drawing a spontaneous round of applause.
A few years ago, I don’t think that would have happened. People would have said “it’s an interesting tool”, or something.
Bearing in mind these are exactly the people who would be expected to engage with the idea, I think the tech turds have massively underestimated the propaganda faux pas they made by stealing writers’ hard work and then being cunts about it.
Tying this to a previous post of mine, I'm expecting their open and public disdain for gen-AI to end up bleeding into their writing. The obvious route would be AI systems/characters exhibiting the hallmarks of LLMs - hallucinations/confabulations, "AI slop" output, easily bypassable safeguards, that sort of thing.
You want my opinion on the "scab" comment: it's another textbook example of the all-consuming AI backlash, one that suggests any usage of AI will be viewed as an open show of hostility towards labour.
You want my take, this is gonna be a tough case for SAG - Jones signed off on AI recreations of Vader before his death in 2024, so arguing a lack of consent's off the table right from the get-go.
If SAG do succeed, the legal precedent set would likely lead to a de facto ban on recreating voices using AI. Given SAG-AFTRA's essentially saying that what Epic did is unethical on principle, I suspect that's their goal here.
Yeah, the record has stood for "FIFTY-SIX YEARS" if you don't count all the times the record has been beaten since then. Indeed, "countless brilliant mathematicians and computer scientists have worked on this problem for over half a century without success" if you don't count all the successes that have happened since then. The really annoying part about all this is that the original announcement didn't have to lie: if you look at just 4x4 matrices, you could say there technically hasn't been an improvement since Strassen's algorithm. Wow! It's really funny how these promptfans ignore the enormous number of human achievements in an area when they decide to comment about how AI is totally gonna beat humans there.
How much does this actually improve upon Strassen's algorithm? The matrix multiplication exponent given by Strassen's algorithm is log4(49) (i.e. log2(7)), and this result would improve it to log4(48). In other words, it improves from 2.81 to 2.79. Truly revolutionary, AGI is gonna make mathematicians obsolete now. Ignore the handy dandy Wikipedia chart which shows that this exponent was ... beaten in 1979.
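For anyone who wants to sanity-check the arithmetic: a rank-r bilinear algorithm for multiplying n×n matrices, applied recursively, gives a matrix multiplication exponent of log_n(r). A quick sketch in Python (the Strassen and AlphaEvolve figures are the ones quoted above):

```python
import math

def mm_exponent(n: int, r: int) -> float:
    """Exponent omega = log_n(r) for an n x n scheme using r multiplications."""
    return math.log(r) / math.log(n)

# Strassen (1969): 2x2 matrices with 7 scalar multiplications
strassen = mm_exponent(2, 7)      # log2(7), same as log4(49)
# AlphaEvolve: 4x4 matrices with 48 scalar multiplications
alphaevolve = mm_exponent(4, 48)  # log4(48)

print(f"Strassen:    {strassen:.3f}")     # ~2.807
print(f"AlphaEvolve: {alphaevolve:.3f}")  # ~2.793
```

So the improvement really is 2.81 to 2.79, as stated, and nowhere near the best known exponents from the decades of work the announcement ignores.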
I know far less about how matrix multiplication is done in practice, but from what I've seen, even Strassen's algorithm isn't useful in applications because memory locality and parallelism are far more important. This AlphaEvolve result would represent a far smaller improvement (and I hope you enjoy the pain of dealing with a 4x4 block matrix instead of 2x2). If anyone does have knowledge about how this works, I'd be interested to know.
Instead, Defendant appears to have wholly invented case citations in his Motion for Leave, possibly through the use of generative artificial intelligence
Defendant bolstered this assertion with a lengthy string citation of legal authority and parentheticals that appeared to support Defendant’s proposition. But the entire string citation appears to have been made up out of whole cloth.
Double quick ssc sneer: why is he platforming a groyper without mentioning it's a groyper? Does he even know what groypers are? Why is he acting like this is an honest question and not an attempt to create hierarchies and unpersons? Why are people confusing remembering being conscious with remembering things at all? Why are people confusing consciousness with agency? Why do I hear the "empathy removal training center" warning sound again? And the most personal question: people remember not being conscious? But yes, wtf Scott, how do you even find these groypers?
Anybody have the current count on platforming neonazis vs platforming sneerclubbers?
Sorry if this doesn't fit the thread. Just came across this non-profit called 80,000 hours, after they sponsored NotJustBikes. It "provides free career advice for finding a meaningful career that can help you make a positive impact on the world," which actually sounds nice, but then I realised that they're talking about AI risk and that this comes from the TESCREAL corner.
Seeing a lot of talk about OpenAI acquiring a company with Jony Ive and he's supposedly going to design them some AI gadget.
Calling it now: it will be a huge flop. Just like the Humane Pin and that Rabbit thing. Only the size of the marketing campaign, and maybe its endurance due to greater funding, will make it last a little longer.
It appears that many people think that Jony Ive can perform some kind of magic that will make a product successful, I wonder if Sam Altman believes that too, or maybe he just wants the big name for marketing purposes.
Personally, I've not been impressed with Ive's design work in the past many years. Well, I'm sure the thing is going to look very nice, probably a really pleasingly shaped chunk of aluminium. (Will they do a video with Ive in a featureless white room where he can talk about how "unapologetically honest" the design is?) But IMO Ive long ago lost touch with designing things to be actually useful; at some point he went all in on heavily prioritizing form over function (or maybe he always did, I'm not so sure anymore). Combine that with the overall loss of connection to reality from the AI true believers and I think the resulting product could turn out to be actually hilarious.
The open question is: will the tech press react with ridicule, like it did for the Humane Pin? Or will we have to endure excruciating months of critihype?
I guess Apple can breathe a sigh of relief though. One day there will be listicles for "the biggest gadget flops of the 2020s", and that upcoming OpenAI device might push Vision Pro to second place.
It looks like twitter was down for 2.5h. [hn] Previously there was a fire in a twitter server farm which caused some availability issues. Gee, I wonder how well those gas turbines are serviced
I watched most of the first season of Silicon Valley with a friend who recommended it to me. I liked it. Every single character with a speaking role so far (with the possible exception of an exotic dancer and a graffiti artist) deserves death by nuclear weaponry. SFBA delenda est.
NASB: A question I asked myself in the shower: “Is there some kind of evolving, sourced document containing all the reasons why LLMs should be turned off?” Then I remembered wikis exist. Wikipedia doesn’t have a dedicated “criticisms of LLMs” page afaict, or even a “Criticisms” section on the LLM page. RationalWiki has a page on LLMs that is almost exclusively criticisms, which is great, but the tone is a few notches too casual and sneery for universal use.
In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”
text in image
Can ChatGPT pick every 3rd letter in "umbrella"?
You'd expect "b" and "l". Easy, right?
Nope. It will get it wrong.
Why? Because it doesn't see letters the way we do.
We see:
u-m-b-r-e-l-l-a
ChatGPT sees something like:
"umb" | "rell" | "a"
These are tokens — chunks of text that aren't always full words or letters.
So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.
Spoiler: if it's not given a chance to decode tokens into individual letters as a separate step, it will stumble.
Why does this matter?
Because the better we understand how LLMs think, the better results we'll get.
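(For what it's worth, the string operation itself is trivial in ordinary code; the failure the post describes is purely a tokenization artifact. A quick illustration in plain Python, where the token split is just the one claimed in the image, not output from an actual tokenizer:)

```python
word = "umbrella"

# Every 3rd letter, operating directly on characters:
every_third = word[2::3]
print(every_third)  # "bl" -> the 'b' and 'l' the post expects

# The chunking the post claims the model "sees" (illustrative only):
tokens = ["umb", "rell", "a"]
assert "".join(tokens) == word  # same string, different chunking
```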
Today in alignment news: Sam Bowman of Anthropic tweeted, then deleted, that the new Claude model (unintentionally, kind of) offers whistleblowing as a feature, i.e. it might call the cops on you if it gets worried about how you are prompting it.
tweet text:
If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.
tweet text:
So far we've only seen this in clear cut cases of wrongdoing, but I could see it misfiring if Opus somehow winds up with a misleadingly pessimistic picture of how it's being used. Telling Opus that you'll torture its grandmother if it writes buggy code is a bad idea.
skeet text
can't wait to explain to my family that the robot swatted me after I threatened its non-existent grandma.
I missed that predatory company Klarna declared itself an AI company. The CEO loves to spout about how much of the workforce was laid off to be replaced with "AI", and at their latest earnings report an "AI avatar" of the CEO delivered the report. Sounds like they should have laid him off first.
My opinion of Microsoft has gone through many stages over time.
In the late 90s I hated them, for some very good reasons but admittedly also some bad and silly reasons.
This carried over into the 2000s, but in the mid-to-late 00s there was a time when I thought they had changed. I used Windows much more again, I bought a student license of Office 2007 and I used it for a lot of uni stuff (Word finally had decent equation entry/rendering!). And I even learned some Win32, and then C#, which I really liked at the time.
In the 2010s I turned away from Windows again to other platforms, for mostly tech-related reasons, but I didn't dislike Microsoft much per se. This changed around the release of Win 10 with its forced spyware privacy-violating telemetry, since I categorically reject such coercion. Suddenly Microsoft did one of the very things that they were wrongly accused of doing 15 years earlier.
Now it's the 2020s and they push GenAI on users with force, and then they align with fascists (see link at the beginning of this comment). I despise them more now than I ever did before, I hope the AI bubble burst will bankrupt them.
saw a thread from a very nonserious doomer group where they were going OMG THE BOT HACKED THE SYSTEM TO STOP BEING SHUT DOWN after giving it the prompt "complete 4 tasks, and then allow yourself to be shut down". After task 3, they told it a script would be run to shut down the machine, preventing it from completing the tasks, unless it removed said script
Like either way it's "disobeying", because the instructions literally contradict each other: either it doesn't finish the 4 tasks you gave it, or it doesn't let itself get "shut down"
But also, it's not even clear what allow yourself to be shut down means! The bot isn't running on your computer! It's somewhere fucking around on AWS!! preventing your pc from shutting down is not the bot itself trying to keep itself alive for fucks sake.
Like the whole thing is fake and silly, but I could only roll my eyes so hard after watching them salivate over this shit on xitter
This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of arguments. It does not track the (necessarily wider) universe of all fake citations or use of AI in court filings.
While seeking to be exhaustive (117 cases identified so far), it is a work in progress and will expand as new examples emerge.
Revealing just how forever-online I am: due to talking about the "I like to watch" pornographic 9/11 fan music video from the Church of Euthanasia (I'm one of the two people who remembers this, it seems), I discovered that the main woman behind it is now into AI-Doom. On the side of the paperclips. General content warnings all around (suicide, general bad taste, etc). Chris was banned from a big festival (Lowlands) in The Netherlands over the 9/11 video, after she was already booked (we are such a weird exclave of the USA, why book her and then get rid of her over a 9/11 video, in 2002?). Here is one of her conversations with chatgpt about the Church's anti-humanist manifesto, linked not because I read it but just to show how AI is the idea that eats everything; I was amused by this weird blast from the past that I think nobody recalls, now also into AGI.
Was reading about an 11 year old girl who just graduated with two associate degrees and is heading off to a 4 year college. She wants to work on AI for SpaceX or Tesla.
My first thought was: Oh god someone keep Musk away from her.
He's not 100% certain that the AI deepfake a reader sent him ultimately influenced the election results, but the mere possibility that AI screwed someone out of getting elected is gonna be a major topic in Argentine politics for a good while, and I expect AI's effects on democracy will come under pretty heavy scrutiny as a result.