Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
...AI models like OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 needed to be prompted by researchers to attempt such tricks...
Literally couldn't make it through the first paragraph without hitting this disclaimer.
In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’ - not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign.
So by "hacked the system to solve the problem in a new way" they mean "edited a text file they had been told about."
OpenAI’s o1-preview tried to cheat 37% of the time; while DeepSeek R1 tried to cheat 11% of the time—making them the only two models tested that attempted to hack without the researchers’ first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibaba’s QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.
Oh, my mistake. "Badly edited a text file they had been told about."
Meanwhile, a quick search points to a Medium post about the current state of ChatGPT's chess-playing abilities as of Oct 2024. There's been some impressive progress with this method. However, there's no certainty that it's actually what was used for the Palisade testing and the editing of state data makes me highly doubt it.
Here, I was able to have a game of 83 moves without any illegal moves. Note that it’s still possible for the LLM to make an illegal move, in which case the game stops before the end.
The author promises a follow-up about reducing the rate of illegal moves hasn't yet been published. They have not, that I could find, talked at all about how consistent the 80+ legal move chain was or when it was more often breaking down, but previous versions started struggling once they were out of a well-established opening or if the opponent did something outside of a normal pattern (because then you're no longer able to crib the answer from training data as effectively).
This AI bubble's done a pretty good job of destroying the "apolitical" image that tech's done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it's provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV's relationship with Trump did wonders here).
By the time this decade ends, I anticipate tech's public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.
As for AI itself, I expect it's image will go into the shitter as well - assuming the bubble burst doesn't destroy AI as a concept like I anticipate, it'll probably be viewed as a tech with no ethical use, as a tech built first and foremost to enable/perpetrate atrocities to its wielder's content.
haha, it's starting to happen: even fucking fortune is running a piece that throwing big piles of money on ever-larger training has done exactly fuckall to make this nonsense go anywhere
Not really a sneer, nor that related to techbro stuff directly, but I noticed that the profile of Chris Kluwe (who got himself arrested protesting against MAGA) has both warcraft in his profile name and prob paints miniatures looking at his avatar. Another stab in the nerd vs jock theory.
What infuriates me the most, for some reason, is how nobody seems to care that the robots leave the fridge door open for so long. I guess it's some form of solace that, even with the resources and tech to live on without us the billionaires still don't understand ecosystems or ecology. Waste energy training a machine to do the same thing a human can do but slower and more wastefully, just so you can order the machine around without worrying about it's feelings... I call this some form of solace as it means, even if they do away with us plebs, climate change will get'em as well - and whatever remaining life on Earth will be able to take a breather for the first time in centuries.
Thus spoketh the Yud: "The weird part is that DOGE is happening 0.5-2 years before the point where you actually could get an AGI cluster to go in and judge every molecule of government. Out of all the American generations, why is this happening now, that bare bit too early?"
Yud, you sweet naive smol uwu babyesian boi, how gullible do you have to be to believe that a) tminus 6 months to AGI kek (do people track these dog shit predictions?) b) the purpose of DOGE is just accountability and definitely not the weaponized manifestation of techno oligarchy ripping apart our society for the copper wiring in the walls?
What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?
As opposed to the spineless collaborators who run it now?
We should make acquiring ID documents free and incredibly easy and straightforward and then impose voter ID laws, paper ballots and ballot security improvements along with an expansion of polling places so everyone participates but we lay the 'was it a fair election' qs to rest.
Presuming that Republicans ever asked "was it a fair election?!" in good faith, like a true jabroni.
Quality sneers in these (one, two) response posts. The original posts that these are critiquing are very silly and not worth your time, but the criticism here addresses many of the typical AI hype talking points.
Since quantum computers are far outside my expertise, I didn't realize how far-fetched it currently is to factor large numbers with quantum computers. I already knew it's not near-future stuff for practical attacks on e.g. real-world RSA keys, but I didn't know it's still that theoretical. (Although of course I lack the knowledge to assess whether that presentation is correct in its claims.)
But also, while reading it, I kept thinking how many of the broader points it makes also apply to the AI hype... (for example, the unfounded belief that game-changing breakthroughs will happen soon).
ran into this earlier (via techmeme, I think?), and I just want to vent
“The biggest challenge the industry is facing is actually talent shortage. There is a gap. There is an aging workforce, where all of the experts are going to retire in the next five or six years. At the same time, the next generation is not coming in, because no one wants to work in manufacturing.”
"whole industries have fucked up on actually training people for a run going on decades, but no the magic sparkles will solve the problem!!!11~"
But when these new people do enter the space, he added, they will know less than the generation that came before, because they will be more interchangeable and responsible for more (due to there being fewer of them).
I forget where I read/saw it, but sometime in the last year I encountered someone talking about "the collapse of ..." wrt things like "travel agent", which is a thing that's mostly disappeared (on account of various kinds of services enabling previously-impossible things, e.g. direct flights search, etc etc) but not been fully replaced. so now instead of popping a travel agent a loose set of plans and wants then getting back options, everyone just has to carry that burden themselves, badly
and that last paragraph reminds me of exactly that nonsense. and the weird "oh don't worry, skilled repair engineers can readily multiclass" collapse equivalence really, really, really grates
sometimes I think these motherfuckers should be made to use only machines maintained under their bullshit processes, etc. after a very small handful of years they'll come around. but as it stands now it'll probably be a very "for me not for thee" setup
Did notice a passage in the annoucement which caught my eye:
Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.
I'm not sure if its just me, but it strikes me as telling about how AI's changed the cultural zeitgeist that Merchant's happily presenting automation as a bad thing without getting backlash (at least in this context).
this article came to mind for something I was looking into, and then on rereading it I just stumbled across this again:
Late one afternoon, as they looked out the window, two airplanes flew past from opposite directions, leaving contrails that crossed in the sky like a giant X right above a set of mountain peaks. Punchy with excitement, they mused about what this might mean, before remembering that Google was headquartered in a place called Mountain View. “Does that mean we should join Google?” Hinton asked. “Or does it mean we shouldn’t?”
Amazon Prime pulling some AI bullshit with, considering the bank robbery in the movie was to pay for surgery for a trans woman, a hint of transphobia (or more likely, not a hint, just the full reason).