Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Having plotted many graphs on "war" and "genocide" in my two books on violence, I closely tracked the definitions, and it's utterly clear that the war in Gaza is a war (e.g., the Uppsala Conflict Data Program, the gold standard, classifies the Gaza conflict as an "internal armed conflict," i.e., war, not "one-sided violence," i.e., genocide).
You guys! It's totes not genocide if it happens during a war!!
To be fair, you have to have a really high IQ to understand why my ouija board writing "A" "S" "S" is not an existential risk. Imo, this shit about AI escaping just doesn't have the same impact on me after watching Claude's reasoning model fail to escape from Mt Moon for 60 hours.
Ezra Klein is the biggest mark on earth. His newest podcast description starts with:
Artificial general intelligence — an A.I. system that can beat humans at almost any cognitive task — is arriving in just a couple of years. That’s what people tell me — people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump’s term, they tell me. We’re not ready.
Oh, that's what the researchers tell you? Cool cool, no need to hedge any further than that, they're experts after all.
The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool — such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.
I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship this way without getting any backlash.
wowee where to even start here? this is basically just another fucking neoreactionary screed. as usual, some of the issues identified in the piece are legitimate concerns:
Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don’t produce anything, but we have companies with high revenues and we can raise money based on those revenues...
... nothing I saw in Silicon Valley made any sense. I’m not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn’t. It’s self licking ice cream cone scams, and any growth or value is incidental to that.
yet, when it comes to engaging with these issues, the analysis presented is completely detached from reality and devoid of any evidence of more than a dozen seconds of thought. his vision for the future of America is not one that
kicks the can further down the road of poverty, basically embraces socialism, is stagnant, is stale, is a museum
but one that instead
attempt[s] to maintain an empire.
how, you may ask?
An empire has to compete on its merits. There’s two simple steps to restore american greatness:
Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China?
Back the dollar by gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.
sadly, Hotz isn't exactly optimistic that the great american empire will be restored, for one simple reason:
Keeping up a personal schtick of mine, here's a random prediction:
If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, they will almost certainly gain it at the expense of STEM's public image.
Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami have likely produced an image of programmers/software engineers as inherently incapable of making or understanding art (given AI slop's soulless nature and inhumanly poor quality), if not outright hostile to art and artists (given gen-AI's role in killing artists' jobs and livelihoods).
what's that sound? is it the sound of a previous post coming to pass? naaaah, I'm sure it can't be that. discord's a Bro™️, and discord super totes Won't Fuck The Users®️, I'm sure I'll shortly be told by some vapid fencesitter that this will all be Perfectly Okay!
Fellas, 2023 called. Dan (and Eric Schmidt, wtf, the Sinophobia has this man down bad) has gifted us with a new paper, and let me assure you, bombing the data centers is very much back on the table.
"Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they'd potentially threaten cyberattacks to deter its creation.
@ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵
Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage.
This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.
Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI's problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.
Competitiveness: China may invade Taiwan this decade. Taiwan produces the West's cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.
Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one's interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.
"Doomers" think catastrophe is a foregone conclusion. "Ostriches" bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense.
Instead, "risk-conscious" actions affect whether we will have bad or good outcomes."
Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size, lest a big LLM spawn superintelligence (thresholds we have since blown well past, and somehow we are not paper-clip soup yet). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-like scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it's not like you can say, "oh, they're about to run superintelligence.exe, sabotage the training run"). MAD "works" because satellites make it obvious when the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense.

Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we've seen. The code to make these models is a few hundred lines! There is no moat!

Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky's personal AI safety employee, so I assume this will soon be the official policy of the US.
Today’s magic economy-ending words are “data centre asset-backed securities”:
Wall Street is once again creating and selling securities backed by everything—the more creative the better... Data-center bonds are backed by lease payments from companies that rent out computing capacity
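For anyone wondering what "backed by lease payments" actually cashes out to, here's a toy sketch (my own made-up numbers, nothing from the article): the operator pledges tenant lease income to bondholders, and the whole structure only holds together as long as the AI tenants keep renting.

```python
# Toy sketch of a data-center asset-backed security (illustrative numbers only).
# The bond is "backed" by tenant lease payments; if the AI tenants walk, so does the coverage.

annual_lease_income = 120_000_000   # hypothetical: tenants renting compute capacity
annual_debt_service = 100_000_000   # hypothetical: interest + principal owed to bondholders

# Debt-service coverage ratio: lease income / debt payments.
# Lenders generally want this comfortably above 1.0x.
dscr = annual_lease_income / annual_debt_service
print(f"DSCR: {dscr:.2f}x")  # 1.20x -- fine, until a tenant churns

# If one big AI tenant (say, 30% of income) doesn't renew:
dscr_after_churn = (annual_lease_income * 0.7) / annual_debt_service
print(f"DSCR after churn: {dscr_after_churn:.2f}x")  # 0.84x -- the bond is now underwater
```

Same shape as every "backed by everything" product: the paper is only as good as the cash flows behind it, and these particular cash flows are AI-bubble lease renewals.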
New thread from Ed Zitron, focusing on the general trashfire that is CoreWeave. Jumping straight to the money-shot, he noted how the company is losing money selling shovels in the gold rush:
You want my off-the-cuff prediction, CoreWeave will probably be treated as the Lehman Brothers of the 2020s, an unofficial mascot of everything wrong with Wall Street (if not the world) during the AI bubble.
Strongly recommended reading overall, and strongly recommended you check out Techdirt - they've been doing some pretty damn good reporting on the current shitshow we're living through.
So many projects and small websites I’m aware of are being overtaxed by shitty LLM scrapers these days, it feels like an intentional attack. I guess the idea is AI can’t fail, it can only be failed; and so its profiteers must sabotage anything that indicates it’s not beneficial/necessary.
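(For anyone running one of these small sites: a minimal sketch of the kind of triage people are resorting to — a tiny WSGI app that 429s the scrapers polite enough to identify themselves. The bot names are the commonly reported ones, but the list is illustrative; the aggressive scrapers spoof their user agents, so this is a band-aid at best.)

```python
# Minimal sketch: turn away self-identifying LLM scrapers at the door.
# Only catches the polite bots; spoofed user agents sail right through.
from wsgiref.simple_server import make_server

BLOCKED_AGENTS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")  # illustrative list

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    if any(bot in ua for bot in BLOCKED_AGENTS):
        # Tell the scraper to back off instead of serving it content.
        start_response("429 Too Many Requests", [("Content-Type", "text/plain")])
        return [b"Scraper traffic throttled.\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, human.\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```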
This just hit my inbox (and spurred me to unsubscribe from future BuiltIn slop) and man, so tired of this sort of mindless drivel. Like companies still have trouble with basic application and management processes, but magic robot will fix? Fucking hell.
got a question (brought on by this). anyone here know if zitron's talked about his history of how he got to where he is atm wrt tech companies?
there's something that's often bothered me about some of his commentary. an example I could point to: some of the things that he comments on and "doesn't seem to get" (and has stated so) are... not quite mysteries of the universe, just some specific dysfunctions in the industry. but those are things one could understand by, y'know, asking around a bit. (so in this example, I dunno if that's on him not engaging far/deeply enough in research, or just me being too-fucking-aware of broken industry bullshit. hard to get a read on it)

but things like what's highlighted in the thread do leave open the possibility of other nonsense too
New piece from Brian Merchant, focusing on Musk's double-tapping of 18F. In lieu of going deep into the article, here's my personal sidenote:
I've touched on this before, but I fully expect the coming years will deal a massive blow to tech's public image, with techies coming to be viewed as "incompetent fools at best and unrepentant fascists at worst" - and with the wanton carnage DOGE is causing (and indirectly crediting to AI), I expect Musk's governmental antics will deal plenty of damage on their own.
18F's demise in particular will probably deal a blow of its own - 18F was "a diverse team staffed by people of color and LGBTQ workers, and publicly pushed for humane and inclusive policies", as Merchant put it, and its destruction will likely be seen as another sign of tech revealing its nature as a Nazi bar.
(this is compounded by how some segment of heavy AI boosters/users - former cryptobros, but not only them - were already immersed in this particular bubble)