Intel’s Q1 2025 earnings press release talked up their new AI-enabled chips. But these are not selling. [Intel] In the earnings call, CFO Dave Zinsner mentioned they had “capacity constraints in In…
I thought that was pretty obvious by now? Based on how hard the companies are trying to force-feed people their latest version through constant notifications about assistants, assisted search, etc.
It's one of the greatest flaws of relying on social media for market research: tech-bros are so loud about things like AI, NFTs, etc. that they trick companies into thinking far more people are interested than actually are.
Now they've invested tons of money and people aren't biting, so they're constantly nagging people to engage so they can justify their expenditure.
Intel, AMD, and Microsoft are all going down a dead-end road called x86_64, especially on portable devices.
Apple and Google took a turn ages ago, towards an alternative called aarch64. Originally just for phones, but now for everything.
VR headsets, Raspberry Pis, IoT devices, etc. also tend to run 32-bit ARM or aarch64.
Microsoft has been trying to follow suit, but it hasn’t gone well so far. Windows on ARM (the aarch64 version of Windows) is supremely unpopular, for a lot of (mostly good) reasons.
So people avoid the devices or ditch them because hardly any of their apps run natively. But Microsoft basically has no choice but to keep pushing.
So the end result is that Microsoft is subsidizing tons of excellent hardware that will never be used for Windows, cuz the software just isn’t ready yet.
But Linux is!
Edit:
Funny thing is, Arm (the company behind aarch64) keeps shooting itself in the foot, to the point where lots of companies are hedging their bets with a dark horse called RISC-V. It never had a snowball’s chance in Hell before, but now it could actually win.
And if Microsoft still hasn’t built a new home on aarch64 by the time that happens, they may accidentally be in the best position to capitalize on it, since they’d have no entrenched ARM ecosystem to abandon.
ARM architecture, 64-bit. It’s the style of CPU in your phone and in MacBooks, known for being energy efficient, and its performance is getting better too.
The big downside, though, is that loads of old Windows apps aren’t going to run on these as effortlessly as they would on conventional x86-64 CPUs from Intel and AMD; they have to go through an emulation/translation layer, which costs performance.
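If you’re curious which kind of CPU your own machine has, here’s a quick Python check. The exact strings vary by OS, so treat this as a rough sketch rather than an exhaustive list:

```python
import platform

# Check which CPU architecture this Python process is running on.
# Typical values: "x86_64" or "AMD64" on Intel/AMD machines,
# "aarch64" or "arm64" on ARM machines (the exact string varies by OS).
arch = platform.machine().lower()
if arch in ("aarch64", "arm64"):
    print("ARM 64-bit (aarch64) CPU")
elif arch in ("x86_64", "amd64"):
    print("Conventional x86-64 CPU")
else:
    print(f"Something else: {arch}")
```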
This isn’t really a good thing. It means that consumers prefer to use cloud AI (which is a privacy nightmare) over running a local LLM, which is more privacy-preserving.
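For anyone wondering what “running a local LLM” even looks like in practice, here’s a minimal Python sketch. It assumes you have an Ollama server running on its default local port with a model already pulled; the endpoint and payload follow Ollama’s documented /api/generate format, but adjust the model name and URL for whatever you run locally:

```python
import json
import urllib.request

# Query a *local* LLM instead of a cloud service. Nothing here leaves
# your machine: the request goes to localhost, not a third-party API.
payload = {
    "model": "llama3",  # whichever model you've pulled locally
    "prompt": "Why might someone prefer a local LLM over a cloud one?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```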
That's a really weird take. Like really weird, because it presupposes that everybody wants to use degenerative AI at all.
Which is emphatically not the case. There are even studies showing that most people play with degenerative AI for a while, all impressed by it, before trailing off as it turns out that it kind of sucks at everything they try to use it for.
Degenerative AI is the crypto/web3 of the current set of techbrodude nitwits. A solution in search of a problem. And it will go the way of crypto/web3.
OK, so I ran this past a techie colleague. Here's how he summarized this for me.
@jagged_circle@feddit.nl is drawing a superficial parallel between CPU speculation and LLM/AI unpredictability without acknowledging the crucial differences in determinism, transparency, and user experience.
He’s relying on the likelihood that others in the conversation may not know the technical details of "CPU speculation", allowing him to sound authoritative and dismissive (“this is old news, you just don’t get it”).
By invoking an obscure technical concept and presenting it as a “gotcha,” he positions himself as the more knowledgeable, sophisticated participant, implicitly belittling others’ concerns as naïve or uninformed.
He is, in short, using bad-faith argumentation. He’s not engaging with the actual objection (AI unpredictability and user control), but is instead derailing the conversation with a misleading-to-flatly-invalid analogy that serves more to showcase his own purported expertise than to clarify or resolve the issue.
The techniques he's using are:
Jargon as Gatekeeping:
Using technical jargon or niche knowledge to shut down criticism or skepticism, rather than to inform or educate.
False Equivalence:
Pretending two things are the same because they share a superficial trait, when their real-world implications and mechanics are fundamentally different.
Intellectual One-upmanship:
The goal isn’t to foster understanding, but to “win” the exchange and reinforce a sense of superiority.
To put his bad objection in plain English, he's basically saying "You’re complaining about computers guessing? Ha! They’ve always done that, you just don’t know enough to appreciate it." But in reality, he’s glossing over the fact that:
CPU speculation is deterministic, traceable, and (usually) invisible to the user.
LLM/AI “guessing” is probabilistic, opaque, and often the source of user frustration.
The analogy is invalid, and the rhetorical move is more about ego than substance; the toy sketch below makes the difference concrete.
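A purely illustrative Python sketch of the contrast (a real CPU pipeline and a real LLM are obviously far more complicated than this): the "CPU-style" path always returns the same answer for the same inputs, while the "LLM-style" path samples from a probability distribution and can differ from run to run:

```python
import random

def cpu_style_result(a, b):
    # A CPU may speculate internally (branch prediction, out-of-order
    # execution), but the architecturally visible result is always the
    # same for the same inputs: speculation that guesses wrong is
    # rolled back before you ever see it.
    return a + b

def llm_style_result(distribution):
    # An LLM samples its next token from a probability distribution,
    # so the same prompt can produce different outputs on different runs.
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([cpu_style_result(2, 3) for _ in range(3)])  # [5, 5, 5], every run
dist = {"yes": 0.6, "no": 0.3, "maybe": 0.1}
print([llm_style_result(dist) for _ in range(3)])  # varies run to run
```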
TL;DR: @jagged_circle@feddit.nl is using his technical knowledge not to clarify, but to obfuscate and assert dominance in the conversation without regard to truth, a pretty much straightforward techbrodude move.