Intel’s Q1 2025 earnings press release talked up its new AI-enabled chips, but they aren’t selling. [Intel] In the earnings call, CFO Dave Zinsner mentioned they had “capacity constraints in In…
One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn’t do what was sold in the ads, almost a full year later.
Edit: also, things like email summaries don’t work, the email categories are awful, notification summaries are straight-up unhinged, and I don’t think anyone asked for Image Playground.
I’m making a generous assumption by suggesting that “ready” is even possible
To be honest, it feels more and more like this simply isn’t possible, especially for the chatbots. Underneath them are LLMs, built by training neural networks, and for the whole thing to work there has to be some emergent magic where actual sense spontaneously appears. Because anything that strings words into fluent sentences charms unsuspecting people horribly efficiently, it’s easy to be fooled into believing that’s happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear that it can’t reliably respect any requirement or directive. It just regurgitates word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn’t going to be possible to get where they want to go.
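This isn’t from the thread, but the claim matches the shape of published neural scaling laws: loss falls only as a power law in dataset size, so equal-sized steps of improvement cost multiplicatively more data. A minimal sketch in Python, with constants roughly in the ballpark of Kaplan et al.’s 2020 scaling-laws paper (treat them as illustrative, not authoritative):

```python
# Sketch of a Kaplan-style data scaling law: loss falls as a power law
# in dataset size, L(D) = (D_c / D) ** alpha. The constants below are
# roughly the fitted values from Kaplan et al. (2020), used here purely
# for illustration.

D_C = 5.4e13   # critical dataset size in tokens (illustrative)
ALPHA = 0.095  # data scaling exponent (illustrative)

def loss(tokens: float) -> float:
    """Loss predicted by the power law for a given dataset size."""
    return (D_C / tokens) ** ALPHA

def tokens_for_loss(target: float) -> float:
    """Invert the power law: dataset size needed to hit a target loss."""
    return D_C / target ** (1 / ALPHA)

# Equal-sized steps down in loss cost multiplicatively more data:
for target in (3.0, 2.5, 2.0, 1.5):
    print(f"loss {target:.1f} needs ~{tokens_for_loss(target):.2e} tokens")
```

Under that curve, getting the loss from 3.0 down to 1.5 takes on the order of a thousand times more tokens than reaching 3.0 did, which is the intuition behind “linear gains need exponential data.”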
Do you have any articles on this? I’ve heard this claim quite a few times, but I’m wondering how they put numbers on the capabilities of those models.
(I’m making a generous assumption by suggesting that “ready” is even possible)
It was ready for some specific purposes, but it’s being jammed into everything. The problem is they’re marketing it as AGI while it’s still in the “random fun, but not expected to be accurate” phase.
Nothing in the foreseeable future is going to live up to the current AI marketing. The desired complexity isn’t going to exist in silicon at a reasonable scale.