Intel CEO sees 'less need for discrete graphics' and now we're really worried about its upcoming Battlemage gaming GPU and the rest of Intel's graphics roadmap
Honestly, graphics have been good enough for a long time now.
I'm currently re-playing Skyrim on an ultra-light convertible laptop attached to an external monitor.
And it looks beautiful.
After several days, I noticed it was still on power-saving + silent cooling mode.
But Intel still needs graphics. They should build Apple M Pro/Max-style integrated GPUs with wide memory buses, like AMD is already planning to do, instead of topping out at bottom-end configs.
They could turn around and sell them as GPU-accelerated servers too, like the market is begging for right now.
Intel's CEO says 'large' integrated GPUs are the way forward.
You didn't even have to click on the article; it was right there in the preview text. And that's exactly what Intel has been doing with its Core Ultra 100 and 200 series CPUs (that's what they're called, right?). The Arc 140V in Lunar Lake, while not cleanly beating AMD's Radeon 890M, is putting up a pretty good fight. And that's in the severely power-constrained Lunar Lake chips, with Arc's horribly unoptimized silicon and drivers. https://www.youtube.com/watch?v=eg74aUQGdSg
If Intel can figure out how to slim down the Battlemage silicon to make it more efficient (in both area and power), then they could offer some actual competition for AMD.
Strix Halo (256-bit LPDDR5X, 40 AMD CUs) is where I'd start calling integrated graphics "large." Intel is going to remain a laughing stock in the gaming world without bigger designs than their little 128-bit IGPs.
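To put rough numbers on why bus width matters, here's a back-of-the-envelope peak-bandwidth comparison. The LPDDR5X-8000 transfer rate is an assumption chosen to isolate the bus-width effect, not a confirmed spec for any particular part:

```python
# Back-of-the-envelope peak memory bandwidth: (bus width in bytes) x (MT/s).
# LPDDR5X-8000 is an assumed speed, used for both configs for comparison.

def peak_bandwidth_gbs(bus_width_bits: int, mts: int) -> float:
    return bus_width_bits / 8 * mts / 1000  # GB/s

for name, width in [("128-bit IGP", 128), ("256-bit Strix Halo-class", 256)]:
    print(f"{name}: ~{peak_bandwidth_gbs(width, 8000):.0f} GB/s")
# 128-bit: ~128 GB/s vs 256-bit: ~256 GB/s -- twice the data to feed the GPU
```

At the same memory speed, the wide-bus part simply has double the bandwidth to keep a big GPU fed, which is exactly what the 128-bit designs can't do.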
Intel sees the AI market as the way forward. NVIDIA's AI business now eclipses its graphics business by an order of magnitude, and Intel wants in. They know they dominate the integrated graphics market by volume, and can leverage that position to drive growth with things like edge processing for Copilot.
The LocalLLaMA crowd is supremely unimpressed with Intel, not just because of software issues but because Intel simply doesn't have beefy enough designs, the way Apple does now and AMD soon will. Even the latest chips aren't fast enough to run a "smart" model, and the A770 doesn't have enough VRAM to be worth the trouble.
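For a sense of the scale problem, here's a crude weights-only VRAM estimate for common model sizes. The bytes-per-parameter figures are approximations for typical quantization formats, and real usage adds KV cache and runtime overhead on top:

```python
# Crude weights-only VRAM estimate; KV cache and overhead come on top.
# Bytes-per-parameter values are approximate for common quant formats.

BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.56}  # q4 ~ 4.5 bits/weight

def weights_gb(params_billions: float, quant: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[quant] / 2**30

for size in (8, 14, 32, 70):
    line = ", ".join(f"{q}: {weights_gb(size, q):.1f} GB" for q in BYTES_PER_PARAM)
    print(f"{size}B -> {line}")
# A 16 GB card fits an 8B model easily and a 14B at q4, but the 32B+
# models people actually call "smart" blow past 16 GB even quantized.
```

That's the core of the complaint: a 16 GB A770 caps you at models well below what the local-LLM crowd considers worthwhile.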
They made some good contributions to runtimes, but seeing how they fired a bunch of engineers, I'm not sure that will continue.
People running local LLMs aren't the target. People who use things like ChatGPT and Copilot on low-power PCs, and who may benefit from edge inference acceleration, are. Every major LLM provider dreams of offloading compute onto end users' hardware; it saves them tons of money.
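A minimal sketch of what that offloading pattern could look like on the client side: try a local accelerator-backed runtime first, fall back to the cloud. Every URL, endpoint, and payload shape below is a made-up assumption for illustration, not any real product's API:

```python
# Hypothetical hybrid inference client: prefer a local NPU/IGP-backed
# runtime, fall back to the provider's cloud API. All URLs and payload
# shapes here are illustrative assumptions, not a real service's API.
import requests

LOCAL_URL = "http://localhost:8080/v1/completions"    # assumed local runtime
CLOUD_URL = "https://api.example.com/v1/completions"  # assumed cloud endpoint

def complete(prompt: str, timeout_s: float = 2.0) -> str:
    payload = {"prompt": prompt, "max_tokens": 256}
    try:
        # Local first: zero marginal cost to the provider, lower latency.
        r = requests.post(LOCAL_URL, json=payload, timeout=timeout_s)
        r.raise_for_status()
        return r.json()["text"]
    except requests.RequestException:
        # Cloud fallback when no local accelerator/runtime is available.
        r = requests.post(CLOUD_URL, json=payload, timeout=30)
        r.raise_for_status()
        return r.json()["text"]

print(complete("Summarize this document in one sentence."))
```

Every prompt the local path handles is inference the provider doesn't pay for, which is the whole economic pitch for edge acceleration.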