I mean, you can dissipate all the heat of a 2kW electric space heater with a single fan. 500W isn't that much compared to GPU farms with a bunch of GPUs in a single rack slot.
Considering that any server with one of these is likely to have two of them, that's quite a lot of heat to dissipate.
A CPU also generally needs to be kept cooler than a space heater.
My home server has a row of surprisingly powerful and small fans, and that's just for a few-years-old dual Xeon system.
I have never personally been (knowingly) near a GPU farm, but I have been behind a crazy ass router (Cisco ASR 9000 something) that's like 10+ U. The airflow behind that router is crazy.
No, but considering how rarely I use the full power of my CPU, I doubt it would make a big difference, which means I could probably halve the TDP of my CPU. Still, "about the same efficiency as my throttled desktop CPU" is pretty alright for a server.
Like many people are saying, that's 2.6W per core, which is actually very good.
My laptop is running an Intel CPU with a TDP of 45W, which doesn't sound as bad until you realize it only has 6 cores, meaning 7.5W per core. Multiply that by the number of cores this CPU has and you get 1440W if it were as inefficient as my Intel chip. And that's a very conservative estimate, since it assumes my CPU is as efficient as Intel claims the Core i7-9750H is; it might actually be much worse, considering how hot this laptop gets, especially when gaming (though I don't game on this laptop anymore for that reason).
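Spelling that arithmetic out as a quick Python sketch (the 192-core count is my assumption, backed out from the 500W TDP and the 2.6W/core figure people are quoting):

    # Back-of-envelope check of the per-core numbers above.
    # Assumed: ~192 cores at 500 W TDP (what 2.6 W/core implies),
    # i7-9750H at its rated 45 W over 6 cores.
    EPYC_TDP_W, EPYC_CORES = 500, 192
    LAPTOP_TDP_W, LAPTOP_CORES = 45, 6

    epyc_w_per_core = EPYC_TDP_W / EPYC_CORES        # ~2.6 W/core
    laptop_w_per_core = LAPTOP_TDP_W / LAPTOP_CORES  # 7.5 W/core

    # What the server chip would draw at the laptop chip's per-core efficiency:
    scaled_draw_w = laptop_w_per_core * EPYC_CORES   # 1440 W

    print(f"{epyc_w_per_core:.1f} W/core vs {laptop_w_per_core:.1f} W/core -> {scaled_draw_w:.0f} W")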
Bottom line, this is a very efficient CPU, but it's also an insanely overpowered CPU that most people will not use or need. Only datacenters and extremely dedicated power users need a CPU anywhere near this powerful.
It's a nice easy unit to compare against because all of the ones I've seen draw basically exactly 1000 watts
It's also less than double what my desktop (11700K, RTX 3060) draws, and those aren't particularly demanding components, and I only get 16 threads. (That's basically exactly 6x more power draw per core, although the cores themselves perform differently, of course.)
It is slightly silly to have that many cores tho. I guess the main reason not to just use a GPU would be that PCIe doesn't have enough bandwidth, or that you need a ton of RAM? For a pure compute application, I don't think there are many cases where a GPU isn't the obvious choice when you're going to have almost 400 threads anyway. An A100 has half the TDP and there's no way the EPYC chip can even come close in performance (even if you assume the CPU can use AVX-512 while the GPU can't use its tensor cores, the CPU having about a third of the memory bandwidth isn't exactly encouraging about the level of peak compute they're expecting).
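Rough numbers behind that, as a sketch; these are approximate public spec figures I'm assuming (A100 40GB PCIe at ~250W and ~1.5TB/s of HBM, 12-channel DDR5-4800 on the EPYC), not anything measured:

    # Ballpark memory-bandwidth-per-watt comparison; all figures are assumed
    # spec-sheet approximations, not measurements.
    a100_tdp_w, a100_bw_gbs = 250, 1555   # A100 40 GB PCIe, roughly
    epyc_tdp_w = 500
    epyc_bw_gbs = 12 * 4.8 * 8            # 12 ch x 4800 MT/s x 8 B ~ 461 GB/s

    print(f"EPYC / A100 bandwidth: {epyc_bw_gbs / a100_bw_gbs:.2f}")  # ~0.3, i.e. about a third
    print(f"A100: {a100_bw_gbs / a100_tdp_w:.1f} GB/s per watt")
    print(f"EPYC: {epyc_bw_gbs / epyc_tdp_w:.1f} GB/s per watt")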
I stopped reading your comment when you suggested calling 1000W "a toaster". Most 'murican thing ever. Why don't you also measure the size of the CPU in buses?
That's an EPYC. It's a datacenter CPU, and it's priced accordingly. Nobody uses these at home outside of hardcore homelab enthusiasts with actual rack setups.
Like others have noted, it's 2-3 watts per core, which is pretty incredible given that it encompasses all the extra things the CPU does/supports and the inherent cost of it not being one big ol' chip.
Specifically, they support substantially more memory, with 12 channels (compared to the typical 2), plus 128(+) lanes of PCIe 5 connectivity!
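For a rough sense of what 12 channels buys (assuming DDR5-4800; the actual supported transfer rate depends on the SKU and how the DIMMs are populated):

    # Peak theoretical DRAM bandwidth = channels x transfer rate x bus width.
    # DDR5-4800 and a 64-bit (8-byte) channel are assumptions, not quoted specs.
    def peak_bw_gbs(channels, mt_per_s=4800, bus_bytes=8):
        return channels * mt_per_s * bus_bytes / 1000

    print(peak_bw_gbs(2))    # typical desktop, 2 channels: ~76.8 GB/s
    print(peak_bw_gbs(12))   # 12-channel server socket:    ~460.8 GB/s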
Because these systems are so dense, data centres can condense N servers into just a couple. And now, you only need 1 set of ancillary components like network cards or fans.
So, they're significantly more efficient from a few perspectives.