Is there any GPU that stands out to you guys as a good value, or do you believe that everybody should skip them?
I'm liking the 5070 Ti with 16GB on a 256-bit bus and 896 GB/s of memory bandwidth for $750 USD. The 5080 at $1,000 USD has 16GB on a 256-bit bus with 960 GB/s. I don't see the value in the extra $250.
They both have 2x 9th Gen NVENC encoders. The 5080 has 2x 6th Gen decoders, the 5070 Ti has 1x 6th Gen decoder. I can use that for OBS recording while watching other videos.
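For what it's worth, those bandwidth numbers fall straight out of bus width times per-pin data rate, so the only real difference on that front is slightly faster GDDR7 on the 5080. A quick sanity check (the 28 and 30 Gbps rates are just inferred from the quoted figures, not something I'm reading off a spec sheet):

```python
# bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 28))  # 896.0 -> the quoted 5070 Ti figure
print(bandwidth_gbs(256, 30))  # 960.0 -> the quoted 5080 figure
```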
Assuming you're primarily interested in gaming performance, wait for reliable 3rd party non-DLSS benchmarks.
From the Nvidia presentation, the 5070 Ti looks great, but the performance uplift over the previous gen in their slides pretty much only applies to games with frame generation. Not every game will implement DLSS 4, or even DLSS at all. You may still need the better rasterisation of the 5080 depending on your monitor resolution and desired fps.
The MSRP is good, maybe too good; we just need to wait and see the actual prices and availability.
I don't care about frame generation, but it might be a decent last resort for when the GPUs are old. A bit of latency, some visual fuckery and a "playable" game is preferable to not being able to play the game at all.
The biggest advantage of DLSS 4.0 is the new transformer-model ray reconstruction, which will improve image quality, but that feature is coming to older GPUs too.
We need to wait for benchmarks. Any card can be good or bad; it just depends on its price and performance. If the 5070 is 20% faster than a 4070 Super but also costs 20% more, then it isn't really that relevant, is it? I expect we will see something like that.
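To put rough numbers on that 20%/20% scenario (the prices here are made up purely for illustration, not actual MSRPs):

```python
# Performance-per-dollar stays flat when speed and price rise by the same percentage.
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

baseline = perf_per_dollar(1.00, 600)  # hypothetical 4070 Super at a made-up $600
newer = perf_per_dollar(1.20, 720)     # 20% faster, 20% pricier
print(round(baseline, 5), round(newer, 5))  # 0.00167 0.00167 -> no real value gain
```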
On your 4th point, that's what happened with the top Turing series cards. Definitely better performance, but they cost more, so people had no interest in them. Especially given the performance of Pascal.
Honestly the presentation was very underwhelming. Improvements in raster seem fairly small and don't warrant an upgrade. DLSS still falls short on visual quality and has annoying artifacts, and I worry that the industry will use it as an excuse to release poorly optimised games. Counting DLSS frames as part of the frame count is just misleading.
NVENC is cool, but I don't use that often enough for it to be a selling point to me.
I've been enjoying the memes about the presentation though, because what the fuck was that mess.
Their own demos show things like the game running at 30ish fps, but we are hallucinating that up to 240!
Ok....great.
I will give you that that is wonderful for games that do not really depend on split-second timing / hit detection, and/or just have a pause function as part of normal gameplay.
Strategy games, 4x, city/colony builders, old school turn based RPGs... slow paced third person or first person games...
Sure, it's a genuine benefit in these kinds of games.
But anything that does involve split second timing?
Shooters? ARPGs? Fighting games?
Are these just... all going to be designed around the idea that actually your input just has a delay?
That you'll now be unable to figure out whether you missed a shot, or got shot by a guy behind a wall, because of network lag or because your own client's rendering just lied to you?
...
I am all onboard with intelligent upscaling of frames.
If you can render natively at 5 or 10 or 15% of the actual frame you see, then upscale those frames and end up with an actually higher true FPS? Great.
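Just to put numbers on what 5, 10 or 15% of the pixels means at 4K (these fractions are the ones above, not the scale factors any real DLSS mode uses):

```python
# Internal render resolution if you keep only a given fraction of a 4K frame's pixels.
target_w, target_h = 3840, 2160

for pixel_fraction in (0.05, 0.10, 0.15):
    scale = pixel_fraction ** 0.5  # linear scale is the square root of the pixel fraction
    w, h = round(target_w * scale), round(target_h * scale)
    print(f"{pixel_fraction:.0%} of the pixels -> roughly {w}x{h}")
```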
It's comforting to find other people who have a strong hate-on for frame generation. I personally have no interest in upscaling, but frame generation is a con job. It does nothing for latency, so players see more frames but the same input lag. That sounds discombobulating, or at least disjointed.
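A quick back-of-the-envelope on that, assuming input gets sampled once per rendered frame and ignoring the extra buffering interpolation usually adds:

```python
# Displayed frame rate goes up, but the cadence your inputs land on still tracks the rendered rate.
def frame_gen_numbers(rendered_fps: float, frames_shown_per_rendered: int) -> dict:
    displayed_fps = rendered_fps * frames_shown_per_rendered
    return {
        "displayed_fps": displayed_fps,
        "displayed_frame_time_ms": 1000 / displayed_fps,
        "input_cadence_ms": 1000 / rendered_fps,  # latency is still tied to this
    }

print(frame_gen_numbers(30, 4))  # 120 fps on screen, inputs still on a ~33 ms cadence
```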
I'm planning on getting the 5090 FE, provided scalpers don't ruin it (which is the most likely scenario, by the way). I have a 3080 and recently upgraded to a 4K OLED; going from 3K to 4K has dealt a serious blow to my FPS, and I'd like to play some of the newer games at high settings and high FPS, so I'm due for an upgrade.
I have no problem with DLSS in the games I've played so far. Whatever artifacts it may have "disappear" during gameplay; you are not counting pixels while you are playing. I know this is not the solution we want, but we need to be realistic here: there is a reason why CGI can't be rendered in real time. Certainly we are still far from that type of quality, yet the games we play are rendered in real time. We cannot afford the size and price of CGI workstations, so we have to rely on these "gimmicks" to make up for it.
I don't think a 6090 or any future card will be enough for 4K without some sort of DLSS or frame generation shenanigans, because by the time those cards release, graphics will have "evolved" to a point where they will be, once again, no longer enough. The eternal obsolescence cycle...
I prefer the FE because it is the only two-slot card that will be available at launch, and I don't need the extra fans and size. FE cards are normally on par with, if not better than, their AIB counterparts as long as you stick to air cooling.
For starters, there's more to GPU performance than memory speed and quantity.
"believe that everybody should skip them"
This strikes me as a bit weird. Everyone uses graphics cards for different things, everyone has different priorities, and most people who have a PC have different hardware.
I’ve got clients who edit video for work, and others who do it as a hobby. In the professional sphere, render times can have a pretty direct relationship with cashflow, so having the ‘best’ can mean the hardware pays for itself several times over.
I’ve got clients who only play one game and find it runs great on their current setup, others who are always playing the latest games and want them to perform well, and still others who play a game professionally/competitively and need every frame they can get. Some are happy at 1080p, others prefer 4k, and some may want to drive a high-end VR headset.
For some people, taking advantage of a new GPU might also require a new PSU or even a total platform upgrade.
To one person, a few hundred dollars is disposable income whereas to another it might represent their ability to eat that month.
These are all variables that will influence what is appropriate for one person or another.
If someone were to have ~$600 to spend, be in need of an upgrade to meet the requirements of an upcoming game they want to play at launch, and have a platform that will support it, I’m likely to recommend an RTX5070 to them.
If someone were to be happy enough with their current performance, I’m likely to recommend they wait and see what AMD puts out - or potentially even longer.
Personally, I’ve always waited until a game I’m excited for performs poorly before upgrading.
I think at this point if you put a gun to my head and told me to either buy an Nvidia card or never play a video game again I'd get a lot more reading done.