NVIDIA Announces GeForce RTX 4090 ($1,599) and GeForce RTX 4080 16/12 GB ($1,199/$899) Graphics Cards: Up to 4X Faster, Powered by 3rd Gen RTX Arch...

I mentioned earlier that we might see another card more powerful than this one later on, and it turns out the workstation version was announced by PNY yesterday as well. I'm not sure if we're going to post about it today, so I won't go into too much detail, but it does feature an AD102 GPU similar to the 4090's, except it could be a fully unlocked chip with more cores. There's already speculation that NVIDIA is planning to release a gaming version down the road. My guess is in about a year or so, unless AMD releases something that can directly compete with the 4090 as-is, in which case we'd see either a price cut or another release at that time.
 
I honestly think I'll be passing on this gen. It sounds like this whole release will be just as bad as, or worse than, the 30 series launch.
I am passing, for now, but I'm also posting something right now that might catch your interest.

The nice part about having one of the best cards of the previous gen is that you end up with the fastest card for that gen. Grab the AIB OC version and you get even more, which closes the gap that much further when the next generation comes out, so you can relax and watch the whole launch rollout.
 
I think I can use some more horsepower for my 3440x1440 144 Hz monitor, but I might need a bigger case with all these behemoths.

Also, it seems we Europeans are taking it up the *** as prices are 22% higher than in the US; I may have to skip this one or hope to grab a cheap 3090 Ti.
 
A thought I had was that IF Nvidia's performance claims hold up across the board (and are not some sort of distortion that depends on DLSS 3 or something like that), then this really is a bigger performance increase than we have been used to from generation to generation over the last decade or so.

Maybe the price increases are justified? We are used to getting at most 30% from one generation to the next for the same level of card, but they are talking about 2x performance increases on the 4080 compared to the 3080 Ti.

If true, and not all smoke and mirrors, that is actually quite a big deal.

I'd like to see apples to apples benchmarks though.

A wide variety of titles with DLSS disabled on the 30xx series, compared to the same titles on the 40xx series, also with DLSS disabled.

Then I'd also like to see a raster graphics vs raster graphics comparison. Not every title uses RT to the gills.

My trust level is pretty low when it comes to vendor benchmarks at this point, especially Nvidia, but if it holds up, and the claims are not based on DLSS and RT smoke and mirrors, I could be convinced to spend the money.

The DLSS3 frame insertion does not excite me though. It will almost certainly be accompanied by input lag.

As always, the proof is in the pudding. I'll make up my mind when I see real data.
 
I was wrong. The RTX 30 series and the RTX 40 series both have Optical Flow Accelerators; it is just that the RTX 40 series unit has 2x the performance and 2x the accuracy. It is that performance and accuracy improvement that NVIDIA is using to enable DLSS 3 Frame Generation. Technically, it could be supported on the RTX 20 series, but it would take a lot of software and development work, so the end result is that it is locked to the RTX 40 series at the moment.
 
The DLSS3 frame insertion does not excite me though. It will almost certainly be accompanied by input lag.

NVIDIA is using its Reflex technology with DLSS 3 Frame Generation, in essence removing the render queue from the pipeline to reduce input lag.
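
Roughly, every frame already sitting in the render-ahead queue adds about one frame time between your input being sampled and the result hitting the screen, which is why draining that queue helps. A back-of-the-envelope sketch of that relationship (my own simplification, not NVIDIA's actual pipeline):

```python
# Back-of-the-envelope model of render-queue latency (my simplification, not
# NVIDIA's pipeline): each frame already waiting in the queue adds roughly one
# frame time between the input being sampled and the result reaching the display.

def input_latency_ms(fps: float, queued_frames: int) -> float:
    """Approximate input-to-display latency for a given render queue depth."""
    frame_time = 1000.0 / fps
    return frame_time * (1 + queued_frames)  # 1 frame to render + frames already queued

for depth in (3, 1, 0):  # deep queue, typical default, fully drained queue
    print(f"60 fps with {depth} queued frames: ~{input_latency_ms(60, depth):.0f} ms")
```

At 60 fps that works out to roughly 67 ms, 33 ms, and 17 ms respectively, so emptying the queue does move the needle on paper.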
 
I was wrong. The RTX 30 series and the RTX 40 series both have Optical Flow Accelerators; it is just that the RTX 40 series unit has 2x the performance and 2x the accuracy. It is that performance and accuracy improvement that NVIDIA is using to enable DLSS 3 Frame Generation. Technically, it could be supported on the RTX 20 series, but it would take a lot of software and development work, so the end result is that it is locked to the RTX 40 series at the moment.
That was my understanding too, hence my expectation that it would work on Ampere even if it was not as fast.
Unfortunately that probably means there will be no more improvements for DLSS 2 either.
 
That was my understanding too, hence my expectation that it would work on Ampere even if it was not as fast.
Unfortunately that probably means there will be no more improvements for DLSS 2 either.

In a meeting I just attended, they stated that work will continue on improving DLSS 2 (super-resolution) image quality. DLSS 2 is still part of the RTX 40 series, since those cards can run it of course, so DLSS 2 improvements will benefit image quality on the RTX 20, 30, and 40 series. In games, with RTX 40 series cards, you'll still be able to set the DLSS quality mode as usual. What will be new is a toggle to turn on Frame Generation, if the game supports it, which is DLSS 3.
 
I think I can use some more horsepower for my 3440x1440 144 Hz monitor, but I might need a bigger case with all these behemoths.

Also, it seems we Europeans are taking it up the *** as prices are 22% higher than in the US; I may have to skip this one or hope to grab a cheap 3090 Ti.

I paid 879€ for my Asus TUF 3080, and now the FE models are supposedly 1,469€ for the 16GB model or 1,099€ for the 12GB model. /faints
 
but they are talking about 2x performance increases
I just can't see that.

Maybe in some very specific circumstances with tricks (DLSS, etc.). But an across-the-board 100% speed increase without a corresponding increase in cores or power draw? Even accounting for a process node improvement, there just isn't enough in the specifications to justify an across-the-board 100% increase in performance unless you have ~miraculous~ architectural improvements, which is why I seriously doubt this claim.

25% - I would readily believe. 100% - not so much
 
The 12GB model being a different, lower-performing chip than the 16GB is also dirty pool, now that it's confirmed.
 
I was wrong. The RTX 30 series and the RTX 40 series both have Optical Flow Accelerators; it is just that the RTX 40 series unit has 2x the performance and 2x the accuracy. It is that performance and accuracy improvement that NVIDIA is using to enable DLSS 3 Frame Generation. Technically, it could be supported on the RTX 20 series, but it would take a lot of software and development work, so the end result is that it is locked to the RTX 40 series at the moment.
I recall a few months back, one of NVIDIA's researchers demonstrated this tech WORKING on a 2000 series card, so, yeah...

Good ole NVIDIA non-greasing here!
 
NVIDIA is using its Reflex technology with DLSS 3 Frame Generation, in essence removing the render queue from the pipeline to reduce input lag.

Yeah, that *reads/sounds* wonderful, huh?

Just like its DLSS 1.0 did with its perfect edge-sharpening pass filter. /s 😏
 
I just can't see that.

Maybe in some very specific circumstances with tricks (DLSS, etc.). But an across-the-board 100% speed increase without a corresponding increase in cores or power draw? Even accounting for a process node improvement, there just isn't enough in the specifications to justify an across-the-board 100% increase in performance unless you have ~miraculous~ architectural improvements, which is why I seriously doubt this claim.

25% - I would readily believe. 100% - not so much

Well, they are going from Samsung 8N to TSMC 4N. In theory that allows them to use 50% less power for the same performance, maybe even more, since these node names are less about gate size and more about marketing these days, and TSMC is generally regarded as performing better than Samsung at the same "number".

At the same time, the TDP is going from 350W for the 3080 Ti to 320W for the 16GB model of the 4080.

In an ideal world, the process shrink alone would predict a 50% improvement in performance per watt; combine that with the 8.5% reduction in wattage and you get a theoretical maximum performance increase of roughly 38%.
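
Spelling that math out, using the 1.5x performance-per-watt figure from the paragraph above (an idealized assumption, not a measurement) and the published TDPs:

```python
# Sanity check of the ~38% ceiling above. The 1.5x performance-per-watt number
# is the idealized assumption from the paragraph above, not a measurement.

perf_per_watt_gain = 1.5        # assumed uplift from Samsung 8N -> TSMC 4N
tdp_3080_ti = 350               # watts
tdp_4080_16gb = 320             # watts

theoretical_gain = perf_per_watt_gain * (tdp_4080_16gb / tdp_3080_ti)
print(f"Theoretical max uplift: {theoretical_gain - 1:.0%}")  # ~37%, i.e. the high-30s ceiling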

Unless - that is - something about the architecture fundamentally changed that allows it to perform better, but at this point, that kind of major advance seems less and less likely.

Also, while in the 32nm-and-earlier era performance generally scaled pretty linearly with power from one process node to the next, that scaling has gotten much worse as processes have shrunk, so the real improvement is likely well below that 38%.

So I agree, their 2x numbers are probably complete bullshit, leaning on their DLSS 3/AI bullshit to make them seem better than they are. I'm just kind of playing devil's advocate and being a proponent for seeing the real numbers.
 
NVIDIA is using its Reflex technology with DLSS 3 Frame Generation, in essence removing the render queue from the pipeline to reduce input lag.

My humble eyes will believe it when my humble eyes see it.

Predicting the next frame solely based on the current frame is going to be iffy at best. Usually you have to wait for the next frame, then interpolate between them, but if you do that, input lag is horrendous.

I expect something like this will invariably be choppy and awkward as the system regularly predicts the wrong future, and then snaps back to reality with the next fully rendered frame.
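
To illustrate the tradeoff I'm describing, here's a toy comparison (my own simplification, not anything NVIDIA has described about DLSS 3 internals) of interpolating between two real frames versus extrapolating from past motion only. The moment the motion changes, the extrapolated guess lands in the wrong place:

```python
# Toy illustration of the interpolation-vs-prediction tradeoff (my own
# simplification, not DLSS 3 internals): one object moving across a 1-D "screen".
# A generated in-between frame can be built two ways:
#   - interpolation: wait for the NEXT real frame, then blend between the two
#     (accurate, but the wait shows up as input lag)
#   - extrapolation: guess forward from past motion only
#     (no wait, but wrong the moment the motion changes)

positions = [0.0, 10.0, 20.0, 22.0]  # object position in each real frame;
                                     # it abruptly slows before the last one

for n in range(1, len(positions) - 1):
    prev, cur, nxt = positions[n - 1], positions[n], positions[n + 1]

    interpolated = cur + 0.5 * (nxt - cur)   # needs frame n+1 -> adds latency
    extrapolated = cur + 0.5 * (cur - prev)  # available immediately, may overshoot

    print(f"between frames {n} and {n + 1}: "
          f"interpolated={interpolated:.1f}  extrapolated={extrapolated:.1f}")
```

While the motion is steady, both approaches agree; as soon as the object slows, the extrapolated frame overshoots and has to snap back on the next real frame.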
 
which results in a theoretical max performance increase of ~38%.
There is also core count: the 4080 is at 9,728, while the 3080 Ti (since that was the TDP comparison) is at 10,240.

So there is a small drop in cores there too, 5% if my math isn't screwy. But I think nVidia is drawing all the comparisons against the 3080, non-Ti. There was a 12% increase in cores from the 3080, and at 320W, no increase in TDP (assuming they didn't change the definition like Intel does).
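
For anyone checking my math, here it is spelled out; the 8,704-core figure for the plain 3080 is the commonly published count, not something quoted above:

```python
# Core-count percentages spelled out. The 8,704 figure for the plain 3080 is
# the commonly published count, not something quoted in the post above.

cores = {"RTX 4080 16GB": 9728, "RTX 3080 Ti": 10240, "RTX 3080": 8704}

vs_3080_ti = cores["RTX 4080 16GB"] / cores["RTX 3080 Ti"] - 1
vs_3080 = cores["RTX 4080 16GB"] / cores["RTX 3080"] - 1

print(f"4080 16GB vs 3080 Ti: {vs_3080_ti:+.1%}")  # about -5%
print(f"4080 16GB vs 3080:    {vs_3080:+.1%}")     # about +12%
```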

But... I don't know if the jump from 8N to 4N is linear; that number has become a marketing number, so I wouldn't necessarily assume an across-the-board 50% reduction.

So even with all of that, I'd still more or less agree with the 38% figure as a theoretical max, which lines up with a roughly 25% real-world figure versus the 3080 non-Ti.

I'm not interested in purchasing another card from this generation, but I am extremely interested in seeing benchmarks.
 
My humble eyes will believe it when my humble eyes see it.

Predicting the next frame solely based on the current frame is going to be iffy at best. Usually you have to wait for the next frame, then interpolate between them, but if you do that, input lag is horrendous.

I expect something like this will invariably be choppy and awkward as the system regularly predicts the wrong future, and then snaps back to reality with the next fully rendered frame.
Yeah, I think it may do frame insertion to make VSync/VRR smoother; tech like that has been around forever and has been used a lot for motion blur reduction.

Seems like a big part of temporal antialiasing also deals with vector prediction, and yeah, with erratic visuals it can actually make things worse; but most visuals follow fairly predictable patterns, so it ~mostly~ works out to a net benefit. DLSS has done things like this since 1.0, and FSR has since 2.0, apart from just old-school temporal AA, which was introduced back in 2012.
 
Thinking about the price increase some more: it will be interesting to see what AMD does with their pricing. I fully expect them to come in ~just under~, like they did last generation, and then both companies will adjust via discounts and giveaways based on sales and margins.

I'm hoping for a race to the bottom.
 