NVIDIA’s 7 Nm Ampere GPUs Will “Boost Performance by 50% and Halve Power Consumption”

Tsing

The FPS Review
Staff member
According to a client note from one of Taiwan's biggest financial groups (Yuanta Securities Investment Consulting), NVIDIA's next-generation "Ampere" GPUs will offer incredible performance at half the power cost.

Yuanta claims that the transition to 7-nanometer technology has allowed NVIDIA to produce GPUs that are 50 percent faster (than Turing, presumably) "while halving power consumption."

NVIDIA will reportedly launch its first Ampere cards in the second half of this year.

Ampere fits Nvidia's roughly two-year cadence for launching a new GPU microarchitecture. If Yuanta's time frame is accurate, Nvidia will likely launch Ampere at the annual SIGGRAPH conference, which is held in August.
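Taken at face value, the two rumored figures compound: 1.5x the performance at 0.5x the power would imply a 3x jump in performance per watt. A quick back-of-the-envelope check (the numbers here are the rumored claims from the Yuanta note, not confirmed specs):

```python
# Sanity check on the rumored Ampere figures: +50% performance,
# half the power consumption, both relative to Turing (rumored, unconfirmed).
turing_perf = 1.0    # Turing baseline performance
turing_power = 1.0   # Turing baseline power draw

ampere_perf = turing_perf * 1.5     # "50 percent faster"
ampere_power = turing_power * 0.5   # "while halving power consumption"

# Performance per watt, Ampere relative to Turing
perf_per_watt_gain = (ampere_perf / ampere_power) / (turing_perf / turing_power)
print(f"Implied perf-per-watt improvement: {perf_per_watt_gain:.1f}x")  # 3.0x
```

A 3x generational perf-per-watt gain would be well beyond typical node-shrink improvements, which is part of why the claim invites skepticism below.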
 
Yeah, sounds like more shareholder bluster with no real data to back it up. I have to view everything nVidia announces through the lens of a publicly traded company without any significant competitors.

Maybe it will be awesome, but for a product that isn't even expected to be introduced until August, you can say a lot of things now without fear of being wrong, since nothing will land in the hands of reviewers for months. And these are only two important metrics, hinted at in the total absence of possibly the most important one: price.
 
I agree. Usually we see 20%-30% at most. If true, it'll be awesome to see, but hopefully it doesn't come with a 50% price increase as well.
 
I could use something faster than my 2080 for 1440p at 144 Hz, but not at insane prices. We'll see what's what soon enough; it would be nice to see some company advance at a good pace with little to no competition.
 
I agree. Usually we see 20%-30% at most. If true, it'll be awesome to see, but hopefully it doesn't come with a 50% price increase as well.
Wasn’t Maxwell a huge leap over Kepler? I’m pretty sure we saw 50%+ gains when comparing the 980 to the 780 and the 980 Ti to the 780 Ti.
 
Wasn’t Maxwell a huge leap over Kepler? I’m pretty sure we saw 50%+ gains when comparing the 980 to the 780 and the 980 Ti to the 780 Ti.

That was also during a time when AMD had the performance crown: GCN was new and very threatening, AMD was running wild in the lower tiers with Southern Islands, and nVidia hadn’t yet asserted their total dominance (or perhaps AMD hadn’t yet gotten stuck in the GCN rut). That dominance would come to pass with Maxwell and be cemented with Pascal.

And then nVidia largely glossed over Volta and gave us the anemic all-RTX emphasis of Turing.

Even if these numbers were true, nVidia would be idiotic to release them as-is. They are only competing with themselves... so either the prices go through the roof (because if you're 50% faster than a 2080 Ti, you can afford to charge 50+% more than the current cost of a 2080 Ti), or you pull an Intel and dribble the tech out over a long period of time, with each release just fast enough to drive an upgrade cycle, but only just.
 
Even if these numbers were true, nVidia would be idiotic to release them as-is. They are only competing with themselves... so either the prices go through the roof (because if you're 50% faster than a 2080 Ti, you can afford to charge 50+% more than the current cost of a 2080 Ti), or you pull an Intel and dribble the tech out over a long period of time, with each release just fast enough to drive an upgrade cycle, but only just.
The only segment where nvidia is competing with themselves is the super high-end $700+ segment.

The largest benefit of such an improvement to ray tracing performance is going to be on the low-to-mid end, and nvidia wants those cards to be extremely attractive to buyers.

The largest question is going to be whether they can/will implement RTX on the next entry-level gaming card (whatever the next 1650 will be called).
 
The only segment where nvidia is competing with themselves is the super high-end $700+ segment.

The largest benefit of such an improvement to ray tracing performance is going to be on the low-to-mid end, and nvidia wants those cards to be extremely attractive to buyers.

The largest question is going to be whether they can/will implement RTX on the next entry-level gaming card (whatever the next 1650 will be called).

That is something I had not considered, and it's entirely plausible: a 50+% ray tracing improvement, with the raster improvement undefined...
 
Nvidia wants to make AMD spend as much time tweaking their next card release as they possibly can. Announcements like this are built to make your competitor double-check themselves and delay release. Let's see what time shows, though. I'll buy the best one for gaming at a price point I like.
 
It could be 50% faster, assuming they're using the smaller node to add 50% more transistors. It could also be 50% faster at something specific (e.g., ray tracing). Both 50% faster and 30% lower power in a general sense seems unlikely.
 
It could be 50% faster, assuming they're using the smaller node to add 50% more transistors. It could also be 50% faster at something specific (e.g., ray tracing). Both 50% faster and 30% lower power in a general sense seems unlikely.

I'm betting it's more of a 50%-more-transistor-budget type of thing. What concerns me is the cost.
 
^^Why not both? 50% performance increase for 100% cost increase. ;)
 