NVIDIA GeForce RTX 50 Series “Blackwell” Graphics Cards Will Reportedly Not Include an AD104 Successor at Launch

Peter_Brosdahl

As we approach the one-year anniversary of the launch of its RTX 40 series, more rumors continue to surface about the NVIDIA GeForce RTX 50 series. It has been all but officially announced that the next generation of GPUs will be named after American mathematician David Blackwell. The latest rumors suggest that NVIDIA is, at least for now, planning to skip the x04 part, which is usually used for its most popular segment. The leaker known as "panzerlied" on Chiphell shared that the next lineup will omit the GPU commonly used in NVIDIA's x80, x70, and x70 Ti cards.

See full article...
 
Yeah, after seeing and number-crunching the 2070-to-4060 upgrade path... my suspicion is that the next-gen cards will see the high end get new chips and designs. The mid tier will get a more advanced process used to build more efficient versions of the previous 4080/4090, with a minor clock-speed bump, allowing a generational uplift over the 40 series while not throwing out what is still likely to be a very performant card design.

That's my expectation at least...

In effect, rebrand the 4080/4090 one tier down to fill that market, and use ACTUAL newer cards as the new head of the pack.
 
Wouldn't it be more profitable/efficient to create new cards that replace the 4080/4090 (and maybe the 4070 Ti)?

Rebranding would be useful for the 4070 (as a $400 5060), the 4060 Ti (as a $300 5050 Ti), and the 4060 (as a $220 5050).
 
Meaning they already know AMD is way behind. Why are they behind, though? Since I know nothing about these things, in my ignorance I blame them for not going big enough, whatever that could mean.
 
That would be my assumption as well. The relatively older process node may save them some money on production, but costs are generally per mm², and those older high-end chips are absolutely massive compared to what newer technology on newer nodes can do. That's how cost per unit of performance is typically brought down generation over generation.
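To put rough numbers on the per-mm² point, here's a back-of-envelope sketch; the wafer prices and die areas are assumptions for illustration, not actual foundry or NVIDIA figures, and it ignores edge loss and yield.

```python
import math

# Rough illustration: per-die cost tracks die area, since a wafer has a fixed
# price and a fixed area to cut dies from. All numbers are assumed.

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Very rough gross die count, ignoring edge loss and scribe lines."""
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area_mm2 / die_area_mm2)

def cost_per_die(wafer_cost_usd, die_area_mm2):
    return wafer_cost_usd / gross_dies_per_wafer(die_area_mm2)

# Assumed: ~600 mm^2 die on a $10,000 (older-node) wafer vs
#          ~380 mm^2 die on a $17,000 (newer-node) wafer.
print(round(cost_per_die(10_000, 600)))  # ~85 per gross die
print(round(cost_per_die(17_000, 380)))  # ~91 per gross die
```

Even with a much pricier wafer, the smaller die lands in roughly the same cost ballpark, which is the mechanism behind generational cost-per-performance gains.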
 
You are both correct. It would make more sense to make the old GPUs on the newer process node, especially if the success rate per wafer is better.
I'm thinking they keep the old design on the newer process node and the newer design is for the higher-end GPUs.
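On the "success rate per wafer" point, a crude Poisson yield model shows how die area and defect density trade off; the defect densities below are invented for illustration, not foundry data.

```python
import math

# Simple Poisson yield model: yield = exp(-D * A), where D is defect density
# (defects per mm^2) and A is die area (mm^2). Values are assumptions.

def poisson_yield(defect_density_per_mm2, die_area_mm2):
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

die_area = 380  # mm^2, assumed mid-size GPU die
print(poisson_yield(0.0007, die_area))  # mature node: ~0.77 -> ~77% good dies
print(poisson_yield(0.0015, die_area))  # newer node:  ~0.57 -> ~57% good dies
```

Bigger dies and higher defect densities both eat into the number of sellable chips per wafer, which is exactly the trade-off being weighed here.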
 
Yeah, but if you have the new design already, and it's just a matter of using fewer cores/cache and/or memory channels - why would you even bother to re-spin the old design on a new node?
 
Because I don't think the new design is going to be worth the design effort to de-core. They've already done that work and know what to make, unless the lower tiers are just higher-end parts with defects.
 
You are both correct. It would make more sense to make the old GPUs on the newer process node, especially if the success rate per wafer is better.
I'm thinking they keep the old design on the newer process node and the newer design is for the higher-end GPUs.
It's certainly not that simple.
A design has to have a significant amount of rework (read: cost) before it can be produced on a different process (even between similar processes/nodes from different vendors). You can't just take something that was on Samsung 8nm and copy-paste it to TSMC 7nm without a lot of time, effort, and cost. That time, effort, and cost is likely better spent on an improved design.

Also, parts of the chip do not scale down in a linear fashion, so a big chip on 7nm doesn't mean it's half the size on 3/4nm. Examples of this, from what I have read, are SRAM (cache) and things like I/O interfaces (memory, PCIe, etc.). This is where AMD has played it smart: putting the parts of the chip that don't scale down well on an older, cheaper process, while producing the parts that do scale down well on newer, more expensive processes, getting the performance/density/power benefits and optimizing cost.
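A made-up example of that non-linear scaling (the area split and shrink factors below are assumptions, not measured values):

```python
# Assumed breakdown of a ~600 mm^2 die and assumed node-to-node shrink factors.
# Logic scales well; SRAM and I/O barely scale, so the whole die shrinks far
# less than the headline logic-scaling number suggests.

old_die_mm2 = {"logic": 350.0, "sram": 150.0, "io": 100.0}  # assumed split
shrink      = {"logic": 0.6,   "sram": 0.95,  "io": 1.0}    # assumed factors

new_die_mm2 = {part: area * shrink[part] for part, area in old_die_mm2.items()}

print(sum(old_die_mm2.values()))  # 600.0 mm^2 on the old node
print(sum(new_die_mm2.values()))  # 452.5 mm^2 on the new node (~0.75x, not 0.6x)
```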

Costs per wafer are rising significantly as we move to newer processes/nodes, bucking the trend we have seen in the past. I think there will be a finer balance companies have to strike with their designs to maximize profit.
 