NVIDIA GeForce RTX 3080 Ti Rumored With Same CUDA Core Count As GeForce RTX 3090 (10,496) and 20 GB of GDDR6X VRAM

Tsing

The FPS Review
Staff member
Image: NVIDIA

Serial leaker kopite7kimi has returned with some new alleged details pertaining to NVIDIA’s rumored GeForce RTX 3080 Ti, which could be fast-tracked as a response to AMD’s surprisingly beastly Radeon RX 6000 Series.

According to kopite7kimi, the GeForce RTX 3080 Ti will boast a CUDA Core count of 10,496, which happens to be the same impressive amount as the GeForce RTX 3090.

It falls short of the BFGPU in the VRAM department, however: instead of 24 GB of GDDR6X, the GeForce RTX 3080 Ti will only have 20 GB to work with.

kopite7kimi also claims that the GeForce RTX 3080 Ti will mirror...

Continue reading...
 
NVIDIA is nowhere to be found for anything until 2021.

But these so-called "leaks" (same for Intel) usually come right before or right after a big AMD announcement or launch.

Interesting!
 
So, it'll have the same CUDA core count but less bandwidth... so it'll be less than 10% faster for sure :). What price point do they slot this into? Above or below the 6900 XT? And even if this is "rushed" through, stock will be abysmal.
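For a rough sense of the bandwidth gap being described, here is a back-of-the-envelope GDDR6X comparison; the 320-bit bus and 19 Gbps figures for the rumored 3080 Ti are assumptions inferred only from the 20 GB capacity (10 x 2 GB chips), not anything the leak confirms:

# Sketch of the bandwidth math, not confirmed specs:
# GB/s = data rate in Gbps per pin * bus width in bits / 8
def gddr6x_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(gddr6x_bandwidth_gb_s(19.5, 384))  # RTX 3090 (known spec): 936.0 GB/s
print(gddr6x_bandwidth_gb_s(19.0, 320))  # hypothetical 20 GB 3080 Ti: 760.0 GB/s, roughly 19% less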
 
8 new cards from Nvidia... Very very soon.
 
This is more or less the card that many are waiting for; mostly due to the stock availability of current releases, of course, but personally speaking, this is exactly the card I want.
 
So the RTX 3080 Ti was "confirmed," then cancelled, and now it's back again?
 
No, the 3080 with 20 GB of VRAM was cancelled, and now supposedly a Ti will be announced.
Some rumors claimed the 20 GB version was the Ti.

Anyway, they are just rumors.
 
What's the point of this "release" if its availability isn't going to be until Q2 2021?? Does anyone actually THINK they'll be able to get their hands on one at release?
 
VRAM anxiety hitting its peak.

If only NVIDIA had been smart enough to understand that no one cares about the RAM specs: just put a lot in there!
 
Is the 3000 series bandwidth-starved, then?
Should have gone with HBM or something.
 
Is the 3000 series bandwidth-starved, then?
Should have gone with HBM or something.
Given that memory OC doesn't increase performance much, if at all, I'm gonna say no.
 
NVIDIA is nowhere to be found for anything until 2021.

But these so-called "leaks" (same for Intel) usually come right before or right after a big AMD announcement or launch.

Interesting!

NVIDIA has been known to pull this sort of thing before. It's not enough to have the better product. Business is a chess game and NVIDIA is very good at it.

I also want to say, I actually believe this rumor. I normally wait and see with these things, but I've been thinking this would happen for quite some time now. A lot of the rumors we've heard about in the GPU industry didn't really make sense by themselves. Add them together and they pointed to an inevitable RTX 3080 Ti with 20GB of VRAM.
 
Is the 3000 series bandwidth-starved, then?
Should have gone with HBM or something.

No. The RTX 3080 is a cut-down GPU with only 10 GB of GDDR6X RAM to reduce costs so that it can be sold for $699 at a profit.
 
The bandwidth wouldn't be any higher with HBM2. It would actually be lower.
Somehow I didn't realize that we had hit that point; but also, HBM would increase cost dramatically, if only because the interposer would have to be massive. AMD seems to have learned that lesson the hard way...
 
Somehow I didn't realize that we had hit that point; but also, HBM would increase cost dramatically, if only because the interposer would have to be massive. AMD seems to have learned that lesson the hard way...
The size is definitely an issue, especially with these 800+ mm² chips. HBM requires more stacks to increase bandwidth, and the only way to beat the bandwidth of GDDR6X is to use four stacks. With the current version of HBM2 that would get you about 1200 GB/s of bandwidth. Three stacks would get you about 920 GB/s, which is just short of the 936 GB/s the 3090 has with GDDR6X.
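As a rough cross-check on those figures, assuming roughly 2.4 Gbps per pin on a 1024-bit interface per stack (an HBM2E-class number not stated in the post above), the per-stack math works out like this:

# Sketch under the assumptions above: each HBM stack has a 1024-bit interface at ~2.4 Gbps per pin.
def hbm_bandwidth_gb_s(stacks, data_rate_gbps=2.4, bus_width_bits=1024):
    return stacks * data_rate_gbps * bus_width_bits / 8

print(hbm_bandwidth_gb_s(3))  # ~921.6 GB/s, just short of the 3090's 936 GB/s with GDDR6X
print(hbm_bandwidth_gb_s(4))  # ~1228.8 GB/s, the ~1200 GB/s figure mentioned above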
 
I think that the bandwidth would probably be there, but with that die size, the risk just wouldn't be worth it.

There's a point where HBM should reduce costs: compact packages, simplified power and cooling planes, and so on. But I'm guessing that the lower performance simply wouldn't be acceptable.

The best use of HBM so far for graphics, IMO, was the module that AMD built for an Intel CPU package. I'm going to have to go look up the name of that now, since they only did it once, but such a solution, perhaps slightly scaled up, would make the perfect 'APU' product.

Though given how AMD's drivers, middleware, and third-party software compatibility are going, the best hope is probably for an Intel GPU, unfortunately.
 