AMD Radeon RX 6800 XT’s Ray-Tracing Performance Falls Significantly Short of NVIDIA RTX, According to Early Tests

The leaks show it outperforming the 2080 Ti but falling short of the 3080... If indicative of actual performance, it's about in line with my expectations... OK for a first try, but not as good as Nvidia's second try. Will have to see how it matures a bit and real games get tested. I don't see how it would lower IQ for a given setting... If you're casting out the same rays using the same API, you should be getting the same results. If you meant you have to lower quality settings to get the same speeds, that would make sense, but it's more of an either/or, not both.
I'm mostly excited to see how it does in games that use it for lighting as that's when RT makes the biggest difference to me.


IMO Nvidia's 2nd try is not much better, if at all, compared with its 1st try, seeing that games take pretty much the same performance hit when enabling RT. Same goes for DLSS, which gives about the same performance gains.

I guess rumors about AMD's RT performance/IQ are based on the crappy RT demo AMD showed a few months ago. But based on what they showed in the presentation, I'd say it looks about the same as RTX.
 
The performance penalty is always going to be similar. NVIDIA have doubled the ray tracing performance over their first try.
 
My math doesn't compute. If RT performance has improved by 2x, shouldn't the performance hit be much lower? For example, in Control the RTX 2080 Ti takes about a 40% performance drop when enabling RT effects, which is about the same as the RTX 3070 or 3080.
 
You are correct; the only way it should remain at 40% is if the 3080 were 2x the speed of the 2080 Ti, which it isn't. Otherwise the % difference should be better, although really not by as much as you're probably thinking.

Example:
Game runs at 80fps without RT and -40% (48fps) with RT.
This gives you a frame time of 12.5ms (1000ms/80fps) without RT and 20.833ms (1000ms/48fps) with RT. That equals 8.33ms of time taken by RT.

If you have a 3080 that's 40% faster, we'll say it runs at 112fps without RT. If RT were twice as fast, it should only take ~4.17ms (8.33ms/2) for RT.
So, frame time without RT would be 8.93ms (1000ms/112fps)... add in the RT time and we get 8.93 + 4.17 = 13.1ms, which works out to (1000/13.1) ~76fps... That's about 32% slower... even though RT time was cut in half. So while it shouldn't be 40% slower, it isn't as big a change as you're thinking (double the RT performance doesn't mean a 20% performance hit). Since RT was only a small slice of the actual frame time, doubling it doesn't make as much of a difference as doubling raster performance would.

These numbers are similar to what you might see (maybe a little high, actually; the longer raster takes, the less difference RT makes), so you can see that even doubling RT performance still leaves us with a >30% hit to performance in this made-up example.
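
If it helps, here's a minimal Python sketch of that same arithmetic, using the made-up numbers from above (80fps raster, 8.33ms RT cost, and a hypothetical card that's 40% faster at raster with RT twice as fast):

    # Frame-time math from the example above (all numbers are made up).
    def fps_with_rt(raster_fps, rt_ms):
        # Convert raster fps to frame time, add the RT cost, convert back to fps.
        raster_ms = 1000.0 / raster_fps
        return 1000.0 / (raster_ms + rt_ms)

    old = fps_with_rt(80, 8.33)        # ~48 fps, a ~40% hit on the slower card
    new = fps_with_rt(112, 8.33 / 2)   # ~76 fps on the faster card, with RT cost halved

    print(round(old), round(new))
    print(f"hit on the faster card: {1 - new / 112:.0%}")   # ~32%

Even with the RT cost halved, the hit only drops from ~40% to ~32%, because RT is a small slice of the total frame time.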

Anyways, hope that wasn't too confusing :) Made sense when I started writing it, lol.
 
It was :unsure::unsure:, but no need to overthink it right now.
On a side note, it seems RTX is quite a bit faster in Blender with Ampere vs. Turing... Food for thought.
 
As a consumer I want a card that meets a few criteria.

1. 1440p at 75Hz with games on maximum settings.
2. Future improvements with updates and learning.
3. No hardware limitations as games use more and more video memory.
4. Actually have a product on store shelves to buy.


If AMD can have stock on shelves to meet buyer demand, they will need to write a love letter to Nvidia for all of the market foreplay getting consumers wanting the newer hardware badly. Because come the review releases I'll be looking to buy, and if AMD is on par... I will buy them... provided they are on store shelves.
 