Godfall RX 6900 XT vs RTX 3090 FSR Ray Tracing Performance

Speaking strictly technically, I think Nvidia clearly has the stronger hardware solution. But AMD has *a* solution now, and it's prominently featured in both major consoles (which seem to be selling like hotcakes, even though no one can get their hands on them).

Speaking personally - RT just doesn't excite me. It's one of those technologies where I can see a difference if I really look at static screenshots, but when I'm actually playing a game, the only appreciable difference between RT on and off is the framerate hit.

Better ray tracing simply isn't a consideration in whether I buy a piece of hardware, or a game. Well, let me take a step back - because that assumes you even have a choice in hardware. Right now, it's take what you can get with respect to hardware.

Until we get more titles where raytracing is either required or makes a significant impact, I'd expect it to remain a parlor trick. And even if/when it does catch on, I expect it to become something ubiquitously supported in the background, like physics libraries are today. No one cares which hardware can run physics any better than the other now, but 20 years ago we sure geeked out about it.
I have the opposite experience: after playing games with e.g. DXR GI, my brain gets "annoyed" when observing plain SSAO effects in other games...something is "off" (as in the lighting looks more fake).
I think Digital Foundry kinda summed it up in e.g. this video:

A bit more than a "parlor trick".
 
If you think consoles will "hold back" DXR implementation, I suspect you have misunderstood the benefits/scalability of implementing DXR in games.
I think a lot of us still have flashbacks to being stuck at DX9 forever because that console generation stretched on for so long. Sure, DX10 and 11 did come about, but games rarely took advantage of them because the consoles couldn't.
 
To each their own - it doesn’t make the game better for me.
Unless you set all graphical settings to LOW when you play, I am going to ignore your subjective non-argument...feel free to engage in a technical debate, but "I like Pong-style graphics" has no relevance to ray tracing or the performance gap (and why that gap is there) between vendors in DXR 🥳

I think a lot of us still have flashbacks to being stuck at DX9 forever because that console generation stretched on for so long. Sure, DX10 and 11 did come about, but games rarely took advantage of them because the consoles couldn't.
I remember the DX10 launch quite well...hardware arrived way before any games.
The G80 came, ran supreme in DX9 games, and after a while a lot of DX10 games came out (DX10's adoption rate was slower than DXR's) and it was supreme in DX10 too.

DX11 was a bit more "meh", but the multi-threaded driver situation did give AMD some trouble.

DX12 was more "huh"...until DXR hit. Before DXR, DX12 meant lower performance compared to DX11, goodbye to multi-GPU support, and a low-level codepath that was certainly not for every developer.
DXR was the saving grace for DX12...hence DXR's adoption has been better than DX10's was.

And here we are...where games do this:
[attached screenshot]

DXR on PC, nothing on console...this is not a novel trend...this generation of consoles is too weak for a full DXR implementation...and for once, developers are not holding back DXR to "spare the consoles"...(except some shady business from the Unreal Engine 5 team trying to hide the performance gap between PC and consoles, aka "politics").
 
Unless you set all graphical settings to LOW when you play, I am going to ignore your subjective non-argument...feel free to engage in a technical debate, but "I like Pong-style graphics" has no relevance to ray tracing or the performance gap (and why that gap is there) between vendors in DXR
I don't think that's exactly what I meant there. I suspect you knew that, but that's ok.

There is nothing in Cyberpunk's RT implementation, or any other game that I'm aware of, that makes playing the game any different with it on or off - apart from it being more shiny. It's like the early days of PhysX, where if you had it, Batman's cape would fly a bit better and explosions were spectacular, but it didn't change the gameplay. Then, eventually, we got games that relied heavily on physics engines and wouldn't exist without them - Kerbal, GTA5, Portal, ... it's a pretty long list today and games almost universally have some form of hardware-accelerated physics in their implementation.

We are a ways from that level with RT - that's what I was trying to convey. Today's RT is all just shinier metal and mud-puddle reflections; it has a ways to go to get to something impactful. Not... Pong?
 
I'd also say - Physics is an example of a tech that caught on, and caught on pretty big. It became pretty standard in every game. Shaders are another.

But you have a lot of tech that just ended up as bullet points on the side of the box so that one manufacturer could claim an edge over another. How many proprietary implementations of anti-aliasing have we seen come and go? I don't think anyone will argue against the merits of AA, but there have been so many implementations of it, many with vendor lock-in, that have come and gone. And now that's extending to upscaling algorithms. Same thing with GPU-accelerated sound - it was a thing for a while, if anyone recalls. You can probably put SLI/Crossfire on that list now.

Raytracing may be the holy grail of rendering, but that doesn't mean today is the day for it to emerge, or that it will displace rasterization entirely in the rendering pipeline. I don't think it will fail entirely; my personal opinion is that it will follow a similar path as physics and eventually just become transparent, without completely displacing rasterization. I just think we will stop ranking video cards on RT merits alone -- either it becomes transparent enough that it gets rolled into the overall performance metric (akin to shaders), or it fades into something that doesn't heavily leverage the hardware (which is the route physics took).

Today - my opinion - raytracing performance is just another marketing point. It makes games prettier, but it doesn't change gameplay. If one card performs better with RT than another, it doesn't change your ability to play a game, just which options you turn on. RT support is hardly universal across games, so even the titles where it can be used are fairly selective. If all other things are equal, sure, you'd rather have better RT performance than worse. But I wouldn't sacrifice rasterization performance, or VRAM capacity, or price to get better RT performance. Even if RT does turn out to be the Next Big Thing, the video cards that are out once it gets there will be vastly superior at it, which keeps RT performance today a marginal metric at best.

At least for me. Nothing to do at all with Pong here.
 
Nice article. I have been looking for a recent apples-to-apples comparison since I have been shopping for a new GPU lately. I am leaning towards a 6900 now, mostly because of the ~$1000 price gap between in-stock 6900s and 3080 Ti/3090s, of course. But I have also been thinking about testing team red again - I believe my previous card from them had ATi branding.
 
I disagree.
Godfall only uses DXR for ray-traced shadows, meaning it is a DXR-lite game, where AMD does better than it does in games using multiple DXR effects.
If you look at titles doing a LOT more DXR...the pattern is very clear.
NVIDIA has a major performance benefit when multiple DXR effects are used at the same time (Shadows, reflections, Lighting, GI) over AMD.
Games like Cyberpunk 2077, Metro Exodus and Control, for example.

If anything, this shows AMD's performance tanks when it's not doing "DXR-lite".
For RT there are also a lot of compute operations, where Nvidia's doubled FP32 per CUDA core gives a great benefit if it can be kept fed. AMD's advantage comes when DXR 1.1 is used and the Infinity Cache is used effectively - as in doing the work of multiple shaders within a single shader, keeping instructions/assets local in cache for the compute operations and keeping the shaders fully used and busy. Nvidia's biggest advantage is FP32 capability, while AMD's is in keeping the shaders well fed with instructions (mostly compute) for RT via the Infinity Cache. Comparing older RT games to newer, more AMD-aware games shows that AMD hardware can do RT better than Turing by a significant margin. RT is not just casting rays, BVH traversal, etc.; once the hits/data are determined, it also comes down to how well the architecture computes the final output.
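To make the "casting rays, BVH etc." part concrete, here is a minimal, illustrative C++ sketch (my own, not from either vendor's stack) of the axis-aligned box test that BVH traversal performs at every node. The architectural difference being argued about is how much of this loop is fixed-function: Turing/Ampere RT cores run the whole traversal in hardware, while RDNA2's ray accelerators handle only the box/triangle intersection tests, with the traversal loop itself running in shader code.

```cpp
// Illustrative slab test only - a stand-in for the per-node work a BVH
// traversal does, not real driver or engine code.
#include <algorithm>
#include <cstdio>
#include <limits>
#include <utility>

struct Ray  { float o[3]; float inv_d[3]; };   // origin + precomputed 1/direction
struct AABB { float lo[3]; float hi[3]; };     // axis-aligned bounding box

// Returns true if the ray's positive half-line overlaps the box.
bool hits(const Ray& r, const AABB& b) {
    float t0 = 0.0f;
    float t1 = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (b.lo[axis] - r.o[axis]) * r.inv_d[axis];
        float tFar  = (b.hi[axis] - r.o[axis]) * r.inv_d[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;  // slab intervals don't overlap: miss
    }
    return true;
}

int main() {
    Ray  r = { {0.0f, 0.0f, 0.0f}, {1.0f, 1.0f / 0.5f, 1.0f / 0.25f} };
    AABB b = { {1.0f, 1.0f, 1.0f}, {2.0f, 2.0f, 2.0f} };
    std::printf("hit: %s\n", hits(r, b) ? "yes" : "no");
    return 0;
}
```

A GPU performs tests like this (plus ray/triangle tests) an enormous number of times per frame during traversal, which is why how much of that work lives in dedicated hardware versus in the shader ALUs matters so much for the size of the RT hit.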
 
Comparing older RT games to newer, more AMD-aware games shows that AMD hardware can do RT better than Turing by a significant margin.
You'll want to substantiate this - do you have a reference, hopefully more than one, that clearly shows the differences?

Also, it was clear from the outset with Turing that RT was a compromise. Old node and so on, and Nvidia took heat for that.

But we're not comparing RDNA2 to Turing - we're comparing it to Ampere.
 
You'll want to substantiate this - do you have a reference, hopefully more than one, that clearly shows the differences?

Also, it was clear from the outset with Turing that RT was a compromise. Old node and so on, and Nvidia took heat for that.

But we're not comparing RDNA2 to Turing - we're comparing it to Ampere.
Watch the whole video; if you're lazy, about 5 minutes in should give the relevant detail dealing with AMD hardware (you have to watch it on YouTube):


If you want an even drier version, straight from AMD's mouth, on how to program specifically for a given hardware base, here you go. Note that Nvidia also provides tools, best practices, etc. that work best with Nvidia RT hardware. Hence RT performance is also very much dependent on how well the game is coded for the hardware. Older RT games are in general not coded best for AMD hardware, hence a much larger discrepancy. Nvidia also benefits from DXR 1.1 usage in the newer games that use it.

 
Not to discount the veracity of the videos, but they are from AMD and not from an independent tester. I'd have the same question if they were from Nvidia, or now that Intel is getting into the game, from Intel too.
 
Not to discount the veracity of the videos, but they are from AMD and not from an independent tester. I'd have the same question if they were from Nvidia, or now that Intel is getting into the game, from Intel too.
You expect an independent tester to rewrite code to see if it actually makes a difference? Not likely. Developers would be putting up a fuss if what AMD (or Nvidia) promotes made zero difference. The performance difference using RT between Nvidia and AMD is closer with newer games that are more aware of the hardware -> more recent RT titles that are also optimized for the PS5 and/or Xbox Series X/S (similar RDNA 2 hardware). You don't have to take my word for it, just look at the performance differences. Both Nvidia and AMD take a hit, but now the hit to AMD is much less than it used to be with games written for DXR 1 favoring Nvidia.
 
Both Nvidia and AMD take a hit, but now the hit to AMD is much less than it used to be with games written for DXR 1 favoring Nvidia.
You do realize that the logic you're using here also implies that AMD's RDNA2 is simply not as capable of running DXR 1 as Nvidia's Ampere?
 
You do realize that the logic you're using here also implies that AMD's RDNA2 is simply not as capable of running DXR 1 as Nvidia's Ampere?
Exactly - implied and true. DXR 1 games will not effectively use the Infinity Cache, while DXR 1.1 games can, if programmed to use the RDNA2 hardware.
 
For RT there are also a lot of compute operations, where Nvidia's doubled FP32 per CUDA core gives a great benefit if it can be kept fed. AMD's advantage comes when DXR 1.1 is used and the Infinity Cache is used effectively - as in doing the work of multiple shaders within a single shader, keeping instructions/assets local in cache for the compute operations and keeping the shaders fully used and busy. Nvidia's biggest advantage is FP32 capability, while AMD's is in keeping the shaders well fed with instructions (mostly compute) for RT via the Infinity Cache. Comparing older RT games to newer, more AMD-aware games shows that AMD hardware can do RT better than Turing by a significant margin. RT is not just casting rays, BVH traversal, etc.; once the hits/data are determined, it also comes down to how well the architecture computes the final output.

I would like to see a game that uses MORE than one DXR effect and follows the "pattern" you just pulled out of thin air.
Trend:
The more DXR effects in-game, the more AMD suffers compared to NVIDIA.

DXR 1.1 was developed to make up for the deficiencies (lack of hardware BVH traversal acceleration) of the RDNA2/console GPUs...not to be "better" than DXR 1. (DXR 1.1 inline ray tracing does not use separate dynamic shaders or shader tables, etc.; DXR 1.1 hides access to the acceleration structure (data structure traversal, box/triangle intersection, etc.) compared to DXR 1.)
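For anyone following along, the DXR 1.0 vs. 1.1 split being argued about is something an engine detects with a single capability query. Below is a minimal C++/D3D12 sketch of my own (first adapter only, error handling trimmed, standard Windows SDK headers assumed; link d3d12.lib and dxgi.lib) that checks whether the driver exposes only Tier 1.0 (TraceRay through raytracing pipeline state objects and shader tables) or also Tier 1.1 (which adds inline RayQuery).

```cpp
// Sketch: query the D3D12 raytracing tier so the engine can pick a code path.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;   // first GPU, for brevity

    ComPtr<ID3D12Device5> device;   // ID3D12Device5 is the first DXR-capable device interface
    if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1) {
        // DXR 1.1: inline ray tracing (RayQuery usable from regular shader stages),
        // no separate shader tables required for that path.
        std::printf("DXR 1.1 supported\n");
    } else if (opts5.RaytracingTier == D3D12_RAYTRACING_TIER_1_0) {
        // DXR 1.0 only: TraceRay via raytracing pipeline state objects + shader tables.
        std::printf("DXR 1.0 only\n");
    } else {
        std::printf("No hardware-accelerated DXR\n");
    }
    return 0;
}
```

Which tier path then runs better on a given GPU comes down to the hardware-acceleration and shader-coding points debated above.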
 
I don't think that's exactly what I meant there. I suspect you knew that, but that's ok.

There is nothing in Cyberpunk's RT implementation, or any other game that I'm aware of, that makes playing the game any different with it on or off - apart from it being more shiny. It's like the early days of PhysX, where if you had it, Batman's cape would fly a bit better and explosions were spectacular, but it didn't change the gameplay. Then, eventually, we got games that relied heavily on physics engines and wouldn't exist without them - Kerbal, GTA5, Portal, ... it's a pretty long list today and games almost universally have some form of hardware-accelerated physics in their implementation.

We are a ways from that level with RT - that's what I was trying to convey. Today's RT is all just shinier metal and mud-puddle reflections; it has a ways to go to get to something impactful. Not... Pong?

That same flawed argument can be made about:
T&L
AA
AF
Basically ANY image quality uplift can be "addressed" the same way, and that's a fallacious argument against the technology and the benefits it brings to the table.

And I have used reflections in windows in-game to spot hostiles...I call that a change to gameplay, thanks to accurate world-space reflections (not possible with SSR).
 