Rumor: Cyberpunk 2077 to Gain NVIDIA Neural Radiance Caching Support for Faster Path Tracing

Tsing

The FPS Review
Staff member
CD PROJEKT RED is planning to boost the performance of Cyberpunk 2077's Ray Tracing: Overdrive Mode with a new algorithm from NVIDIA dubbed Neural Radiance Caching, according to the latest rumors shared by the CapFrameX account. The technology appears to work by leveraging the Tensor Cores in GeForce RTX GPUs to predict radiance in scenes, and while a 37-page PDF that NVIDIA researchers released during GTC ("Advancing Real-Time Path Tracing with Neural Radiance Cache") doesn't deliver much in terms of comparative benchmarks, it does tease at least one other benefit in the form of reduced noise. Path tracing, otherwise known as full ray tracing, is now available to test as part of Cyberpunk 2077's Ray Tracing: Overdrive Mode, although a GeForce RTX 4090 comes highly recommended for those who desire playable frame rates.
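For intuition, here's a minimal Python sketch of the radiance caching idea (my own toy construction, not NVIDIA's code; the MLP size, the 5D input encoding, and the true_radiance stand-in are all invented): a small network learns to predict outgoing radiance at hit points, so a path tracer could terminate paths early by querying the cache instead of tracing more bounces.

```python
# Toy sketch of a neural radiance cache (hypothetical, not NVIDIA's code):
# a tiny MLP learns to predict outgoing radiance from a hit point's
# position and direction, so long paths can be terminated early by
# querying the cache instead of tracing further bounces.
import numpy as np

rng = np.random.default_rng(0)

def true_radiance(x):
    # Stand-in for the radiance a full path tracer would compute at a
    # hit point encoded as a 5-vector (3D position + 2D direction).
    return np.sin(3.0 * x[:, 0:1]) * np.cos(2.0 * x[:, 1:2]) + 0.5 * x[:, 2:3]

# Tiny 2-layer MLP cache: 5 inputs -> 32 hidden -> 1 output.
W1 = rng.normal(0, 0.3, (5, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 1)); b2 = np.zeros(1)

def cache_query(x):
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

lr = 1e-2
for step in range(2000):
    # "Training paths": longer rollouts supply target radiance
    # estimates at randomly sampled hit points.
    x = rng.uniform(-1, 1, (64, 5))
    target = true_radiance(x)
    h = np.maximum(0.0, x @ W1 + b1)
    err = (h @ W2 + b2) - target       # gradient of L2 loss w.r.t. output
    # Manual backprop through the two layers.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# "Render": short paths terminate into the cache instead of bouncing on.
test = rng.uniform(-1, 1, (1000, 5))
mse = np.mean((cache_query(test) - true_radiance(test)) ** 2)
print(f"cache MSE vs. full path estimate: {mse:.4f}")
```

NVIDIA's paper describes training the cache online while rendering, from a small set of longer training paths per frame; the loop above stands in for that with a fixed toy target.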

See full article...
 
From the images, it looks like performance may not necessarily increase, but quality sure as hell does.
 
AMD announces FSR Redstone for RDNA4: Neural Radiance Caching, ML Ray Regeneration and Frame Generation

"The next step is FSR Redstone, a new version of super resolution and frame generation. This version will focus on three key features: Neural Radiance Caching, machine learning-accelerated Ray Regeneration, and machine learning accelerated Frame Generation."


"Hall began by saying something surprising. FSR Redstone is implemented so that it can run even if the GPU does not have AI acceleration capabilities at runtime. Despite being a machine learning/AI-based technology, it does not require AI acceleration on the GPU side."


"FSR Redstone was developed using AMD ML2CODE (Machine Learning to Code), a research project from ROCm. The core part of the neural rendering technology is converted into optimized Compute Shader code by utilizing ML2CODE."


ML2CODE is included in AMD's GPU computing (GPGPU) platform, ROCm 6.1 and later.

Rather than executing trained AI models at runtime, it converts them into optimized Compute Shader code that runs natively.
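To make that concrete, here's a toy illustration of the model-to-shader-code idea in Python (entirely hypothetical; AMD has not published ML2CODE's actual pipeline, and emit_hlsl and every other name here is mine): trained layer weights get baked as literal constants into generated compute-shader-style source, so no ML runtime or AI hardware is needed when the code executes.

```python
# Toy "ML to code" illustration (hypothetical; not ML2CODE's real pipeline):
# take the weights of a trained layer and emit HLSL-like source with the
# weights inlined as constants, so execution needs no ML runtime at all.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, (4, 2)).round(4)   # pretend these are trained weights
b = rng.normal(0, 0.1, 2).round(4)

def emit_hlsl(W, b):
    # Generate an HLSL-like function computing y = relu(W^T x + b)
    # with all parameters baked in as float literals.
    rows, cols = W.shape
    lines = [f"void layer(float x[{rows}], out float y[{cols}]) {{"]
    for j in range(cols):
        terms = " + ".join(f"{W[i, j]}f * x[{i}]" for i in range(rows))
        lines.append(f"    y[{j}] = max(0.0f, {terms} + {b[j]}f);")
    lines.append("}")
    return "\n".join(lines)

print(emit_hlsl(W, b))
```

In a scheme like this, the translation cost is paid once, ahead of time; at runtime only the generated shader runs, like any hand-written one.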


 
That seems very impressive. I wonder what the compute cost of translation is, if any.
 
"That seems very impressive. I wonder what the compute cost of translation is, if any."

Chris Hall should have the answers.


 