Todd Howard Responds to Starfield’s Alleged Lack of PC Optimization: “Upgrade Your PC”

Microsoft does matter when we are talking about Starfield. :LOL: But seriously, why so cryptic? Care to reveal what programs these are, where lives depend on milliseconds lost in code execution?
I work for a big alarm company. We process millions of signals a day, so one piece of that process taking 20 ms to complete is an issue. We try to keep the time from a signal hitting the edge to it showing up on an agent's screen as minimal as possible. Currently, for an active alarm not awaiting some sort of automated confirmation, we are looking at sub-300 ms (easily half that, depending), and if we could make that better we would. Some of that time is the database triggers that have to run to move the alarm to the right place for the application to pick it up.

We also have crazy high levels of redundancy and fault tolerance built into the environment that we build and maintain.

So the time it takes for a holdup alarm to come into the company, make it into our servers, and lead to a police dispatch has to be as minimal as possible; there are literally lives on the line.
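
To make that concrete, here's a minimal sketch of the kind of per-stage latency accounting a pipeline like that might do. Everything here is hypothetical for illustration (the stage names, the simulated delays, the 300 ms budget constant, and the process_signal helper are mine, not the poster's actual system):

```python
import time

# Hypothetical end-to-end budget for an active alarm (see the post above).
BUDGET_MS = 300.0

def process_signal(signal: dict) -> dict:
    """Toy stand-in for the real pipeline: stamp each stage so you can
    see where the milliseconds go."""
    timings = {}
    start = time.monotonic()

    # Stage 1: edge receiver hands the raw signal to the ingest service.
    time.sleep(0.005)  # simulated 5 ms of network + parse time
    timings["ingest_ms"] = (time.monotonic() - start) * 1000

    # Stage 2: database trigger routes the alarm to the right queue
    # (the post's pain point -- a 20 ms trigger is already an issue).
    time.sleep(0.020)
    timings["db_route_ms"] = (time.monotonic() - start) * 1000

    # Stage 3: agent application picks the alarm up for display.
    time.sleep(0.010)  # simulated poll + render time
    timings["on_screen_ms"] = (time.monotonic() - start) * 1000

    if timings["on_screen_ms"] > BUDGET_MS:
        print(f"ALERT: {signal['id']} blew the {BUDGET_MS:.0f} ms budget")
    return timings

print(process_signal({"id": "holdup-0001"}))
```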

Not to mention other monitoring like carbon monoxide and fire. (Carbon monoxide detection is our top life saver every year. If you have a garage you actually park a car in, get a detector. It WILL save your life, especially with all the remote start on gas cars and burning batteries on electric ones.)
 
Chips and Cheese on RDNA 3's relative "overperformance" in Starfield

there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture. Nvidia SMs have smaller register files and can keep less work in flight. They’re naturally going to have a more difficult time keeping their execution units fed. Cutting register file capacity and scheduler sizes helps Nvidia reduce SM size and implement more of them. Nvidia’s design comes out top with kernels that don’t need a lot of vector registers and enjoy high L1 cache hitrates.

If we look at the frame as a whole, the RX 7900 XTX rendered the frame in 20.2 ms, for just under 50 FPS. Nvidia’s RTX 4090 took 18.1 ms, for 55.2 FPS. A win is a win, and validates Nvidia’s strategy of using a massive shader array even if it’s hard to feed. Going forward, AMD will need more compute throughput if they want to contend for the top spot.

https://chipsandcheese.com/2023/09/...erformance-on-nvidias-4090-and-amds-7900-xtx/

Nvidia’s design comes out top with kernels
  1. that don’t need a lot of vector registers and
  2. enjoy high L1 cache hitrates.
Where the above conditions are not met (as in Starfield), AMD is able to catch up.
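
A rough back-of-the-envelope occupancy calculation shows why register file size matters here. The figures below are approximate public specs (about 64K 32-bit registers and a 48-warp cap per Ada SM; about 1,536 VGPRs per lane and a 16-wave cap per RDNA 3 SIMD), and the 96-registers-per-thread kernel is purely an illustrative assumption:

```python
def max_resident(regfile_regs: int, regs_per_thread: int,
                 threads_per_wave: int, hw_cap: int) -> int:
    """Warps/waves the register file can keep in flight, capped by hardware."""
    regs_per_wave = regs_per_thread * threads_per_wave
    return min(regfile_regs // regs_per_wave, hw_cap)

REGS_PER_THREAD = 96  # assumed register-hungry kernel

# Ada SM: ~65,536 regs total, 32-thread warps, ~48 warps max resident.
ada = max_resident(65_536, REGS_PER_THREAD, 32, 48)

# RDNA 3 SIMD: ~1,536 VGPRs per lane * 32 lanes = 49,152 regs for wave32,
# assumed ~16 waves max resident.
rdna3 = max_resident(49_152, REGS_PER_THREAD, 32, 16)

print(f"Ada SM:      {ada}/48 warps resident (register-limited)")   # 21/48
print(f"RDNA 3 SIMD: {rdna3}/16 waves resident (full occupancy)")   # 16/16
```

With a register-hungry kernel, the Ada SM can only keep about 21 of its 48 warp slots occupied while the RDNA 3 SIMD still fills every wave slot, which is exactly the "less work in flight" effect the article describes.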
 
Interesting insight, thanks for sharing that.
 
Yeah, my 4090 has done fairly well with it, even more so when I got DLSS 3 and framegen running on it.

However, my 3090 Tis take every trick in the book. Granted, the 4090 is something like 30% faster than they are, but ouch, it's a shock to see them hit so hard by this game when they do so well in many other games without needing as much tweaking from mods and config files. I can't wait for the updates to come out and help them improve a bit.
 
Apparently, per MLID, Bethesda took AMD's help to port the game from PS5 (Vulkan) to Xbox (DirectX 12).

IMO, we must give Nvidia credit for making Ada Lovelace so good that it runs much better than Ampere even though the game wasn't optimized for it at launch.
 
Yeah, Ada is probably the biggest generational leap we've seen from NV in a long time, at least at the top end. There are always claims of improvements, but they rarely pan out to be completely true. I love my 3090 Tis, especially the hybrid EVGA, but I'm still surprised by how much more powerful, and efficient, the MSI 4090 Suprim Liquid is. If I weren't paying off a TV right now I'd be tempted to get another, but I keep telling myself to just wait it out until 2025 when the next gen rolls out.
 
My grandparents' house is worth millions when adjusted for inflation. My car tires are worth thousands! "Adjusted for inflation" is a BS argument. It's easier to compare actual costs then and now.
Yeah, that's what inflation means: the same things get more expensive over time. It doesn't mean your old bias-ply tires from 1953 are worth thousands now :p

But if I wanted to buy the same category of car tires today that I bought in 2013, I'd have to pay almost twice the money. That doesn't mean the tires will be certified for twice the speed and load :D

Now apply the same logic to GPUs, voila!
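
As a quick worked example of that logic (the CPI figures are approximate annual averages, and the $699 card is an illustrative 2013 flagship price, not something quoted in this thread):

```python
# Approximate US CPI-U annual averages: 2013 ~= 233, 2023 ~= 305.
CPI_2013 = 233.0
CPI_2023 = 305.0

def to_2023_dollars(price_2013: float) -> float:
    """Convert a 2013 USD price into approximate 2023 dollars."""
    return price_2013 * (CPI_2023 / CPI_2013)

# An illustrative $699 flagship GPU from 2013 is roughly $915 in 2023
# money -- well short of what current flagships actually list for.
print(f"${to_2023_dollars(699):.0f}")  # -> $915
```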
 
Oh yeah, generational improvement isn't the only thing pushing costs as high as they are. Not by a long shot. A large part of it is a publicly owned company demanding as much profit as it can squeeze from the market.
 
Titan was more of a "prosumer" card, but those who bought Titans for gaming expected bleeding-edge performance.
The 4090 / 3090 series are basically the Titan's replacement; there was no 2090 or 1090.
 