The Starfield discussion thread.

As far as CPU performance goes, here is some comprehensive testing on a German site:


This video tries to explain the rather odd CPU scaling with different CPUs:


Conclusion? It appears to scale very, very similarly to AIDA64 synthetic memory bandwidth figures, which is kind of hilarious.

This also might just mean that the combination of quad channel RAM and a very large (for the time) L3 cache may actually save things for my Threadripper 3960x in this title :p
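(For anyone who wants a rough bandwidth number of their own to compare against without installing AIDA64, here's a quick-and-dirty sketch. It's purely illustrative; it just streams a big NumPy array through memory on one thread, so expect figures well below AIDA64's hand-tuned read test.)

# Crude single-threaded memory read-bandwidth estimate (needs numpy installed).
import time
import numpy as np

buf = np.ones(512 * 1024 * 1024 // 8)   # ~512 MB of float64, far bigger than any L3

best_gbps = 0.0
for _ in range(5):
    t0 = time.perf_counter()
    total = buf.sum()                    # streams the whole buffer from RAM
    dt = time.perf_counter() - t0
    best_gbps = max(best_gbps, buf.nbytes / dt / 1e9)

print(f"~{best_gbps:.1f} GB/s read (checksum {total:.0f})")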

I'm tempted to buy the game just to test it now.

If I do, it will have to wait. I'm not buying the $100 Premium version just to get early access. Totally not worth it to me.
 
There are community demos already in circulation...
Personally I wouldn't want to do that. I'll give it a few weeks to stabilize before I think about sinking the money into buying it... or a 7900 XTX video card. ;)
 
So,

I went ahead and did it. I can wait to play a new hyped game release, but start asking interesting questions about hardware performance and I find it nearly impossible to resist researching it.

So, I went ahead and spent the ridiculous $100 on the premium version that gave me instant access.

The theory of RAM bandwidth being important seems to be true, at least up to a point.

So I maxed out the graphics settings (just in case the higher settings add more AI models or something else that hits the CPU), but dropped the resolution and render scaling to their lowest settings, to the point where I was probably rendering at something akin to the good old 320x200 DOS resolution, to make sure I was CPU limited, not GPU limited.

The Threadripper 3960X with quad-channel DDR4-3600 (CL16), in the toughest scenes walking around New Atlantis, gives me about a 62-65 fps average, much more than its Ryzen 3xxx contemporaries and similar to a generation-newer Ryzen 9 5900X. Then I turned off SMT and tried again. That boosted the average framerate to between 73 and 75 fps, above what one would expect from a 5800X3D and at about Ryzen 5 7600 levels (according to that German site's CPU testing). We are talking almost two generations faster than a Zen 2 CPU has any right to be, all likely due to a combination of the Threadripper's larger L3 cache and faster quad-channel RAM.
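(Side note for anyone repeating this: rather than trusting the BIOS menu, a quick way to confirm SMT really is off before a benchmark run is to compare physical vs. logical core counts, e.g. with a couple of lines of Python and psutil.)

# Quick SMT sanity check before benchmarking (needs the psutil package).
import psutil

physical = psutil.cpu_count(logical=False)   # real cores (24 on a 3960X)
logical = psutil.cpu_count(logical=True)     # hardware threads (48 with SMT on)
print(f"{physical} cores / {logical} threads -> SMT {'ON' if logical > physical else 'OFF'}")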

And New Atlantis (at least thus far, early in the game) is the most intense place for the CPU (and GPU). Out where you are more likely to have combat, fighting pirates on planets and inside buildings, we are talking 95 to 110 fps before the CPU limit sets in.

So, the Threadripper is definitely an exception when it comes to getting playable 60+ minimum frames out of Zen 2 / Ryzen 3xxx cores. And while that video suggesting performance scales with AIDA64 memory bandwidth benches isn't quite right, as there is definitely a point where the bottleneck shifts from RAM bandwidth to the CPU cores themselves, having faster RAM definitely helps in this game.

Which brings me to the CPU load in this game.

It is insane.

I first came across this while watching Daniel Owen's CPU benchmark video below, which shows hexacores and octacores completely pinned at 100% on all cores, which is really out of the ordinary.

So, I ran Task Manager on a second screen and did a run through the city, taking a screenshot of Task Manager (after a split second of alt-tabbing away to take the screenshot):

First with SMT on:
Starfield_Load.png

Now with SMT off:

Starfield_Load_SMT_OFF.png

It's absolutely insane. The game is loading up 24 cores. I have never seen anything like it before.

I have to admit, this makes me suspicious. Having played around a little bit, the graphics in Starfield are OK. They are not amazing like Cyberpunk 2077's. Heck, even some decade-old games might look a little better.

So why the extreme system load? Where is it coming from? It's not the distant planets. They add system load when they're being rendered on screen, but when they are far away, the only cost they add is the storage space needed, which gets loaded when you get there.

Also, if the game loads up 24 cores on my CPU, or fully loads all 6 cores to 100% just to get by on an i5-9600K, how can it run at all on a 4C/8T Ryzen 3300X? If it needs this level of processing power, an older 4C/8T CPU ought to absolutely choke on it and not run at all.

So I guess my question is, if the game runs OK on older CPUs with fewer cores, how come the game needs so much more processing power on my machine with a Threadripper? It's almost as if these extra threads that are running aren't actually needed for the game to run.
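(One way I could actually test that, assuming the process is called Starfield.exe, would be to pin the game to a subset of cores with something like psutil and see whether the framerate even notices. Just a sketch, may need an elevated prompt; reverting is just setting the full core list again.)

# Hypothetical affinity experiment: restrict the game to 8 logical cores.
# "Starfield.exe" is an assumption about the process name; adjust as needed.
import psutil

GAME_EXE = "Starfield.exe"
KEEP_CORES = list(range(8))                  # e.g. the first 8 logical cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        print(f"pid {proc.pid}: {proc.cpu_affinity()} -> {KEEP_CORES}")
        proc.cpu_affinity(KEEP_CORES)        # scheduler will now only use these cores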

Which begs the question, what the hell are they doing?

Is Bethesda stealing my CPU cycles and thus my electric bill to do some sort of mining or distributed computing scheme to add an extra revenue stream?

I don't mean to get all conspiracy minded, but this is very suspicious. If they are stealing our CPU cycles (and thus our electric bill) for something that is not directly game related, then Bethesda needs to burn.

I have reached the end of my ability to look into/postulate what is going on, but there is something very weird here, and I hope someone with more abilities picks it up and looks into it.
 

Related article: https://www.eurogamer.net/digitalfoundry-2023-starfield-the-digital-foundry-tech-review

I went ahead and did it. I can wait to play a new hyped game release, but start asking interesting questions about hardware performance and I find it nearly impossible to resist researching it.

So, I went ahead and spent the ridiculous $100 on the premium version that gave me instant access.
You could have used a "community demo" to check out the game's performance (as well as the gameplay) before making a purchasing decision.
 
CPU benchmarks as well, as this title is reportedly much more CPU dependent than recent titles we have seen.
It has its moments. I'm about 3-4 hours in now and my 5800X3D mostly hangs around 20-40%. I've seen a few spikes here and there, but nothing really sustained during gameplay. Tonight, for s&giggles, I'm going to try that new DLSS 3 frame-gen mod that came out yesterday and see how the 4090 does with it. My guess is that if it can hold a near-constant 120ish FPS, I might see some more effort on the CPU's part.
 
That boosted the average framerate to between 73 and 75 fps, above what one would expect from a 5800X3D
This is actually what I was getting on my 5800X3D in New Atlantis using the DLSS mod (no frame-gen, but using the 3.5 DLL). It varied from 62-83 depending on what I was doing, so the 70s are the median for me. This was on the CRG9 rig with the OC'd 3090 Ti (avg 2145-2160 MHz), stock 10,500 MHz mem clocks. On my 4090 rig, using the same DLSS mod, it was averaging ~80-100 in the same area.
 
It's absolutely insane. The game is loading up 24 cores. I have never seen anything like it before.
Now you've got me curious. I haven't been checking TM to see the core loads, and even though AB can do that as well, I usually like to keep it simple since I don't normally like having the whole side of the screen filled with metrics while I'm playing. I'll take a look later and let you know what I see on my rig(s), since all three have a 5800X3D. I still need to install it on my laptop, which has an 11400(?) 8c/16t in it.
 
I don't think Task Manager is reliable; I'd check core load with a different tool to see if it's not acting weird.
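(A minimal sketch of such a tool, if anyone wants it: log per-core load to a CSV in the background with psutil, then eyeball it afterwards instead of alt-tabbing to Task Manager. Assumes psutil is installed; start it before launching the game and Ctrl+C when done.)

# Background per-core load logger; writes one row per second to core_load.csv.
import csv
import time
import psutil

with open("core_load.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s"] + [f"core{i}" for i in range(psutil.cpu_count())])
    start = time.time()
    try:
        while True:
            loads = psutil.cpu_percent(interval=1.0, percpu=True)   # % per logical core
            writer.writerow([round(time.time() - start, 1)] + loads)
            f.flush()
    except KeyboardInterrupt:
        pass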
 
You might be seeing some additional load just from managing threads across 24 cores or 48 processing threads. The thread management and reassembly at the code level does take some CPU cycles to run, not to mention managing distributed work across multiple CCX chiplets.

Though if I were to guess, it's back-end AI running the NPCs, the various markets, and other back-end systems, things that can be enhanced with more compute overhead without directly impacting the active game.

Pull down something like TCPView and look for connections and traffic. If you can narrow down the ports, you can use Ethereal (now Wireshark) to capture traffic for a few minutes, then use a geolocator to see where the traffic is actually going.
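(Before wading through a full packet capture, a quick first pass, assuming psutil and assuming the executable is called Starfield.exe, is to just dump the game process's open sockets and see whether it's talking to anything outside at all.)

# First-pass check before TCPView/Wireshark: list the game's open network sockets.
# "Starfield.exe" is an assumption about the process name.
import psutil

GAME_EXE = "Starfield.exe"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        for conn in proc.connections(kind="inet"):
            remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "-"
            print(f"{conn.status:<12} local {conn.laddr.ip}:{conn.laddr.port}  remote {remote}")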
 
Well, not really sure what to say, and honestly I could've spun myself around with all the testing I've done between the 3 rigs plus all the different mods I've been using, but here's what I just saw. I'm at the point after leaving the first meeting with Constellation, and when you leave them it's nightfall.

I'm playing in 4K, using the DLSS 3 + frame-gen mod and NVIDIA Freestyle, on the 4090 rig. Ultra settings, with motion blur, sharpening, DOF, and dynamic res all off. Vsync off.

Walking around, FPS were all over the place: from 59 to 90, but mostly held in the 65-75 range, and in some places it hung in the low 80s. CPU usage was up, though, at around 62% most of the time with some spikes toward 67%. I checked TM, and I agree it can be unreliable, but it's still okay for a basic reference, and yep, all cores/threads were loaded up. Not necessarily 100%, but still busier than with most games. I also agree that it's probably due to all the NPCs and other things in the area.

I forgot to mention in my previous reply that the 3090 Ti was doing its thing at 5120x1440, using the high/medium settings recommended in the Hardware Unboxed video. I do recommend what they suggest for high/medium, as a lot of those settings seem more optimized than the ultra/high defaults.
 
Alrighty, since things were getting kind of weird with the DLSS mods, I went ahead and deleted the files in the root game directory and had Steam reacquire them. Rebooted and started it again. Same game settings as I mentioned above, but using FSR2 @ 100% render scale. Basically the same FPS as before (60s-80s), but CPU usage dropped a bit. Here's a 4K snapshot.
 

Attachments

  • Starfield_2023_09_07_19_40_11_817-4k.jpg (1.4 MB)