Intel Launches What It’s Calling the World’s Fastest Gaming Processor: the 10th Gen Core i9-10900K

Tsing

The FPS Review
Staff member
Image: Intel



Intel has officially launched its lineup of 10th Gen Core S-series desktop CPUs, which is headlined by “the world’s fastest gaming processor” – the Core i9-10900K. This is a 10C/20T chip that can hit clocks of up to 5.3 GHz with the company’s Thermal Velocity Boost technology. It also supports DDR4-2933 memory.



“The i9-10900K processor powers the ultimate gaming experience, allowing more tuning control, faster multitasking and smoother gameplay,” wrote Intel. “The new Intel Turbo Boost Max Technology 3.0 provides automatic performance boosts on lightly threaded applications, while per-core hyperthreading control allows experienced overclockers to decide which threads to turn on or off on a per-core basis.” Here are some of the CPU’s claimed improvements:



Up to 187...

Continue reading...


 
I mean, in situations not bound by the GPU, they are probably right in most titles, but it is by an absolutely TINY margin...

Since most gaming is GPU bound, and you probably wouldn't even notice the difference when not bound by the GPU, it's a mostly academic difference.

It stinks that they are likely going to convince so many low knowledge buyers to fall for their marketing though.
 
14 will be available at launch.

10 of those will be issued for review samples.
 
Hey be fair... Thousands will be available at launch. Overclocking won't be supported. Turning on XMP memory profiles will be considered overclocking. Vendors will flip a permanent bit on the chips if they are ever overclocked, even by accident, thereby voiding the warranty.

What? They've done everything short of a permanent bit flip before.
 
I mean, in situations not bound by the GPU, they are probably right in most titles, but it is by an absolutely TINY margin...

Since most gaming is GPU bound, and you probably wouldn't even notice the difference when not bound by the GPU, it's a mostly academic difference.

It stinks that they are likely going to convince so many low knowledge buyers to fall for their marketing though.

This isn't entirely true. I've tested it multiple times, and at higher resolutions CPU performance absolutely impacts your frame rates in games. While averages are largely unaffected, minimum frame rates and even frame times are affected to a degree. As I've stated before, my AMD Threadripper 2920X with a 4.2GHz all core overclock would drop to 26FPS in Destiny 2 at 3840x2160. In contrast, the Core i9 9900K @ 5.0GHz never dropped below 56FPS. This isn't to say that the 2920X system wasn't playable, but it routinely dropped into the 45FPS realm which was noticeable in firefights. Conversely, the 9900K @ 5.0GHz's drop to 56FPS was so rare that it only showed up in benchmarks.
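
To put numbers on why the averages hide this, here's a quick illustrative sketch (made-up frame times, not my actual capture data): a single 38.5 ms hitch in an otherwise steady 16.7 ms run barely moves the average, but it drags the minimum down to 26FPS.

```python
# Illustrative frame times in ms, NOT actual capture data: ~60FPS (16.7 ms)
# with one 38.5 ms hitch, which works out to 1000 / 38.5 ~= 26FPS.
frame_times_ms = [16.7] * 299 + [38.5]

avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = min(1000.0 / t for t in frame_times_ms)

print(f"average: {avg_fps:.1f} FPS")   # ~59.6 FPS, looks fine on paper
print(f"minimum: {min_fps:.1f} FPS")   # ~26.0 FPS, the hitch you actually feel
```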

This is in the same system with the same hardware with only the motherboard and CPU being changed. The 9900K provided a smooth and enjoyable gaming experience while the Threadripper 2920X simply didn't. With PBO on, the 2920X achieved a minimum of 36FPS which was better, but it didn't impact the overall experience as much as I'd hoped. I was perfectly happy with my Threadripper 2920X outside of gaming, but as a gaming CPU it was lacking for sure. At 1920x1080, I might not have noticed as much. At 4K and later 3440x1440, I absolutely found the CPU inadequate as it couldn't deliver the experience I desired.

Now, having said that, I'm not sure what kind of impact we would see from the 10900K over something like a 9900K.
 
This isn't entirely true. I've tested it multiple times, and at higher resolutions CPU performance absolutely impacts your frame rates in games. While averages are largely unaffected, minimum frame rates and even frame times are affected to a degree. As I've stated before, my AMD Threadripper 2920X with a 4.2GHz all core overclock would drop to 26FPS in Destiny 2 at 3840x2160. In contrast, the Core i9 9900K @ 5.0GHz never dropped below 56FPS. This isn't to say that the 2920X system wasn't playable, but it routinely dropped into the 45FPS realm which was noticeable in firefights. Conversely, the 9900K @ 5.0GHz's drop to 56FPS was so rare that it only showed up in benchmarks.

This is in the same system with the same hardware with only the motherboard and CPU being changed. The 9900K provided a smooth and enjoyable gaming experience while the Threadripper 2920X simply didn't. With PBO on, the 2920X achieved a minimum of 36FPS which was better, but it didn't impact the overall experience as much as I'd hoped. I was perfectly happy with my Threadripper 2920X outside of gaming, but as a gaming CPU it was lacking for sure. At 1920x1080, I might not have noticed as much. At 4K and later 3440x1440, I absolutely found the CPU inadequate as it couldn't deliver the experience I desired.

Now, having said that, I'm not sure what kind of impact we would see from the 10900K over something like a 9900K.

That is really strange. Almost makes me wonder if something else was going on with the test system, related to chipset drivers, or GPU driver compatibility. (Did the Threadripper 2xxx have that same Infinity Fabric issue with latency differences between different dies that the original Threadripper had? I can't remember. I know mine doesn't suffer from this issue, but I can't remember if they fixed that in the 2xxx series or in the 3xxx series.)

In all of my own testing across the countless systems I have built over the years, with different configurations for both myself and others, I have never seen a single instance where added resolution (or any GPU setting at all, for that matter) had any impact on CPU load.

The two have always been 100% independent of each other in every test I have ever run. You know, like this:

Test a GPU with the fastest CPU on the market to minimize CPU impacts. Note performance.

Test a CPU with minimal GPU settings in order to minimize GPU effects on the numbers.

Now, when you combine the two in the same system, the performance you get is the minimum of the two. On a functioning system, at a fixed frame rate, I've never seen the CPU be impacted by any graphics settings, including resolution. The only time I've seen any CPU impact from changing graphics settings is once you turn the graphics settings up and become GPU limited, which lowers the overall system frame rate and thus puts less load on the CPU.
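
If it helps, here's that mental model as a toy sketch (every number is invented, purely to illustrate the min-of-the-two point): the CPU-limited rate stays put as resolution goes up, the GPU-limited rate falls, and the system simply delivers whichever is lower.

```python
# Toy model of "system FPS is the minimum of the two"; all numbers invented.
def effective_fps(cpu_limited_fps: float, gpu_limited_fps: float) -> float:
    # Whichever side takes longer per frame sets the delivered frame rate.
    return min(cpu_limited_fps, gpu_limited_fps)

cpu_fps = 140.0  # hypothetical CPU-limited rate; resolution doesn't touch it
gpu_fps_by_res = {"1920x1080": 220.0, "2560x1440": 150.0, "3840x2160": 75.0}

for res, gpu_fps in gpu_fps_by_res.items():
    fps = effective_fps(cpu_fps, gpu_fps)
    bound = "GPU" if gpu_fps < cpu_fps else "CPU"
    print(f"{res}: ~{fps:.0f} FPS ({bound}-bound)")
```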

Now I've never played (or even heard of) the Destiny series, but since getting my Threadripper 3960X I have run several titles, all of them at 4K, including Far Cry 5, Far Cry New Dawn, and The Outer Worlds. My GPU has been brought to its knees at 4K in all of these titles, but I have never seen any indication that the CPU was ever inadequate.

I really can't help but suspect you uncovered either a driver bug of some sort or that this was caused by the Infinity Fabric issue which was well known with early Threadrippers.
 

Side note. I see Destiny 2 is free to play and on Steam, so there is nothing impeding me from testing it out and seeing what my experience is like on my current configuration, just in case this is some weird title-based problem. From your description it also sounds like it has a built-in benchmark, which should make this easier.
 
Side note. I see Destiny 2 is free to play and on Steam, so there is nothing impeding me from testing it out and seeing what my experience is like on my current configuration, just in case this is some weird title-based problem. From your description it also sounds like it has a built-in benchmark, which should make this easier.

Destiny 2 does NOT have a built-in benchmark. I've been testing it with NVIDIA FrameView.
 
Destiny 2 does NOT have a built-in benchmark. I've been testing it with NVIDIA FrameView.

I will have to take a look at that. I have never used FrameView. While playing games, I usually monitor things in real time using RivaTuner charts on a separate screen.
 
@Zarathustra I wonder if you will get different results using RivaTuner as compared to FrameView, assuming they are both free to use, mind you.

Also keep in mind the 3900X series of Ryzen CPUs is built to be better for gaming workloads, while Threadripper's intended target is workstation use, doing things like CAD and other multi-core-heavy workloads.

So you might have some basic design differences between the two that inject a perceptible latency issue in fast refresh rate scenarios.

Though with the AMD CPUs, the I/O control can be on the motherboard or the CPU, right, based on which PCIe lanes are in use. And with storage and video cards now consuming more of that PCIe bandwidth, perhaps that is something to look at as well. Curious. I wish the guys here, or I, had unlimited time and resources to prove this out or disprove it altogether.
 
I will have to take a look at that. I have never used FrameView. While playing games, I usually monitor things in real time using RivaTuner charts on a separate screen.

You can do that, or use whatever you wish. I downloaded FrameView and it's what I've been using with games that do not have a built-in benchmark, mostly because it generates a report that I can look at when I'm done. You do not monitor the game in real time with it. Destiny 2 does have a built-in FPS counter, though, if that's what you are looking for.

That said, an in-game FPS counter doesn't capture minimum frames very well. I never saw it hit 26FPS on my Threadripper 2920X; I just saw it hit the low-40FPS range all the time. The drops even lower were rare, but I didn't know they were that bad until I captured the data. Frame times were basically crap and the game only felt smooth some of the time. With the 9900K @ 5.0GHz, I never felt the drop below 60Hz at 4K even though the report says it happened. But the frame times were excellent, and that 56FPS minimum, though easily reproduced, wasn't normal. It occurred once in a blue moon.
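
For anyone who wants to pull the minimums and 1% lows out of a capture themselves, this is roughly what I mean. The file name and the MsBetweenPresents column are assumptions based on FrameView/PresentMon-style logs, so adjust them to whatever your capture tool actually writes out.

```python
# Rough sketch for summarizing a frame-time log. "frameview_log.csv" and the
# "MsBetweenPresents" column are assumed FrameView/PresentMon-style output;
# swap in whatever your capture tool actually produces.
import csv
import statistics

def summarize(path: str, column: str = "MsBetweenPresents") -> None:
    with open(path, newline="") as f:
        times_ms = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]

    fps = sorted(1000.0 / t for t in times_ms)          # slowest frames first
    one_percent_low = fps[: max(1, len(fps) // 100)]    # worst 1% of frames

    print(f"frames captured : {len(fps)}")
    print(f"average FPS     : {len(times_ms) / (sum(times_ms) / 1000.0):.1f}")
    print(f"1% low FPS      : {statistics.mean(one_percent_low):.1f}")
    print(f"minimum FPS     : {fps[0]:.1f}")

summarize("frameview_log.csv")
```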
 
@Dan_D I wonder if the Ryzen 9 line has that same problem or not. I'm betting the drops in frame times were CCX-driven.
 
I'm not sure what drops you are referring to regarding the Ryzen 9. With Destiny 2 in particular, there was an issue early on, when workarounds were employed to make those CPUs work with that game. After the AGESA code was properly updated to address it, this was resolved. It now runs as fast as or faster than comparable Intel machines. If you are referring to the fact that some early testing showed slightly worse performance vs. the 3700X and 3800X, the theory has been that crossing CCDs causes the issue. However, this seems to have been mitigated somewhat as Windows has been made aware of the topology of Ryzen CPUs and now attempts to contain processes on a single CCX, only crossing CCX and CCD boundaries when necessary.

I haven't tested Threadripper, but from what I have seen, there is no real difference between Threadripper 39xx and Ryzen 9 for gaming. Obviously, single-core frequency may have some impact, and some games may see slightly more performance due to the greater memory bandwidth. If there are any differences in performance, they would come down to crossing CCD boundaries. That isn't remotely as bad as crossing CCX boundaries on Ryzen 1000 and 2000 series CPUs.
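
If you want to experiment with that containment idea yourself rather than leaving it to the scheduler, a sketch like this pins a running game to one CCX's logical CPUs. The process name and the core list here are assumptions, so verify your own chip's CCX-to-core mapping (coreinfo, Ryzen Master, etc.) before pinning anything.

```python
# Sketch of manually containing a game on one CCX, similar in spirit to what
# the scheduler now tries to do. The process name and core list are assumed;
# verify which logical CPUs map to which CCX on your own CPU first.
import psutil

GAME_EXE = "destiny2.exe"               # hypothetical process name
CCX0_LOGICAL_CPUS = [0, 1, 2, 3, 4, 5]  # assumed: one CCX's cores with SMT

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == GAME_EXE:
        proc.cpu_affinity(CCX0_LOGICAL_CPUS)   # restrict scheduling to that CCX
        print(f"Pinned PID {proc.pid} to logical CPUs {CCX0_LOGICAL_CPUS}")
```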
 