Intel Refutes AMD’s “2X” EPYC Claim, Brings Its Own Benchmarks

Tsing
The FPS Review Staff member · Joined May 6, 2019
One of the things Dr. Lisa Su introduced during her Computex keynote was AMD’s new 7nm EPYC processors. She mentioned that their performance is double that of the competition’s Xeon Scalable 8280 chips. Intel didn’t like that and has released benchmarks in an attempt to prove AMD’s claim wrong.

AMD's dual-socket server has 128 cores and 256 threads, so it isn't entirely surprising to see that much firepower beat Intel's competing 56-core, 112-thread dual-socket server. But core counts, while important, aren't the be-all, end-all. All cores are not created equal, and Intel says that after it applied the correct optimizations to its system, it scored 30% higher than the results AMD presented. That doesn't give Intel's Xeon 8280 the lead, but it shows the chips are more competitive with the correct optimizations.

The benchmarks also flaunt the purported muscle of Intel’s Xeon Platinum 9200-series chips, with the 48-core 9242 just barely edging out AMD’s 64-core Rome, while the 56-core 9282 performs substantially better. It’s probably worth pointing out that Intel’s parts have a much higher TDP, however.

TDP is a decent litmus test of system pricing, as more heat generation and power consumption require more expensive components and equate to higher operating costs. That means Intel's behemoths require exotic cooling and, because they aren't socketed processors like those found in most servers, unique system designs that heavily impact price. Meanwhile, AMD's chips are designed for the general-purpose market, whereas Intel's -AP models are pricey pieces of silicon that are only available in OEM systems and command premiums so high that pricing isn't public.
 
So who is buying me a server like that to replace my ancient one?
 
Curious if Intel's "correct optimizations" were removing the security fixes that reduce their performance by about 30%.
 
It's hard to fathom how you could pick up an additional 30% performance through any optimizations other than removing the security-mitigation patches. That said, there are applications servers run where specific BIOS or OS settings have to be used for best results. Using the right block size on drive volumes, or the correct NUMA mode for Exchange, is a good example of this. There are similar tricks for things like MSSQL.
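For anyone wanting to check which of those mitigations a box is actually running with: on Linux (kernel 4.15 and later) the status of each known flaw is exposed under sysfs. A minimal sketch, assuming a reasonably recent kernel:

```shell
#!/bin/sh
# Print the active mitigation status for each CPU vulnerability the
# kernel knows about (Spectre, Meltdown, MDS, etc.). The directory is
# standard on Linux >= 4.15; entries vary by CPU and kernel version.
dir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$dir" ]; then
    grep -H . "$dir"/* 2>/dev/null
else
    echo "no sysfs vulnerability reporting on this kernel"
fi
# Booting with the kernel parameter "mitigations=off" disables these
# fixes globally, which is fast but obviously not something to do in
# production.
```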
 
Depending on the workload (and I have no idea what benchmarks were used), some of the Intel security flaws were noted to reduce performance by up to 50%. If I remember correctly, this was mainly in SQL servers; even Apple stated they saw as much as a 40% drop in performance and have only implemented a partial fix, leaving the rest (disabling HT) up to the user.
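On the HT point: on Linux (kernel 4.19 and later) SMT can be inspected, and with root disabled at runtime, through sysfs. A minimal sketch, assuming that interface is present (the macOS mechanism Apple documented is different):

```shell
#!/bin/sh
# Report whether SMT (Intel hyper-threading) is currently active.
# /sys/devices/system/cpu/smt/ is available on Linux kernels >= 4.19.
smt=/sys/devices/system/cpu/smt
if [ -r "$smt/active" ]; then
    echo "SMT active: $(cat "$smt/active")"   # 1 = on, 0 = off
else
    echo "SMT control interface not available on this kernel"
fi
# Disabling SMT at runtime requires root and lasts until reboot:
#   echo off > /sys/devices/system/cpu/smt/control
```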
 