One Threadripper to Rule Them All as Threadripper Pro 7995WX 96-Core Hits over 100,000 in Cinebench R23 Benchmark

HEDT used to have really long legs. I had friends that built i7-920s and Sandy Bridge Es that stretched them for 10 years (or more!) and they could get away with that through strategic upgrades and just the benefits from the platform itself. AM4 had a pretty good lifespan for the friends that built with it, but it didn't compare to those older HEDT platforms. Honestly, I'd love to be able to recommend a low-core-count Threadripper platform if it could get a few generations of CPU upgrades and then have all that extra memory bandwidth and PCIe lanes so it could be strategically upgraded for 10-12+ years.

I used my X79 Sandy Bridge-E 3930K for 8 years. It overclocked to a then-ungodly 4.8GHz, which made it untouchable at the time.

I still use it as a test bench machine.

As far as Threadripper goes, I am still undecided. I still like the many PCIe lanes and workstation-class motherboards without racing heatsinks, fancy paint jobs, or RGB LEDs, but the cost is difficult to justify....

Especially when you can upgrade to the next-generation drop-in Zen chip every couple of years for only $350.

The value proposition just isn't there.
 
My 4930K almost made it to 10, but I moved on because of all the GPU features it was holding back on my NVIDIA RTX cards.
 
I could only see the value making sense for folks that are already looking at a full top-spec platform upgrade / new build.

The board may be a few hundred more, and CPUs are going to be around twice as much or a little more, but the base connectivity is pretty appealing. Especially the option, in some cases, to have remote management. I wish that option were available on more boards that touch the prosumer space.
 
I think it was Wendell at L1T that showed something like 500W from a Xeon being cooled by a 92mm Noctua tower that is designed to fit in a 4U chassis. For those accustomed to the traditional 100W / 120mm philosophy when it comes to water cooling, it may come as a surprise that a 360mm AIO could probably handle 600W or 700W from one of these monsters. The biggest bottleneck in CPU cooling is the IHS and the smaller dies, relative to, say, GPUs that'll gladly get by on a 240mm AIO while dumping 450W. The larger overall package sizes on HEDT seem to provide some interesting computing options for enthusiasts.
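If anyone wants to play with that idea, here's a very rough back-of-the-envelope sketch in Python. The thermal resistance figures are invented for illustration (they aren't measurements of any real CPU or cooler); the point is just that with the same radiator at the end of the chain, a big package with lower die-to-IHS and IHS-to-coldplate resistance can shed roughly twice the watts at about the same temperature.

```python
# Rough steady-state estimate of CPU temperature under sustained load.
# All thermal resistance values below are illustrative assumptions, not
# measured figures for any particular CPU or cooler.

def cpu_temp(power_w, r_die_to_ihs, r_ihs_to_coldplate, r_coolant_to_air,
             ambient_c=25.0):
    """Estimate hotspot temperature (C) by treating the cooling path as
    three thermal resistances (C/W) in series:
    die -> IHS, IHS -> coldplate (paste + mount), coolant/radiator -> air."""
    total_r = r_die_to_ihs + r_ihs_to_coldplate + r_coolant_to_air
    return ambient_c + power_w * total_r

# Same hypothetical 360mm radiator (0.05 C/W) in both cases; only the
# package-side resistances change with the bigger die area and IHS.
desktop = cpu_temp(300, r_die_to_ihs=0.12, r_ihs_to_coldplate=0.08,
                   r_coolant_to_air=0.05)
hedt = cpu_temp(600, r_die_to_ihs=0.04, r_ihs_to_coldplate=0.03,
                r_coolant_to_air=0.05)

print(f"small desktop package at 300 W: ~{desktop:.0f} C")  # ~100 C
print(f"large HEDT package at 600 W:    ~{hedt:.0f} C")     # ~97 C
```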

Well, there's a few moving parts here -

1. Per conversation with SilverStone at Computex, I was told that the mount pressure for the new TR socket is higher than the desktop side of things. This should solve some of the coldplate coverage issues.
2. I ran the AIO test bench on a 10980XE for... three years? I'd say less than half of the 360mm AIOs that I tested could keep it from throttling at the 500W mark. You'd need pretty stout airflow on the radiator to keep it cool enough....
 
1. Per conversation with SilverStone at Computex, I was told that the mount pressure for the new TR socket is higher than the desktop side of things. This should solve some of the coldplate coverage issues.
This is a good thing, though for HEDT they do need to ensure proper coverage, which isn't a simple thing to do for the enthusiast builder space I'd think.

2. I ran the AIO test bench on a 10980XE for... three years? I'd say less than half of the 360mm AIOs that I tested could keep it from throttling at the 500W mark. You'd need pretty stout airflow on the radiator to keep it cool enough....
I looked up the size of the IHS for the 10980XE, as I'd remembered it being much closer to the consumer SKUs than to TR / newer Xeons in size - I'm betting that was a contributor, on top of that CPU being so dense. I'd also assume that the IHS wasn't doing it any favors, yet it was still pulling 500W - we'd be lucky to cool 300W with the stock IHS on a desktop platform, not that there's any real reason to be drawing that amount of power IMO.

I know that the newer Xeons have larger packaging, but really the Threadrippers with their dispersed CCDs should be very amenable to improved cooling. I doubt anyone would want to go as far as delidding, but with those massive IHSs just making sure that the whole IHS surface is in contact with the cooling block would probably take it pretty far I think.
 
I looked up the size of the IHS for the 10980XE, as I'd remembered it being much closer to the consumer SKUs than to TR / newer Xeons in size - I'm betting that was a contributor, on top of that CPU being so dense. I'd also assume that the IHS wasn't doing it any favors, yet it was still pulling 500W - we'd be lucky to cool 300W with the stock IHS on a desktop platform, not that there's any real reason to be drawing that amount of power IMO.

The Core i9 10980XE only pulled that much power when overclocked. It wasn't that bad in stock form, but didn't really offer competitive performance with the Threadrippers of the era. However, overclocked it was quite competitive with them. Unfortunately, that architecture's efficiency went out the window the more you ramped up clocks. At 4.9GHz it took serious radiator space and a custom loop to handle it.
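To put some toy numbers on that: dynamic power scales roughly with V² × f, and the voltage needed to hold a given clock climbs as frequency goes up, so the last few hundred MHz cost disproportionately. A quick Python sketch with invented voltage/frequency points (not measured 10980XE data):

```python
# Toy dynamic-power model: P ~ k * V^2 * f. The constant k and the
# voltage/frequency pairs are invented to illustrate the scaling, not
# taken from any real 10980XE sample.

def dynamic_power(freq_ghz, volts, k=66.0):
    """Very rough dynamic power estimate in watts."""
    return k * volts**2 * freq_ghz

points = [
    (3.8, 1.00, "near stock"),
    (4.5, 1.15, "moderate OC"),
    (4.9, 1.30, "heavy OC"),
]

for freq, volts, label in points:
    power = dynamic_power(freq, volts)
    print(f"{label:>12}: {freq:.1f} GHz @ {volts:.2f} V -> ~{power:.0f} W "
          f"({power / freq:.0f} W per GHz)")
```

The per-GHz cost nearly doubles between the first and last rows, which is the "efficiency out the window" effect in a nutshell.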
 
Unfortunately, that architecture's efficiency went out the window the more you ramped up clocks. At 4.9GHz it took serious radiator space and a custom loop to handle it.
I've seen that with pretty much any Intel platform in the last... near decade?

They can be pushed, but only if your cooling can handle the heat from the resulting power draw...
 
I had friends that built i7-920s and Sandy Bridge Es that stretched them for 10 years (or more!)
I used my X79 Sandy Bridge-E 3930K for 8 years. It overclocked to a then-ungodly 4.8GHz, which made it untouchable at the time.
I've got a friend who is currently on an X58 machine that I built for him in 2009. We had plans for a new system in 2020, but then 2020 happened. I've been asking him how he feels about a new PC now, but so far I haven't gotten much of a response from him.

I'm at 9+ years with my X99 machine (and that PSU has done 13 years so far).
 
HEDT used to have really long legs. I had friends that built i7-920s and Sandy Bridge Es that stretched them for 10 years (or more!) and they could get away with that through strategic upgrades and just the benefits from the platform itself. AM4 had a pretty good lifespan for the friends that built with it, but it didn't compare to those older HEDT platforms. Honestly, I'd love to be able to recommend a low-core-count Threadripper platform if it could get a few generations of CPU upgrades and then have all that extra memory bandwidth and PCIe lanes so it could be strategically upgraded for 10-12+ years.
I'm not one to run hardware for all that long, but back in the day I ran X99 and a Core i7 5960X for just over 5 years. There just wasn't anything all that compelling to upgrade to for so long. X299 came around, Intel's HEDT CPUs hit $2,000, and I wasn't really interested since Threadripper was far more affordable and generally more capable. I went that route and it was **** for gaming. I wish I had stayed on X99 longer than I did, honestly. I went to a 9900K as my next gaming upgrade, and though it was fine, I wasn't on it very long and switched to a 10900K.
 
I've got a friend who is currently on an X58 machine that I built for him in 2009. We had plans for a new system in 2020, but then 2020 happened. I've been asking him how he feels about a new PC now, but so far I haven't gotten much of a response from him.

I'm at 9+ years with my X99 machine (and that PSU has done 13 years so far).

My thought is that for a desktop that is not used for games (which really has to be a shrinking market these days, when everyone wants mobile everything), there really hasn't been any reason to upgrade for a long, long time.

If not for the fact that it is limited to less RAM due to being a 32-bit chip, a Core 2 Duo from 2006 can still handle most things on the desktop. Pretty much anything 64-bit (except maybe some of those awful low-performance, energy-saving ultrabook CPUs from a decade ago) is really adequate for the Office/Email/Web experience.

The funny part is that Windows 11 with its oddball TPM requirements is the biggest reason these folks may have had to want to upgrade in over a decade.
 
My thought is that for a desktop that is not used for games (which really has to be a shrinking market these days, when everyone wants mobile everything), there really hasn't been any reason to upgrade for a long, long time.

If not for the fact that it is limited to less RAM due to being a 32-bit chip, a Core 2 Duo from 2006 can still handle most things on the desktop. Pretty much anything 64-bit (except maybe some of those awful low-performance, energy-saving ultrabook CPUs from a decade ago) is really adequate for the Office/Email/Web experience.

The funny part is that Windows 11 with its oddball TPM requirements is the biggest reason these folks may have had to want to upgrade in over a decade.
It's not just office tasks either. The X58 system my buddy's kid was using was apparently fine for Rocket League, Counter-Strike, WoW, League of Legends, and other things. Of course, that system started with 3GB of RAM and an i7-920 and ended up with both RAM and CPU upgrades.
 
I have an i7-920 system that is still running today. I have no idea how, but it still chugs away as a database server in my office. It's cycled through dozens of hard drives now (I usually replace every 3 years, and it has a 4-drive array + boot volume).
 
If not for the fact that it is limited to less RAM due to being a 32-bit chip, a Core 2 Duo from 2006...
Uuuhhh, are you sure that's a family of 32-bit CPUs? I ran an E6750 on an X38 board for a while; that was a 64-bit CPU, and I only ran 64-bit OSes on it.
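If anyone wants to double-check an old box, the "lm" (long mode) flag in /proc/cpuinfo is the giveaway for 64-bit capability. A quick sketch, assuming Linux (the path doesn't exist elsewhere):

```python
# Quick Linux-only check for whether the CPU supports 64-bit (x86-64).
# The "lm" (long mode) flag in /proc/cpuinfo indicates 64-bit capability;
# Core 2 parts like the E6750 report it, while genuinely 32-bit-only
# chips do not.

def is_64bit_capable(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "lm" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("64-bit capable:", is_64bit_capable())
```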

My thought is that for a desktop that is not used for games (which really has to be a shrinking market these days, when everyone wants mobile everything), there really hasn't been any reason to upgrade for a long, long time.... Pretty much anything 64-bit (except maybe some of those awful low-performance, energy-saving ultrabook CPUs from a decade ago) is really adequate for the Office/Email/Web experience.
I completely agree. The only reason I'm still stuck on X99 is cuz of my current financial situation. I've never used a desktop PC for more than 4-5 years until now. I'm long overdue for a new system. My 5930K has become a problem in a lot of games. I never meant to get another GPU for this system, but earlier this year an RTX 3090 fell in my lap. Now my CPU is holding back this GPU a good amount.

My buddy's parents were asking for new systems even though all they do is browse the web and read email and such. They were on Sandy Bridge or X58 systems, I believe. He recently built them B650 systems with Ryzen 7 7700Xs. The only reason was the good Micro Center bundle deals. These systems are waaaaaaaaaay overkill for what his parents need. They probably won't need another PC again for the rest of their lives. I mean, technically they are better than his own X570 system with a Ryzen 5600X.

The X58 system my buddy's kid was using was apparently fine for Rocket League, Counter-Strike, WoW, League of Legends, and other things.
Yeah, well, those are old games, and games that are light on the hardware. If you don't need to play modern AAA games, then yeah, older systems work out just fine for gaming too.

I have an i7-920 system that is still running today. I have no idea how, but it still chugs away as a database server in my office.
I'm sure I'd still be using my X58 system for something if I hadn't sold it a few years ago.

My last 32-bit/single-core PC, an Athlon XP "Barton" 3200+ on an nForce 2 board with a Radeon 9800 Pro, still runs. I haven't turned it on since around 2007 or 2008, but last I checked it was still running fine. My dad has a Pentium III 533MHz system and an AMD K6 350MHz system still in operation! I think he even still uses the K6 system cuz his laser printer is attached to it.
 
I've still got an X99 setup I'm using as a NAS with a 12-core Xeon. I'll most likely replace it with a Rome EPYC build.

But I am interested to see the prices and performance comparison of these new TRs compared to the Genoa EPYCs and the 7950X.

Might be worth it for the density for me to get one.
 
I've still got an X99 setup I'm using as a NAS with a 12-core Xeon. I'll most likely replace it with a Rome EPYC build.

But I am interested to see the prices and performance comparison of these new TRs compared to the Genoa EPYCs and the 7950X.

Might be worth it for the density for me to get one.

It has been suggested to me by some enterprise early adopters over in the Hardforums that Naples and Rome are probably best avoided, as, like the early Zen Ryzens, they had some issues with RAM.

Milan+ seems to be the way to go when it comes to decommed EPYC systems, but that does increase the price point a little.

Another thing to keep in mind with any EPYC CPU is AMD's vendor locking feature, which allows the Dells, HPs, and others of the world to vendor-lock CPUs when integrated into their systems, after which they are not usable in any other manufacturer's motherboards.

Apparently there is no way to tell from the outside when this has been engaged. Many eBay sellers have been known to sell vendor-locked EPYC CPUs as unlocked, and then push back if the buyer files a complaint, suggesting that the buyer just has a system (motherboard/RAM/power/whatever) problem.

It's very much a buyer-beware world out there for these chips. That said, I want to go EPYC for my server at some point as well. :p
 
Another thing to keep in mind with any EPYC CPU is AMD's vendor locking feature, which allows the Dells, HPs, and others of the world to vendor-lock CPUs when integrated into their systems, after which they are not usable in any other manufacturer's motherboards.
WHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAT?!?!?!?!?!?!?!?!?!?!
 
WHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAT?!?!?!?!?!?!?!?!?!?!

Yeah, unfortunately it is true.

And they have the balls to call it a "security feature" too...

Apparently, if the feature is enabled in the BIOS, then the first time the CPU completes POST, it is from that point on irreversibly locked to the vendor of the motherboard that was used.

So, a CPU locked to a Dell motherboard can only be used in Dell motherboards, etc.

If you try to get through POST on another vendor's motherboard, you supposedly get a CODE 78 error and it will refuse to boot.

It's really ****ty. And the bald-faced lie that it has anything whatsoever to do with security is even more infuriating.

What I have read is that it can't be fixed by modding the BIOS to ignore the locking. Apparently it is done cryptographically, using some sort of public/private key arrangement.

The motherboard flashes its vendor's key to the ROM on the EPYC CPU on first POST, after which it burns out some trace or something, making the ROM unable to be written to or cleared again.

From that point on, if the right key isn't presented by the motherboard to the CPU during POST, the CPU will refuse to proceed.
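For anyone curious what that amounts to in practice, here's a toy Python sketch of the write-once key binding as described above. It is emphatically not AMD's actual implementation (the key names and the hash choice are made up); it just models a fuse slot that gets burned on first POST and checked on every boot after that.

```python
import hashlib

# Toy model of the write-once vendor binding described above. NOT AMD's
# actual PSP implementation -- just an illustration of a one-time-
# programmable key slot that must match on every later boot.

class ToyEpyc:
    def __init__(self):
        self.fused_key_hash = None  # empty "fuse" slot, writable exactly once

    def post(self, board_vendor_key: bytes) -> str:
        key_hash = hashlib.sha256(board_vendor_key).hexdigest()
        if self.fused_key_hash is None:
            # First POST with locking enabled: burn the board vendor's key
            # hash into one-time storage. Irreversible from here on.
            self.fused_key_hash = key_hash
            return "POST ok (CPU is now locked to this vendor)"
        if key_hash == self.fused_key_hash:
            return "POST ok"
        return "POST halted: vendor key mismatch (the reported CODE 78 case)"

cpu = ToyEpyc()
print(cpu.post(b"dell-platform-key"))        # first boot: key gets fused
print(cpu.post(b"dell-platform-key"))        # same vendor board: boots fine
print(cpu.post(b"supermicro-platform-key"))  # different vendor: refused
```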
 
It has been suggested to me by some enterprise early adopters over in the Hardforums that Naples and Rome are probably best avoided, as, like the early Zen Ryzens, they had some issues with RAM.

Milan+ seems to be the way to go when it comes to decommed EPYC systems, but that does increase the price point a little.

Another thing to keep in mind with any EPYC CPU is AMD's vendor locking feature, which allows the Dells, HPs, and others of the world to vendor-lock CPUs when integrated into their systems, after which they are not usable in any other manufacturer's motherboards.

Apparently there is no way to tell from the outside when this has been engaged. Many eBay sellers have been known to sell vendor-locked EPYC CPUs as unlocked, and then push back if the buyer files a complaint, suggesting that the buyer just has a system (motherboard/RAM/power/whatever) problem.

It's very much a buyer-beware world out there for these chips. That said, I want to go EPYC for my server at some point as well. :p
I don't know about the advice to avoid them. I've got a bunch of them running without issues. However, Naples sucks in terms of compute. All of mine are Rome, and I'm looking at buying two more 64-core setups as well as a 64-core Genoa setup in the very near future.

Some people I know have had issues with their systems only running 7 out of 8 RAM channels, but a CPU reseat usually fixes it. I've not had that issue, but I also don't have all 8 channels populated on some of mine. 4 channels is good enough for my use case.

All eBay sellers that I've dealt with will let you know if the CPU is vendor locked. I mostly just ordered 7V12 CPUs, as those all seem to be unlocked unless someone bought one and put it in a Dell or Lenovo motherboard. I don't think that's very likely, though. I've searched for those locked vendors' mainboards because they usually sell locked CPUs for a lot less, but I haven't been successful in finding them for a reasonable cost anyway.

Everyone I know buys Supermicro or ASRock boards for them.
 
The vendor locking is pretty clearly a move to devalue the second-hand market for the EPYC CPUs sold by Dell. And really, most businesses don't care anyway once the ROI is fulfilled.

Heck, it was a physical thing on some older IBM hardware: the gold mesh seating for the CPUs was good for one mating only. If something goes wrong and you have to pull the CPU, you have to replace the motherboard.

Like the EPYC vendor locking, this is something that won't be an issue outside of the HPs and Dells of the world selling servers.
 
I've not seen HP-locked CPUs, only Dell and Lenovo.

You can buy a 7V12 (64 cores, 128 threads) in motherboard and RAM combos on eBay for $1,500. Pretty cheap for the amount of computing power they have.

I just prefer to buy the ASRock motherboards instead of the Supermicro ones. The ASRock boards boot much faster than the Supermicro ones, which realistically isn't a problem once the system is up and running, but I'm super impatient.

I've got:
A dual-socket motherboard setup with 2x 64-core (Rome) CPUs and 5x P100 GPUs.
A single-socket motherboard setup with one 64-core (Rome) CPU and 6x P100 GPUs.
A single-socket motherboard setup with one 32-core (Rome) CPU and 6x 3070 GPUs.

My favorite thing about these setups is the IPMI (BMC) remote management hardware/software built into the boards, which allows full remote control of the system, including access to the BIOS, all from a web page.
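If the web interface ever isn't handy, these BMCs can usually be reached over standard IPMI-over-LAN as well. A minimal sketch wrapping ipmitool from Python; the host, username, and password are placeholders, and it assumes ipmitool is installed and the BMC's LAN channel is enabled:

```python
# Minimal sketch of querying a board's BMC over the network with ipmitool.
# Host, user, and password below are placeholders; assumes ipmitool is
# installed and the BMC is reachable via IPMI-over-LAN (lanplus).
import subprocess

BMC = {"host": "192.168.1.50", "user": "admin", "password": "changeme"}

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"], *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    print(ipmi("sensor", "list"))              # temps, fan speeds, voltages
```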

Future plans include:
Replacing the 32-core CPU with a 64-core part.
Two more 64-core Rome single-board setups.
One 64-core Genoa single-board setup.
 