AMD Ryzen Threadripper PRO 7995WX and 7975WX Leak Reveals 96- and 32-Core Zen 4 Workstation Processors

Tsing

The FPS Review
Staff member
Key specifications for AMD's new Ryzen Threadripper PRO 7995WX and 7975WX Zen 4 workstation CPUs have seemingly been outed by a number of SiSoftware benchmarks for Dell's latest series of Precision workstations. According to the listings spotted by @momomo_us, the Ryzen Threadripper PRO 7995WX, the new flagship, will feature 96 cores and 192 threads, while the 7975WX will feature 32 cores and 64 threads. The current-generation Ryzen Threadripper PRO 5995WX and Ryzen Threadripper PRO 5975WX chips, which are based on AMD's older Zen 3 architecture, feature 64 cores and 32 cores, respectively. AMD launched its Ryzen Threadripper PRO 5000 WX-Series Processors in March 2022.

See full article...
 
So are they going to do the same BS and not have any parts for the DIY market again?
 
So are they going to do the same BS and not have any parts for the DIY market again?

From what little I have read, it looks as if they will be returning to both PRO and non-Pro Threadripper models in this generation.

I was pissed when AMD abandoned the 3xxx-series Threadrippers on the TRX40 platform without a drop-in Zen 3 upgrade, but I am actually considering making the non-Pro Threadripper my next major upgrade anyway, possibly against my better judgment.

I was leaning towards just keeping the Threadripper 3960X as my work machine and building a dedicated secondary "gaming machine," due to AMD's intermittent support and the expense of upgrading Threadripper platforms. But I put together a huge spreadsheet adding up all of the costs involved, and between the added GPU, CPU, motherboard, and RAM, as well as good KVM switches with enough DisplayPort bandwidth to support three monitors, the in-place Threadripper upgrade actually winds up being cheaper, even if I have to pay the same high prices as the Threadripper PRO 5xxx platform.

So I'm very much in a "wait and see if non-Pro Threadripper 7xxx materializes, and if it does, what it looks like and how much it costs" mode.

I don't need the "more cores than God" model. I really go threadripper mostly for the PCIe lanes, and maybe slightly for the quad channel RAM and larger L3 cache. I'd take a 16 core or another 24 core Threadripper, as long as they haven't gimped the max turbo clock too much, like they did in the past.

It would be really cool if it was X3D cache enabled, as I'm probably not going to overclock it anyway.
 
It would be really cool if it was X3D cache enabled, as I'm probably not going to overclock it anyway.
I don't think you'll see less than 32 cores for CCX and memory channel reasons. But yeah, the PCIe lanes would be nice: a good x16 video card, plus another x16 storage controller supporting four 4 TB NVMe drives in RAID 10. Maybe a high-bandwidth NIC or something along those lines if you have some super-high-speed external stuff. Otherwise, a couple of the new Thunderbolt ports would be nice for expansion purposes.

Actually, I'm struggling here... unless you're moving HUGE amounts of data or doing AI work. For AI work you could add a couple more GPUs for crunching, and more RAM too. Modeling... geophysics or high-end physics engines for architectural work... yeah, OK, that makes sense.
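As a quick sanity check on that kind of build, here is a minimal Python sketch of the lane and capacity math. The x16 GPU, x16 storage controller, and four 4 TB drives come from the post above; the x8 NIC width and the Gen 4 per-lane throughput figure are assumptions for illustration only.

```python
# Rough PCIe lane budget for the build sketched above (widths per the post;
# the x8 NIC and the Gen 4 throughput figure are assumptions for illustration).
GEN4_MBPS_PER_LANE = 1969  # approximate usable throughput per Gen 4 lane

devices = {
    "GPU (x16)":               16,
    "NVMe storage HBA (x16)":  16,  # four x4 drives fully populate it
    "High-bandwidth NIC (x8)":  8,  # assumed width
}

lanes_needed = sum(devices.values())
print(f"Lanes needed: {lanes_needed} of 64 on a non-Pro Threadripper")
print(f"HBA slot throughput: ~{16 * GEN4_MBPS_PER_LANE / 1000:.1f} GB/s")

# RAID 10 with four 4 TB drives: two mirrored pairs, striped together.
drives, capacity_tb = 4, 4
print(f"RAID 10 usable capacity: {drives // 2 * capacity_tb} TB of {drives * capacity_tb} TB raw")
```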
 
I don't think you'll see less than 32 cores for CCX and memory channel reasons. But yeah, the PCIe lanes would be nice: a good x16 video card, plus another x16 storage controller supporting four 4 TB NVMe drives in RAID 10. Maybe a high-bandwidth NIC or something along those lines if you have some super-high-speed external stuff. Otherwise, a couple of the new Thunderbolt ports would be nice for expansion purposes.

Actually, I'm struggling here... unless you're moving HUGE amounts of data or doing AI work. For AI work you could add a couple more GPUs for crunching, and more RAM too. Modeling... geophysics or high-end physics engines for architectural work... yeah, OK, that makes sense.

A few things: only the PRO models have the same 128 PCIe lanes as EPYC. The non-Pro models halve EPYC's PCIe lanes down to 64 and also halve the 8-channel RAM down to 4 channels.

In my server I could DEFINITELY make use of 128 PCIe lanes, but on my desktop, probably not. Even the 64 lanes on regular Threadripper wind up being mostly unused; there's a massive amount of Gen 4 bandwidth just sitting idle.

Take my current Asus ROG Zenith II Extreme Alpha, for instance.


Despite all of those 64 available lanes (and a few more coming off the chipset), the layout can be restrictive.
There are "only" four 16x PCIe slots on the motherboard. The max they can support is 16x/8x/16x/8x, so those get a total of 48 lanes, all of them Gen 4.

Of those I use:
- 16x Gen 4 lanes for a GPU (full utilization of this slot).
- A dual-port 10 gig server NIC (8x Gen 2 card in a 16x physical, 8x electrical Gen 4 slot). Physical slot 50% used, but only 25.39% of the bandwidth utilized due to running Gen 2 in a Gen 4 slot.
- A 1x Gen 1 sound card in a 16x Gen 4 slot. Physical slot 6.25% used, bandwidth 0.79% used.
- A 4x Gen 3 PCIe NVMe SSD in a 16x physical, 8x electrical slot. Slot 25% filled physically, bandwidth 25% used.

So with these measly four PCIe expansion cards, I've used all of the PCIe slots on this motherboard while using only ~42% of the bandwidth allotted to them, and most of that goes to the GPU.
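For anyone curious where that ~42% number comes from, here's a minimal Python sketch of the same arithmetic. The per-lane throughput values are the usual approximate post-encoding figures (PCIe 1.x ≈ 250 MB/s, 2.0 ≈ 500 MB/s, 3.0 ≈ 985 MB/s, 4.0 ≈ 1969 MB/s per lane), so treat this as back-of-the-envelope math rather than anything official.

```python
# Approximate usable throughput per PCIe lane, in MB/s, after encoding overhead.
MBPS_PER_LANE = {1: 250, 2: 500, 3: 985, 4: 1969}

# (name, card lanes, card gen, slot electrical lanes, slot gen)
cards = [
    ("GPU",        16, 4, 16, 4),
    ("10 gig NIC",  8, 2,  8, 4),
    ("Sound card",  1, 1, 16, 4),
    ("NVMe SSD",    4, 3,  8, 4),
]

total_used = total_available = 0.0
for name, c_lanes, c_gen, s_lanes, s_gen in cards:
    used = c_lanes * MBPS_PER_LANE[c_gen]
    available = s_lanes * MBPS_PER_LANE[s_gen]
    total_used += used
    total_available += available
    print(f"{name:10s}: {used / available:6.2%} of slot bandwidth used")

print(f"Overall   : {total_used / total_available:6.2%} of all slot bandwidth used")
```

Running it reproduces the per-slot figures above (100%, ~25.4%, ~0.8%, ~25%) and an overall utilization of roughly 42%.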

Now, let's say I have a little data rescue project and need to pop in an 8x Gen 2 RAID card to pull some data off some SAS drives. Now I have to remove something. That something will probably be the sound card (because I only rarely use it for line-in recording, as my USB DAC doesn't have an input), but still, that's a pain in the ***.

Then we have the M.2 slots. This board has five of them between the onboard slots and the proprietary DIMM.2 riser. (One of the onboard ones shares bandwidth with the fourth PCIe slot and drops it down to 4x when in use, but that doesn't matter, as the card I have in there is only 4x anyway.)

Right now I do have them all populated, mostly "just because" I had the drives: I upgraded to new drives and moved the old ones to the other slots.

Do I really need them all? Nope. Two would be just fine.

The reason I want lots of PCIe lanes is not because I want to push some I/O bandwidth monster. It is for the flexibility, and ability to "just pop it in" if I need something.

Ideally, I just want a full row of eight 16x PCIe slots (physically and electrically). Only like two of them need to be the latest gen (Gen 4 or 5); the rest could be Gen 3 or even Gen 2 and still meet all of my needs, since I often pop in older server stuff.

I just want my PC to be like PCs used to be: lots of expansion capacity so I can install whatever hardware I want at any given moment.

Threadripper PRO is the only workstation platform like that in recent memory, but I probably won't buy it because they want WAY too much money for it. So I will go with the compromise solution, which is "consumer" Threadripper.

This was my Abit KR7A-RAID I used back in 2001:


SEVEN beautiful expansion slots. One AGP dedicated to the GPU, and six conventional PCI slots for any expansion I could ever want at the time. Very little extraneous junk I'll never use on board.

That - to me - is still the premium experience I want, offering the full flexibility a PC is supposed to offer. I just want it updated to the latest standards.

For all of its majesty and expense, my $950 top-of-the-line Threadripper board, once it has been equipped with a GPU, can only add three more expansion cards. Once my Abit KR7A-RAID had a GPU installed, it could add six additional expansion cards, twice that of the Threadripper. Oh, and it only cost me $139 brand spanking new at launch, because that's how top-end motherboards were priced in 2001.
 
How much stuff out there do you still need to drop in on an expansion card?

I mean - that Abit board you linked - it had 2 ATA connectors and 1 floppy connector, and I think it even had a RAID for another 2 ATA ports? One parallel and maybe 1 serial port. Looks like a pair of PS/2 ports for KB/M. That was pretty much it. So you needed to add in a lot more to it to get to the same baseline that modern boards have already built in.

I get you probably still want dedicated RAID cards. Maybe a dedicated NIC if the 2 that are onboard most high end boards aren't enough for you or you want to go spendy on 10Gb. But past that ... umm... you still have a GPU and ... that's about it really that you need to drop in to make a modern motherboard functional (and with IGP, often times not even that). Today's motherboards have oodles of USB, sound, often WiFi and Bluetooth, often dual NICs, a pair of NVMe's, a handful of SATA ports, the capability to handle IGP if it exists, and sometimes even ~still~ have PS/2 ports if you really want to get down with that.

I get liking to have options. And I also get preferring those options over having all that crap built in that you may or may not want. But don't mention one without the other: if you want to go back to the days of 7 PCI slots, then get rid of all the built-in crap that's driving your base price up.
 
I keep four NVMe M.2 drives in my desktop - and it has a board with TB4 built in, and an x4 slot that can handle a high-speed NIC, while keeping 16 lanes reserved for the GPU.

This is on Z690.

As cool as having more PCIe lanes and more memory channels would be, and I do occasionally find myself exploring the possibilities therein, I have a real hard time when it comes to putting a budget toward such a system. Mostly because any work I would do would be aspirational, and if being honest with myself, I'd probably only make use of the power available by running benchmarks.

Now, if I wanted to game on it as well as run say a fileserver, likely in a VM (I've done this successfully before...), well that might make sense. Could toss in an Arc GPU to handle AV1 transcoding for the VM for media archival and Plex services alongside the HBA and NVMe cards.

But the thing is... I'd still rather those functions be on a separate system. Fewer moving parts etc., for both the desktop and the 'server'.
 
Some interesting use cases here. Thanks for sharing them. And I tend to agree, all of those lanes on prosumer boards don't seem that great. More SLOTS!!
 
How much stuff out there do you still need to drop in on an expansion card?

I mean - that Abit board you linked - it had 2 ATA connectors and 1 floppy connector, and I think it even had a RAID for another 2 ATA ports? One parallel and maybe 1 serial port. Looks like a pair of PS/2 ports for KB/M. That was pretty much it. So you needed to add in a lot more to it to get to the same baseline that modern boards have already built in.

I get you probably still want dedicated RAID cards. Maybe a dedicated NIC if the 2 that are onboard most high end boards aren't enough for you or you want to go spendy on 10Gb. But past that ... umm... you still have a GPU and ... that's about it really that you need to drop in to make a modern motherboard functional (and with IGP, often times not even that). Today's motherboards have oodles of USB, sound, often WiFi and Bluetooth, often dual NICs, a pair of NVMe's, a handful of SATA ports, the capability to handle IGP if it exists, and sometimes even ~still~ have PS/2 ports if you really want to get down with that.

I get liking to have options. And I also get preferring those options over having all that crap built in that you may or may not want. But don't mention one without the other: if you want to go back to the days of 7 PCI slots, then get rid of all the built-in crap that's driving your base price up.

I guess I am constantly disappointed with most things on board, and as a result almost never use any of them. The exception is probably USB.

Just about everything else I just disable. Onboard NICs? Disabled. Onboard sound? Disabled. Onboard GPU? Disabled. Onboard SATA? Haven't used that on my main desktop in close to a decade. Almost everything integrated onto a motherboard is just a half-assed compromise using some cheap Realtek junk chipset. I do not want that. I want to design my system by choosing every component that is best for me.

I hate having to put up with the compromises motherboard manufacturers integrate on board, and want to specify every system component myself. I really wish nothing (except maybe USB) were integrated on board.
 
I keep four NVMe M.2 drives in my desktop - and it has a board with TB4 built in, and an x4 slot that can handle a high-speed NIC, while keeping 16 lanes reserved for the GPU.

This is on Z690.

As cool as having more PCIe lanes and more memory channels would be, and I do occasionally find myself exploring the possibilities therein, I have a real hard time when it comes to putting a budget toward such a system. Mostly because any work I would do would be aspirational, and if being honest with myself, I'd probably only make use of the power available by running benchmarks.

Now, if I wanted to game on it as well as run say a fileserver, likely in a VM (I've done this successfully before...), well that might make sense. Could toss in an Arc GPU to handle AV1 transcoding for the VM for media archival and Plex services alongside the HBA and NVMe cards.

But the thing is... I'd still rather those functions be on a separate system. Fewer moving parts etc., for both the desktop and the 'server'.

For me it is an accumulation over time type of thing.

I have a very particular set of hardware which I have acquired over a long period of time. :p


More seriously though, I buy hardware over time, one expansion card here, one expansion card there.

It's not like I build a system from scratch and plan out all of this high-end hardware I am going to stick in. Hardware is acquired for a variety of purposes (sometimes my main workstation, sometimes my servers, sometimes a router build, sometimes my stepson's gaming rig or other client machines in the house, etc.). Rarely are we talking about buying brand-new multi-thousand-dollar enterprise hardware. A lot of this stuff is bought as used enterprise pulls on eBay. (I've never been let down by a good enterprise recycler/refurbisher on eBay, but I don't trust used consumer hardware.)

I have a parts bin, which migrates out of builds being retired, and into new builds over time. Any one build is not particularly expensive (with some notable exceptions) but when I tally up everything I've spent over the years, it gets a little uncomfortable :p

I guess what I am trying to say is, I have a big ole bin of parts that, depending on my current needs, I migrate in or out of various systems. The truth is, things like high-end 10 gig server NICs and RAID cards last a very long time. You don't need to buy them brand new for every build. Especially if you use Linux, where, because the drivers are built into the kernel, hardware support is almost eternal.

I still daily a lot of 8x Gen 2 hardware, despite Gen 2 being very old at this point, and this old hardware performs WAY better than anything that might be included on board.
 
I don't think you'll see less than 32 cores for CCX and memory channel reasons.

Well, Threadripper 7000 is going to be a cut-down EPYC Genoa, and there are 16-core Genoas, and those have 12-channel RAM, so I don't think there is any technical reason it can't happen, though it may use binned dies with disabled cores.

(Previous consumer Threadrippers just halved the memory channels and PCIe lanes of their server counterparts: quad-channel, 64 lanes vs. octa-channel, 128 lanes. No idea whether the move to 12-channel (dodeca-channel?) on the server parts means the consumer Threadrippers will receive hexa-channel RAM or whether they will stick with quad-channel as before. It will be interesting to see.)
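To put rough numbers on that channel-count question, here's a back-of-the-envelope sketch of theoretical peak DDR5 bandwidth by channel count. The DDR5-5200 speed grade is purely an assumption for illustration, not a claim about what Threadripper 7000 will actually support.

```python
# Theoretical peak DDR5 bandwidth for various channel counts.
# DDR5-5200 is an assumed speed grade, for illustration only.
MT_PER_S = 5200          # mega-transfers per second per channel
BYTES_PER_TRANSFER = 8   # 64-bit wide channel

for channels in (4, 6, 8, 12):
    gb_per_s = channels * MT_PER_S * BYTES_PER_TRANSFER / 1000
    print(f"{channels:2d} channels: ~{gb_per_s:.0f} GB/s theoretical peak")
```

Whatever AMD ends up doing, the jump from quad-channel to hexa-channel would be a sizable bandwidth bump on its own.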


If I had to guess, the smallest version will be 24 cores again, but I could be wrong. I'd even buy a 16-core variant if they made it (provided it wasn't gimped with lower clocks). I can't think of a single time I've made good use of the 24C/48T in my 3960X; I'm really in it for the PCIe lanes. So if they come out with a smaller version, I will be happy, but I am not holding my breath either.
 
They also have 3D V-Cache dies, as well as their condensed (Zen 4c) dies, that they can throw on the package.

Another thing to consider is that we already have high-performance 48GB DIMMs for DDR5; you'd need the Registered ECC variants of course, but that kind of points toward fewer memory channels which would also help keep costs in check.

I still daily a lot of 8x Gen 2 hardware, despite Gen 2 being very old at this point, and this old hardware performs WAY better than anything that might be included on board.

If you're talking consumer, entry-level boards, I agree. And recent Intel wired NICs have been troublesome for some reason. The Marvell (née Aquantia) NICs have remained pretty good though, and are available on some boards, while onboard audio is almost indistinguishable from a discrete card these days. And you don't really want USB or Thunderbolt to be AIC-only; going AIC for USB can add complications, and Thunderbolt AICs eat a PCIe slot and haven't been terribly reliable as far as my research has gone.

About the only thing I'd want running discrete aside from the GPU would be an HBA for spinners, and that I'd just as soon put in another box. Could perhaps argue for 40 Gbps+ NICs, though that gets a bit wild from the consumer perspective and provides fairly limited returns, IMO.
 
Well, that's a bummer.

While earlier info was ambiguous and suggested that we might be seeing both this time, it's starting to look more and more like this launch will be like the last one: PRO only.

That really sucks.
 
Well, that's a bummer.

While earlier info was ambiguous and suggested that we might be seeing both this time, it's starting to look more and more like this launch will be like the last one: PRO only.

That really sucks.
Well, you can always go Intel - it's not like Intel's P-cores are slow, they're probably still core-for-core faster than Zen 4 - and you get the same HEDT benefits.

I want to do it just to do it at this point, which obviously makes no financial sense...
 
Well, you can always go Intel - it's not like Intel's P-cores are slow, they're probably still core-for-core faster than Zen 4 - and you get the same HEDT benefits.

I want to do it just to do it at this point, which obviously makes no financial sense...

I haven't kept up with Intel's HEDT offerings.

Last I researched them it looked like they had abandoned that market, but that was like 2 years ago.

I have no brand loyalties. I'll just as happily go with Intel if they can provide me with large numbers of PCIe lanes without sacrificing max core clocks too much relative to the top consumer models.

I didn't realize this had changed. I will have to research what their offerings are today.

All of that said, my big driver to upgrade was Starfield's CPU performance. It turns out Starfield did OK on my current Threadripper 3960X, unlike on its more consumer-oriented stablemates such as the 8C/16T Ryzen 7 3800X, which shares the same clocks (probably due to a combination of quad-channel RAM and the large L3 cache), and Starfield is likely an outlier as far as titles go. I don't expect anything else to really make me CPU-bound at 4K, even with a 4090.

So, I may just put off the upgrade. For everything else I do, the Threadripper continues to be excellent, even though the more mainstream AM5 chips are catching up to it (and beating it in some instances) in multithreaded work now.

As an aside, what are some key search terms for Intel's current prosumer/HEDT platform? Is it Fishhawk Falls?
 
Well, you can always go Intel - it's not like Intel's P-cores are slow, they're probably still core-for-core faster than Zen 4 - and you get the same HEDT benefits.

I want to do it just to do it at this point, which obviously makes no financial sense...
I haven't kept up with Intel's HEDT offerings.

Last I researched them it looked like they had abandoned that market, but that was like 2 years ago.

I have no brand loyalties. I'll just as happily go with Intel if they can provide me with large numbers of PCIe lanes without sacrificing max core clocks too much relative to the top consumer models.

I didn't realize this had changed. I will have to research what their offerings are today.

All of that said, my big driver to upgrade was Starfield's CPU performance. It turns out Starfield did OK on my current Threadripper 3960X, unlike on its more consumer-oriented stablemates such as the 8C/16T Ryzen 7 3800X, which shares the same clocks (probably due to a combination of quad-channel RAM and the large L3 cache), and Starfield is likely an outlier as far as titles go. I don't expect anything else to really make me CPU-bound at 4K, even with a 4090.

So, I may just put off the upgrade. For everything else I do, the Threadripper continues to be excellent, even though the more mainstream AM5 chips are catching up to it (and beating it in some instances) in multithreaded work now.

As an aside, what are some key search terms for Intel's current prosumer/HEDT platform? Is it Fishhawk Falls?

Hmm. Did some brief googling.

Sapphire Rapids seems to be the current server/workstation platform. If my brief googling is accurate, they have the same core architecture as the P-cores in consumer Raptor Lake chips.

They also have more than enough PCIe lanes: 64 in the lower-end models and 112 in the higher-end models.

The only downside is that the top turbo clock maxes out at 4.8 GHz, compared to 6 GHz on the 13900KS.

I wonder how easy it is to overclock them to perform on par with the consumer models' clocks...

I've been googling, but there aren't exactly a lot of reviews that test overclocking on these chips, though the news stories that reference the press release suggest that they are unlocked. If it's a thermal-limit thing, I'd totally disable a few cores in order to allow the best ones some breathing room and hit 13900KS speeds.
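Just to put the clock gap in perspective, here's a trivial sketch using the figures quoted above (and assuming equal IPC, which is an oversimplification):

```python
# Rough single-thread clock comparison, assuming equal IPC (an oversimplification).
sapphire_rapids_boost_ghz = 4.8  # top turbo quoted above
consumer_13900ks_ghz = 6.0       # consumer flagship boost quoted above

deficit = 1 - sapphire_rapids_boost_ghz / consumer_13900ks_ghz
overclock_to_match = consumer_13900ks_ghz / sapphire_rapids_boost_ghz - 1

print(f"Stock clock deficit vs. the 13900KS: {deficit:.0%}")           # ~20%
print(f"Overclock needed to close the gap:   {overclock_to_match:.0%}")  # ~25%
```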
 
I wonder how easy it is to overclock them to perform on par with the consumer models' clocks...
You have two variables nowadays - silicon quality and cooling.

I've yet to see anyone take one of the lower-core-count parts and bolt some serious cooling to it for overclocking. Many, if not most, have actually used Noctua air coolers and pushed upward of 500 W through them. With that much surface area and without pushing voltage and clock speed much beyond stock, Sapphire Rapids comes across as a pretty tame beast.

Now, if I were to go that route, I'd get a MORA 420 with the 200mm Noctua mounting (so 4x 200mm) and just run an open chassis.

Also, take a look at the boards available. From Intel 10GBase-T NICs to built-in remote management (like servers have), there's something for everyone. Not cheap by any means, but also hard to find them lacking in any serious way.
 
I want to do it just to do it at this point, which obviously makes no financial sense...

You can say that again.

Some brief research on Sapphire Rapids places the workstation CPUs in the $1,500-$2,000 price range and the motherboards in the $900-$1,200 range. Yikes.

That's a bit much even for me.
 