AMD Ryzen Threadripper PRO 7995WX and 7975WX Leak Reveals 96- and 32-Core Zen 4 Workstation Processors

You have two variables nowadays - silicon quality and cooling.

I've yet to see anyone take one of the lower core-count parts and bolt some serious cooling to it for overclocking. Many, if not most, have actually used Noctua air coolers and pushed upward of 500W through them. With that much surface area, and without pushing voltage and clock speed much beyond stock, Sapphire Rapids comes across as a pretty tame beast.

Now, if I were to go that route, I'd get a MORA 420 with the 200mm Noctua mounting (so 4x 200mm) and just run an open chassis.

Also, take a look at the boards available. From Intel 10GBase-T NICs to built-in remote management (like servers have), there's something for everyone. Not cheap by any means, but hard to find them lacking in any serious way, either.


More on this later in a different thread, but my long-term plan is to actually move my main workstation to a 5U chassis, with an external MORA rad (or two).
 
That's a bit much even for me.
It's about 2x what you'd pay for higher-end enthusiast parts, so yeah, expensive, but perhaps not crazy if you can put it to use.

Granted, in your situation I'd just recommend demoting your current TR system to a server and going for a decent enthusiast build.
 
You can second that again.

Some brief research on Sapphire Rapids places the workstation CPUs in the $1500-$2000 price range, and the motherboards in the $900-$1200 range. Yikes.

That's a bit much even for me.
That's what's kept me from even considering it. Maybe once I get some bills off the table I might consider it, but at this point that kind of money goes to the best GPU I can get per gen.
 
I want to do it just to do it at this point, which obviously makes no financial sense...

Or I could just give up on the whole "PCIe slot flexibility" demand I've had going on.

I don't absolutely need it on my main machine anymore, now that onboard 10gig networking has become - if not mainstream - at least somewhat normal.

I'm currently set up for SFP+ networking though, so I'd have to use adapters. I currently make use of two 10gig networking ports: one to my main network, and one direct link to my NAS. I haven't found a single LGA1700 or AM5 board out there with dual 10gig NICs, but I guess I could use the 10gig for the NAS and the 2.5gig for the main network, provided I can find copper-to-SFP+ adapters that can actually negotiate 2.5gig speeds on a 10gig switch...
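
If I do end up going the adapter route, the first thing I'd want to verify is what speed the link actually negotiated, since plenty of copper SFP+ modules quietly fall back to 1gig. A minimal sanity-check sketch, assuming a Linux box (it only reads the kernel's standard /sys/class/net/<iface>/speed attribute; interface names will obviously differ per system):

```python
#!/usr/bin/env python3
"""Print the negotiated link speed of each network interface.

Reads the standard Linux sysfs attribute /sys/class/net/<iface>/speed,
which reports the negotiated speed in Mb/s. Handy for spotting a
copper SFP+ module that linked at 1000 instead of 2500.
"""
from pathlib import Path

def link_speeds():
    """Yield (interface_name, speed_mbps) for every interface with a live link."""
    for iface in sorted(Path("/sys/class/net").iterdir()):
        try:
            speed = int((iface / "speed").read_text().strip())
        except (OSError, ValueError):
            continue  # downed links and virtual interfaces raise here
        if speed > 0:  # the kernel reports -1 when the speed is unknown
            yield iface.name, speed

if __name__ == "__main__":
    for name, mbps in link_speeds():
        print(f"{name}: {mbps} Mb/s")
```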

Then I could move the current Threadripper board to the workbench/testing build I keep in my office for doing IT/imaging/firmware-flashing/etc. style work, which is really where the PCIe slots are the most important.

It just feels wrong though, like I am giving up on the way PCs are supposed to work in favor of this modern all-in-one compromise bullshit.

I'll have to sleep on it. I'd certainly spend a hell of a lot less money, and get better peak CPU performance in all but the most threaded workloads.
 
Granted, in your situation I'd just recommend demoting your current TR system to a server and going for a decent enthusiast build.

I won't use it as a server. My server is old, and could definitely use an upgrade, but I don't like using consumer parts for server needs.

It would probably go into the workbench/testbench build. That's really where I take most advantage of the PCIe lanes anyway.
 
The annoying part is, I would be completely happy going with a consumer solution if there were EVEN ONE motherboard out there with an x16/x8 configuration that allowed me to simultaneously use an x16 GPU and an x8 network controller.

There are enough PCIe lanes to make this happen on both AM5 and LGA1700, but not a single manufacturer has gone with this configuration. At least not that I can find.

I figured Supermicro might on their green server/workstation-oriented boards, but not even them.

The closest I have found is the Supermicro X13SAQ with this rather odd slot configuration:

  • 1 PCIe 5.0 x16
  • 1 PCIe 4.0 x4 (in x8 slot)
  • 1 PCIe 3.0 x4
  • 1 PCIe 3.0 x2 (in x4 slot)
  • 1 PCIe 4.0 x4 SlimSAS


 
It just feels wrong though, like I am giving up on the way PCs are supposed to work in favor of this modern all-in-one compromise bullshit.
I have used... two AICs in addition to the GPU in the past five or seven years. One was for more USB ports (internal in addition to external, because the board didn't have enough for the case), and the other was a NIC.

Sound? That's been external via USB even longer than that. Audiophile stacks at first, but now mostly just straight out of my SSL2+, which measures and sounds as good as any decent DAC will, while providing a stellar set of preamps and balanced connectivity, on top of how cool it looks and feels.

I don't use SATA anymore on my desktop; that's for the server to run ZFS on.

And I don't even know what other AICs I might use. If I were a streamer, I'd probably have a separate PC for that too - could go ITX even for that one, since you'd only need one slot for the capture card, and I'd use one of Rode's interfaces that has two USB connections for two separate devices, and can route between them.

But what else is there these days?
 
and an x8 network controller.
Is this really an issue? I've run my X550-T2 off of an x16 physical / x4 electrical slot without issue. Unless you're planning on hitting 10Gbps both ways on both interfaces?
 
That hasn't been my experience at all. Every x8 Intel enterprise NIC I've run in an x4 slot has had an absolute panic attack and just refused to work properly, even at light loads.

That said, I don't have an X550-T2 in any of my systems.

All four 10gig-enabled systems in the house right now (desktop/workstation, testbench machine, server, router) use dual-port X520 SFP+ cards (82599ES chips, I think).

They are old, but they are rock solid. Reliable as hell.

I also have two older Intel AT2 copper 10gig NICs (82598EC chip). Same experience.

They throw an absolute hissy fit when installed in an x4 slot. Like, they show up, are recognized by the system, let you install drivers, and will even get a link with another system, but the connection is flaky as hell, dropping out on occasion and wildly oscillating between ~gigabit and 10Mbit speeds, but never faster than that, even with a single port.

My conclusion has been that Intel NICs of that era were the most reliable ever, as long as you gave them their full x8 link. Deviate from that even slightly and they freak the hell out.
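
For what it's worth, the quickest way I know to confirm a card has actually trained down (as opposed to a driver or cabling problem) is to compare its current link width against its maximum, which Linux exposes in sysfs; lspci -vv shows the same data under LnkCap/LnkSta. A rough sketch, nothing more:

```python
#!/usr/bin/env python3
"""Flag PCIe devices that trained at less than their maximum link width.

Reads the standard Linux sysfs attributes current_link_width and
max_link_width; an x8 card in an x4 electrical slot shows up here
as x4 of a possible x8.
"""
from pathlib import Path

def pci_link_widths():
    """Yield (pci_address, current_width, max_width) for each PCIe device."""
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        try:
            cur = int((dev / "current_link_width").read_text())
            cap = int((dev / "max_link_width").read_text())
        except (OSError, ValueError):
            continue  # not every device exposes the link attributes
        yield dev.name, cur, cap

if __name__ == "__main__":
    for addr, cur, cap in pci_link_widths():
        note = "  <-- downtrained" if 0 < cur < cap else ""
        print(f"{addr}: x{cur} (max x{cap}){note}")
```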

Maybe Intel resolved this in the X550s, then? That is good to know.

The thing is, I had been toying with the idea of moving to 25gig or 40gig, at least for my NAS traffic. If I go to a consumer board I can make my current 10gig work, but I pretty much give up on growing my network speeds beyond that. At least until affordable x4 gen3 or gen4 25gig NICs hit the market (if they ever do).
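
For reference, the back-of-the-envelope math on what those slots can actually feed (my numbers: usable per-lane throughput after 128b/130b encoding, ignoring the last few percent of protocol overhead, so treat them as optimistic):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope: PCIe link bandwidth vs. Ethernet line rate.

Per-lane figures are usable Gbit/s after 128b/130b encoding on
Gen3 (8 GT/s) and Gen4 (16 GT/s); real protocol overhead shaves
off a few more percent.
"""
PER_LANE_GBPS = {"gen3": 8 * 128 / 130, "gen4": 16 * 128 / 130}

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (4, 8):
        cap = per_lane * lanes
        for nic_gbps in (10, 25, 40):
            verdict = "fits" if cap >= nic_gbps else "too slow"
            print(f"{gen} x{lanes} ({cap:.1f} Gbit/s) vs {nic_gbps}GbE: {verdict}")
```

So on paper a single 25gig port fits in a gen3 x4 slot with headroom; it's dual-port 25gig, or 40gig, where gen4 x4 (or gen3 x8) starts to matter.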
 
I haven't kept up with Intel's HEDT offerings.

Last I researched them it looked like they had abandoned that market, but that was like 2 years ago.
Yeah, it was my understanding that HEDT is currently dead for both companies. I would very much welcome a return to the HEDT market for either company, preferably both.
 
I think HEDT is very much alive for both, but they have just turned it into TRUE HEDT. In other words, workstation.

Here, top per-core clocks are not as important; large core counts, quad-, octa-, or even dodeca-channel RAM, and large numbers of PCIe lanes are, and those who need them know it and are willing to pony up the minimum of $3000 it takes.

For a while there HEDT was a "best of both worlds" type of solution: all the benefits of a high-end consumer machine plus all the benefits of a true workstation machine (at a premium price), but the last release like this from any company appears to have been the Threadripper 3xxx series CPUs.

Since then HEDT = workstation, and is great at workstation stuff, but is excessively priced. Oh, and it significantly sacrifices gaming performance, if you care about that.

My mindset has always been a "no compromises" high-end machine, but that doesn't seem to exist anymore. You have to choose the workstation path or the gaming path. You can't "level up" both at the same time.

And that is a crying shame.
 
I mean... let's be honest. If you're doing ACTUAL HEDT stuff, you want a workstation that is more task-focused.

If you want a gaming system, you want that. If you want a blend, you build a high-end gaming system and get almost workstation-like performance until you cross the $10k line. (Rather easy to do in workstations.)

Then up from $10k, forget about gaming, because you're doing heavy computational work or even higher-end AI work. (Though AI really is the realm of number-crunching servers these days. Desktop work is nice for a PoC, but to actually iterate and ingest vast quantities of data you need a bigger platform with a much fatter pipe, not to mention massive storage and the processing power/RAM to navigate the petabytes of data you may be dealing with.)

I'd love to tinker with some for-profit AI; the problem is nobody wants to fund a porn AI bot. No joke, it would probably make MAD money... but actually doing it is the hard part.

I suppose it could be billed as some sort of sexual-therapy AI for the gender-undecided and such. A safe space to express non-uniform desires and fantasies.

It could be profitable, but the cost per user would either be high or the AI would be slow. Not to mention training it (licensing access to some repository of text kink, like a specific lit site or Amazon's selection of erotica).

I really need to pay some people in India to write that. Maybe a Fiverr task... ;)
 
Kinda like when I need a truck for work, but I want to drive a sports car.

Sure, you can get something that kinda does both, but then it pretty much sucks at both.
 
Like a 1999 Ford F-150 SVT Lightning? :p

The analogy is apt, I guess, but here's the frustrating part: with the vehicles there are real physical things about a truck (poor front-to-rear weight distribution, high center of mass, truck chassis, stiff axles, etc.) that mean it will never be a sports car, no matter how huge an engine you put in it. With a PC, this is not the case.

There is nothing about a workstation-type design that precludes it from also having the screaming per-core performance of a consumer gaming desktop.
The only exception is having to choose between ECC and regular RAM, as registered ECC does add a cycle of latency, harming performance, but in real-world numbers this won't make a huge difference.
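
Rough numbers on that, assuming the usual one-cycle register-buffer penalty on RDIMMs and a ballpark end-to-end load latency (illustrative assumptions, not measurements):

```python
#!/usr/bin/env python3
"""Rough math on the registered-DIMM latency penalty.

Assumes the register buffer adds one memory-clock cycle and that a
typical end-to-end load is on the order of 80 ns; both numbers are
ballpark, chosen only to show the order of magnitude.
"""
mem_clock_mhz = 2400           # DDR5-4800 runs a 2400 MHz memory clock
cycle_ns = 1000 / mem_clock_mhz
load_latency_ns = 80           # ballpark desktop/workstation load latency

penalty = cycle_ns / load_latency_ns
print(f"one extra cycle = {cycle_ns:.2f} ns, "
      f"about {penalty:.1%} of an {load_latency_ns} ns load")
```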

It is totally possible to build a PC that is both, and both AMD and Intel used to make products that did just this. They just don't want to anymore, and thus we are all stuck with compromises.

I hate compromises.
 
I think HEDT is very much alive for both, but they have just turned it into TRUE HEDT. In other words, workstation.
...
Since then HEDT = workstation, and is great at workstation stuff, but is excessively priced. Oh, and it significantly sacrifices gaming performance, if you care about that.
To me, HEDT is "prosumer", which means in between the mainstream and professional (workstation/server) market segments.
 