TEAMGROUP Announces New T-FORCE SIREN Series CPU/SSD AIO Liquid Cooler

So are you really pumping enough heat through the SSD that running CPU-warmed liquid through the cooler will help dissipate more?

And don't NVMe SSDs also have a low-temperature floor, where performance is reduced if they run too cool?

And this is the first AM5 cooler I've seen.
 
So are you really pumping enough heat through the SSD that running CPU-warmed liquid through the cooler will help dissipate more?
I would think it would be the other way around - warmed "SSD" water to the CPU - where the heat load from the SSD is fairly inconsequential and wouldn't significantly impact the CPU.

And you aren't going to get anything sub-ambient, so I don't think there will be issues with performance loss from running too cold.

That said, I think this will go about like HDD and DIMM watercooling did a few years ago. It was all the rage when you first needed to start cooling DIMMs, but we never really got to the point of needing active cooling on either (unless you are getting into some really dense situations) - and while open-loop jackets existed for these, they never really caught on. I think SSD cooling will follow a similar vein -- I would imagine there will be diminishing returns to cranking up the controller to require active cooling, and as generations get more efficient on better nodes, it will probably stay that way.
 
I would imagine there will be diminishing returns to cranking up the controller to require active cooling, and as generations get more efficient on better nodes, it will probably stay that way.
I know you're probably right, but the way nodes, and heat, have been going for CPUs/GPUs, things could get hotter before they get better. Just trying to imagine a single NVMe stick hitting 14 GB/s makes my head spin.
 
Just going to say this: the order in which items are placed in a water cooling loop does not matter. The loop is going to reach an equilibrium temperature where heat absorbed equals heat expelled. The coolant's thermal capacity will never be saturated by whatever you are cooling, so long as you have enough radiator/fan to remove the heat.

CPU, GPU, SSD, whatever, doesn't matter where it is in the loop. J2C and GN both have really long videos covering this.

For instance my GPU is first in loop, CPU second. Under full load my CPU stays in the low 50's, GPU stays in the upper 30's. If I reverse their order the temps would be the same.
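For anyone who wants to poke at the equilibrium claim, here's a minimal steady-state sketch. All the numbers (1 L/min flow, the wattages, the radiator effectiveness) are hypothetical round figures for illustration, not measurements of any real loop:

```python
# Toy steady-state model of a closed loop. Flow rate, heat loads, and
# radiator effectiveness are hypothetical round numbers, not measurements
# of any real loop.
FLOW_KG_S = 1 / 60   # ~1 L/min of water, in kg/s
CP = 4186            # specific heat of water, J/(kg*K)
AMBIENT = 25.0       # room temperature, C
RAD_EFF = 0.5        # fraction of the excess heat the radiator rejects per pass

def run_loop(blocks, passes=200):
    t = AMBIENT      # coolant temperature leaving the radiator
    inlets = {}
    for _ in range(passes):                    # circulate until it settles
        for name, watts in blocks:
            inlets[name] = t                   # coolant temp entering this block
            t += watts / (FLOW_KG_S * CP)      # coolant picks up the block's heat
        t = AMBIENT + (t - AMBIENT) * (1 - RAD_EFF)  # radiator pass
    return inlets

for order in ([("GPU", 300), ("CPU", 150)], [("CPU", 150), ("GPU", 300)]):
    print({name: round(temp, 1) for name, temp in run_loop(order).items()})
# -> roughly {'GPU': 31.5, 'CPU': 35.8} vs {'CPU': 31.5, 'GPU': 33.6}
```

In this toy model the loop rejects the same total heat either way, and each block's coolant inlet shifts by only a couple of degrees between orders, which matches the "order doesn't meaningfully matter" takeaway from those videos.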
 
I know you're probably right, but the way nodes, and heat, have been going for CPUs/GPUs, things could get hotter before they get better. Just trying to imagine a single NVMe stick hitting 14 GB/s makes my head spin.
Well, on the CPU/GPU side, that's computational load, and largely from stacking a lot of parallel units. And so long as we can either make wafers big enough or shrink the node small enough to pack more parallel units on there - that's a lot of what is driving that right there.

In storage - well, there are a lot of parallel storage units, but .... they aren't exactly computational - they only do work if they change state. Yeah, the controller will need some cooling, but the controller is a very small part of the overall package, and it doesn't exactly need to be super-high powered with respect to computational ability.

Same thing with NICs really - you see a few high performance ones that need active cooling, but by and large the vast majority of them are entirely passive - most without even a sink, just because it isn't really needed to hit consumer-level performance.

I'd expect enterprise, at some level, to require some cooling, either because it's just that high performance or because you've got the density up there to where you can't passively cool it any longer. But in the consumer space, right now active cooling is a novelty more than a necessity, and I suspect it will stay that way.

I know a lot of folks had a cow when the X570 came out with an actively cooled chipset. And when 4K 120Hz monitors started coming out with active cooling. Now, SSDs aren't exactly in the same vein, and I don't know that the vast majority of consumers even know the difference between active and passive cooling (unless they can hear it running). A lot of us can remember back to the original Pentium P60, which was the first consumer CPU I can recall to require active cooling, and the jokes about frying an egg on it. There's always a bit of resistance when things first start requiring active cooling, and you could even make the case in CPUs/GPUs that we are finally coming around full circle (with ARM) to the point where we can get adequate performance again with just passive cooling.
 
CPU, GPU, SSD, whatever, doesn't matter where it is in the loop. J2C and GN both have really long videos covering this.
Maybe at the relatively close temperatures that a CPU loop operates, but physics states otherwise. I mean, if we (the collective "We" of this forum community, not singling anyone out here) are gonna discuss a difference in opinion about how much gas it takes to come to a complete stop, I'm gonna argue loop order matters, even if only slightly.
 
The discussion is about transfer of energy between two materials, one of which is effectively in motion, and that motion is what transfers the energy. The total surface area over time of the in-motion material is greater, allowing for more energy transfer over that period of time. The passage through the radiator is a fixed length, and with a fixed pump speed it would transfer the same amount of energy. Now the question is how much energy transfer is lost in a 5-degree difference (which sounds high for an NVMe waterblock), and does the inlet temperature of the liquid make a substantial difference?

I would argue it does matter, or actively cooled radiator lines wouldn't make a difference. Accepting that a larger temperature delta between parts means greater heat transfer, it only stands to reason that the order of items in a loop will matter. Now the question is really about scale more so than if.

I think we can all accept that temperature difference matters.

Now we talk scale, and whether the few degrees picked up on a multi-item loop actually makes an appreciable difference in heat transfer. I will accept that it probably doesn't affect overall loop performance.
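To put a number on "scale": the coolant's temperature rise through any single block is just the block's heat load divided by (flow rate × water's heat capacity). A quick sketch, with the flow rate and wattages as assumed round numbers:

```python
# dT = P / (m_dot * c_p): coolant temperature rise across one block.
# The 1 L/min flow and the wattages below are assumptions for illustration.
flow_kg_s = 1 / 60   # ~1 L/min of water
cp = 4186            # J/(kg*K)
for device, watts in [("NVMe SSD", 10), ("CPU", 150), ("GPU", 300)]:
    dt = watts / (flow_kg_s * cp)
    print(f"{device}: coolant rises ~{dt:.2f} C across the block")
# -> ~0.14 C for the SSD, ~2.15 C for the CPU, ~4.30 C for the GPU
```

So a 5-degree rise across an NVMe block would indeed be very high; at typical flow rates, an SSD's handful of watts moves the coolant by a fraction of a degree.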
 
Maybe at the relatively close temperatures that a CPU loop operates, but physics states otherwise. I mean, if we (the collective "We" of this forum community, not singling anyone out here) are gonna discuss a difference in opinion about how much gas it takes to come to a complete stop, I'm gonna argue loop order matters, even if only slightly.
Nope. After 20-30 minutes of use your loop equalizes. Temperatures settle. Doesn't matter where something is in your loop, the temp will be the same after X time.
 
Nope. After 20-30 minutes of use your loop equalizes. Temperatures settle. Doesn't matter where something is in your loop, the temp will be the same after X time.

In measurable practice I am sure you are correct. But in actual fact you are incorrect.

Otherwise we would just have water passing through with no need to radiate the thermal energy that has been transferred to the water into the air.

I understand that, looking at your loop and measuring the water at the radiator, the temperature appears to stabilize. But you are not seeing the temperature of the liquid at each stage in the loop.

If you accept that the liquid is warmer when it enters the radiator and cooler when it leaves the radiator, then you also must accept that the same liquid is gaining thermal energy as it passes through each of the heat exchange elements. Hence the water entering a second device on the loop is warmer than the water was when it entered the first device on the loop.

Now if you can't accept that... well, I think we are now pitting opinion against fact. Your opinion is that all of the water in the loop is effectively the same temperature. Somehow...

Meanwhile, fact argues that an exchange of thermal energy is happening, even if you don't actively detect it with what you are looking at.
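To put rough numbers on that serial pickup (same assumed 1 L/min flow and hypothetical loads as the sketches above):

```python
# Cumulative coolant temperature through blocks plumbed in series.
flow_kg_s, cp = 1 / 60, 4186
t = 30.0  # assumed coolant temperature leaving the radiator, C
for name, watts in [("GPU", 300), ("CPU", 150), ("SSD", 10)]:
    print(f"coolant enters the {name} at {t:.2f} C")
    t += watts / (flow_kg_s * cp)   # liquid gains heat at each exchanger
print(f"coolant returns to the radiator at {t:.2f} C")
```

The gradient is real; it's just small next to the much larger delta between coolant and silicon, which is why component sensors barely show it.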
 
I know a lot of folks had a cow when the X570 came out with an actively cooled chipset. And when 4K 120Hz monitors started coming out with active cooling.
Totally getting off track with you here, but yeah, I remember those. My MPG X570 has one. Anytime I'm doing a gaming session I usually use the BIOS to max all the system fans, but then go to that one and set it to "boost"; it's barely audible then. Otherwise, that puppy is pretty loud at ~6000 rpm; boost keeps it at ~1200-2000 rpm.

Meanwhile, I remember reading some pretty hilarious reviews of those first 4K 120Hz monitors (ROG, if I remember correctly). Evidently, they got kind of loud as well.


In the end, I think we'll just have to wait and see if stuff like this really is needed. Once enough people can get their hands on PCIe 5.0 NVMe drives and really use them at max speeds, we'll actually know what's happening instead of speculating. With PCIe 4.0 it's been a bit hit or miss, since it's only on AMD, and even then direct access storage for games has been almost non-existent, so other than large file transfers and benches most people hardly ever see these drives used to their fullest potential.
 
In the end, I think we'll just have to wait and see if stuff like this really is needed. Once enough people can get their hands on PCIe 5.0 NVMe drives and really use them at max speeds, we'll actually know what's happening instead of speculating. With PCIe 4.0 it's been a bit hit or miss, since it's only on AMD, and even then direct access storage for games has been almost non-existent, so other than large file transfers and benches most people hardly ever see these drives used to their fullest potential.

I have to say that in actuality most SSDs are rarely ever driven at maximum throughput. So what becomes more important than IOPS is latency. Now most SSDs, and every NVMe drive regardless of PCIe version, have insanely low latency. So direct access storage, or whatever it is called, will be the real flex for NVMe owners. The more bandwidth you have, the better your performance will be... as long as the controllers can keep up. THAT will be the trick.

I think we are going to find some weak links when we start flexing that new API.
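One way to see why latency beats peak throughput for typical desktop work is Little's Law: sustained IOPS ≈ outstanding I/Os ÷ per-I/O latency. The 80 µs figure below is a ballpark 4K random-read latency assumed for illustration, not a measurement of any particular drive:

```python
# Little's Law sketch: throughput = concurrency / latency.
LATENCY_S = 80e-6    # assumed ~80 us per 4K random read
BLOCK_BYTES = 4096
for qd in (1, 4, 32):                 # queue depths
    iops = qd / LATENCY_S
    print(f"QD{qd}: {iops:,.0f} IOPS, ~{iops * BLOCK_BYTES / 1e6:,.0f} MB/s")
# -> QD1 ~12,500 IOPS (~51 MB/s); QD32 ~400,000 IOPS (~1,638 MB/s)
```

At the low queue depths desktop workloads actually generate, a 14 GB/s link sits mostly idle; shaving latency and keeping the controller from throttling matter far more than the headline sequential number.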
 
as long as the controllers can keep up. THAT will be the trick.

I think we are going to find some weak links when we start flexing that new API.
I agree. I still vividly remember how, when the first SSDs and NVMe drives started popping up, all the weak links showed up where various controllers were not up to par. I forget which controllers in particular, but I do remember it got to the point where it was best to research who was using which controller before making a purchase, regardless of the brand name.
 
I agree. I still vividly remember how, when the first SSDs and NVMe drives started popping up, all the weak links showed up where various controllers were not up to par. I forget which controllers in particular, but I do remember it got to the point where it was best to research who was using which controller before making a purchase, regardless of the brand name.
It's funny because I waited so long before getting my first SSD that all of my SSD and NVMe drives have been Samsung, and I've been happy with them all. Like my RAM: I will use Corsair or Crucial, but I won't touch G.Skill. The one time I did, it wouldn't run at out-of-the-box settings on the motherboard I had, so I returned it and got cheap Corsair and it worked fine. In my experience, being able to work out of the box is more valuable than being highly tweakable.
 
It's funny because I waited so long before getting my first SSD that all of my SSD and NVMe drives have been Samsung, and I've been happy with them all. Like my RAM: I will use Corsair or Crucial, but I won't touch G.Skill. The one time I did, it wouldn't run at out-of-the-box settings on the motherboard I had, so I returned it and got cheap Corsair and it worked fine. In my experience, being able to work out of the box is more valuable than being highly tweakable.
I think I've tried most flavors of SSDs and NVMe drives: Samsung, Sandisk, Toshiba, PNY, Intel, Sabrent, Inland, and a few others that I can't remember. The only brand I'll never use again is Kingston. I had 4x 2280s in two RAID arrays in a laptop (an MSI GT80 Titan, which had two hardware RAID chips), one pair for the OS and the other for games, and one drive crapped out barely three months in. I replaced them with Samsung 850s and those drives are still alive to this day.

I've had good luck with G.Skill's 32 GB 2133 MHz kits in my 2600K (MSI GD65 Z68) and 4930K (Gigabyte UD3 X79) rigs. I must've gotten lucky, because you're not the first person to say they had issues out of the box. That 4930K rig is still alive and kicking. I think I may've even used one of their kits for that laptop I mentioned above.

I was really concerned about compatibility with the X570/3700X rig, so I stuck with Corsair and it worked out. I've mostly stuck to one or the other for over ten years now. Although I haven't used any Crucial, they're always on my list of something to try.

I have to completely agree with you that it's more valuable to have something that just works, as opposed to having to tweak it to get it to work. These days I've got enough going on that it's just not worth my time to nurse something along to do what it was bought to do.
 
It's funny because I waited so long before getting my first SSD...
Same. I didn't even touch SSDs until Samsung 3D V-NAND. I've never used an SSD without some kind of 3D NAND in it. So I missed a lot of the early growing pains of the SSD world.
 
I hopped on fairly early. I had good luck with them, though - a fairly similar failure rate to hard drives, maybe even a bit better. I have some very early generation ones that are still running in oddball places. Even early on the speeds were so much better than HDDs that, even if they had failed often, I probably wouldn't have shied away from them. They were so expensive you could really only afford for them to be your boot drive + maybe 1 or 2 other things that you used frequently, so backing up/restoring was pretty easy. That was at the time when we were looking at RAID controllers, big striped arrays, and RAM drives to get any speed increases we could, so the SSD prices weren't that unreasonable if you were in a performance mindset.
 
Yeah, and now we're speccing out 30+ TB SSD arrays, getting microsecond-level latency to disk subsystems and more IOPS than you can shake a stick at.

Not surprising since my personal drive tests at almost a million IOPS.
 