T-FORCE has announced the T-FORCE SIREN Series, the world's first ARGB CPU/SSD AIO liquid coolers.

> So are you really pumping enough heat through the SSD that running CPU warmed liquid through the cooler will help dissipate more?

I would think it would be the other way around - warmed "SSD" water to the CPU - where the heat load from the SSD is fairly inconsequential and wouldn't significantly impact the CPU.

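For a rough sense of the scale being argued here, a back-of-the-envelope sketch with assumed round numbers - a ~10 W SSD controller, a ~200 W CPU, and ~1 L/min of coolant flow - rather than anything measured from the SIREN:

```python
# Back-of-the-envelope: coolant temperature rise across each block in a
# CPU+SSD loop. The wattages and flow rate are assumed round numbers,
# not measurements of the SIREN cooler.
FLOW_L_PER_MIN = 1.0          # typical AIO pump flow (assumed)
RHO = 1000                    # water density, g/L
CP = 4.186                    # specific heat of water, J/(g*K)

mass_flow_g_per_s = FLOW_L_PER_MIN * RHO / 60   # ~16.7 g/s

for name, watts in [("SSD block", 10), ("CPU block", 200)]:
    delta_t = watts / (mass_flow_g_per_s * CP)  # Q = m_dot * c * dT
    print(f"{name}: ~{delta_t:.2f} C coolant rise per pass")

# SSD block: ~0.14 C rise -> coolant leaving the SSD is barely warmer,
# CPU block: ~2.87 C rise -> so where the SSD sits in the loop changes little.
```
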
> I would imagine there will be diminishing returns to cranking up the controller to require active cooling, and as generations get more efficient on better nodes, it will probably stay that way.

I know you're probably right, but the way nodes, and heat, have been going for CPUs/GPUs, things could get hotter before they get better. Just trying to imagine a single NVMe stick hitting 14 GB/s makes my head spin.

> I know you're probably right, but the way nodes, and heat, have been going for CPUs/GPUs, things could get hotter before they get better. Just trying to imagine a single NVMe stick hitting 14 GB/s makes my head spin.

Well, on the CPU/GPU side, that's computational load, and largely from stacking a lot of parallel units. And so long as we can either make wafers big enough or shrink the node small enough to pack more parallel units on there, that's a lot of what is driving it right there.

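On the 14 GB/s figure mentioned a couple of posts up: that is roughly the ceiling of a PCIe 5.0 x4 link, which you can sanity-check from the per-lane signaling rate - quick arithmetic only, not a spec for any particular drive:

```python
# Where the ~14-15 GB/s number for a Gen5 NVMe stick comes from:
# PCIe 5.0 runs 32 GT/s per lane with 128b/130b encoding, and M.2 drives use 4 lanes.
GT_PER_S = 32          # PCIe 5.0 transfers per second per lane (billions)
LANES = 4              # M.2 NVMe link width
ENCODING = 128 / 130   # 128b/130b line-code efficiency

gb_per_s = GT_PER_S * LANES * ENCODING / 8   # bits -> bytes
print(f"PCIe 5.0 x4 raw ceiling: ~{gb_per_s:.2f} GB/s")  # ~15.75 GB/s
# Real drives advertise ~14 GB/s once protocol overhead is subtracted.
```
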
> CPU, GPU, SSD, whatever, doesn't matter where it is in the loop. J2C and GN both have really long videos covering this.

Maybe at the relatively close temperatures that a CPU loop operates at, but physics states otherwise. I mean, if we (the collective "we" of this forum community, not singling anyone out here) are gonna have a difference of opinion about how much gas it takes to come to a complete stop, I'm gonna argue loop order matters, even if only slightly.

> Maybe at the relatively close temperatures that a CPU loop operates at, but physics states otherwise. ...

Nope. After 20-30 minutes of use your loop equalizes. Temperatures settle. Doesn't matter where something is in your loop, the temp will be the same after X time.

> Doesn't matter where something is in your loop, the temp will be the same after X time.

What temp?

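Both positions can be put into numbers. The loop does reach one steady state, but each block's inlet temperature still depends on what sits upstream of it; the real question is how big that difference is. A minimal series-loop sketch using the same assumed figures as above (~200 W CPU, ~10 W SSD, ~1 L/min flow):

```python
# Compare the two orderings of a CPU + SSD loop at steady state.
# The radiator return temperature is the same either way; only the
# per-block inlet temperatures shift by the upstream block's delta-T.
M_DOT_CP = 16.7 * 4.186        # (g/s) * J/(g*K) for ~1 L/min of water
CPU_W, SSD_W = 200, 10         # assumed heat loads
T_RADIATOR_OUT = 35.0          # assumed coolant temp leaving the radiator, C

def inlet_temps(order):
    """Coolant temperature entering each block for a given loop order."""
    t = T_RADIATOR_OUT
    temps = {}
    for name, watts in order:
        temps[name] = round(t, 2)
        t += watts / M_DOT_CP      # coolant picks up this block's heat
    return temps

print("rad -> CPU -> SSD:", inlet_temps([("CPU", CPU_W), ("SSD", SSD_W)]))
print("rad -> SSD -> CPU:", inlet_temps([("SSD", SSD_W), ("CPU", CPU_W)]))
# The CPU sees ~0.14 C warmer coolant in one order, the SSD ~2.9 C warmer
# in the other -- real, but small next to the blocks' own thermal resistance.
```
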
> I know a lot of folks had a cow when the X570 came back with an actively cooled chipset. And when 4K 120Hz monitors started coming out with active cooling.

Totally getting off track with you here, but yeah, I remember those. My MPG X570 has one. Anytime I'm doing a gaming session I usually use the BIOS to max all the system fans, but then go to that one and set it to "boost", and it's barely audible then. Otherwise, that puppy is pretty loud at ~6000 rpm; boost keeps it at ~1200-2000 rpm.

In the end, I think we'll just have to wait and see if stuff like this really is needed. Once enough people can get their hands on PCIe 5.0 NVMe drives and really use them at max speeds, we'll actually know what's happening instead of speculating. With PCIe 4.0 it's been a bit hit or miss, since it's only on AMD, and even then direct-access storage for games has been almost non-existent, so other than large file transfers and benches most people hardly ever see these drives used to their fullest potential.

> as long as the controllers can keep up. THAT will be the trick.

I agree. I still vividly remember, when the first SSDs and NVMe drives started popping up, all the weak links that showed up when various controllers were not up to par. I forget which controllers in particular, but I do remember it got to the point where it was best to research who was using what before making a purchase, regardless of the brand name.

I think we are going to find some weak links when we start flexing that new API.

> I agree. I still vividly remember, when the first SSDs and NVMe drives started popping up, all the weak links that showed up when various controllers were not up to par. ...

It's funny: because I waited so long before getting my first SSD, all of my SSD and NVMe drives have been Samsung, and I've been happy with them all. Same with my RAM - I will use Corsair or Crucial, but I won't touch G.Skill. The one time I did, it wouldn't run at out-of-the-box settings on the motherboard I had, so I returned it and got cheap Corsair, and it worked fine. In my experience, being able to work out of the box is more valuable than being highly tweakable.

> It's funny: because I waited so long before getting my first SSD, all of my SSD and NVMe drives have been Samsung, and I've been happy with them all. ...

I think I've tried most flavors of SSDs and NVMe drives: Samsung, SanDisk, Toshiba, PNY, Intel, Sabrent, Inland, and a few others that I can't remember. The only brand I'll never use again is Kingston. I had 4x 2280s in two RAIDs in a laptop (an MSI GT80 Titan that had two hardware RAID chips) - one pair for the OS and the other for games - and one crapped out barely three months in. I replaced them with Samsung 850s, and those drives are still alive to this day.

> It's funny: because I waited so long before getting my first SSD...

Same. I didn't even touch SSDs until Samsung 3D V-NAND. I've never used an SSD without some kind of 3D NAND in it, so I missed a lot of the early growing pains of the SSD world.

> Yea, and now we're speccing out 30+ TB SSD arrays. Getting nanosecond speed latency to disk subsystems and more IOPS than you can shake a stick at.

30 TB... That's so 2016.

Not surprising since my personal drive tests at almost a million IOPS.
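
For context on what "almost a million IOPS" works out to: simple arithmetic, assuming the 4 KiB transfer size that random-read IOPS figures are usually quoted at.

```python
# Translate random-read IOPS into throughput, assuming 4 KiB I/Os.
IOPS = 1_000_000
IO_SIZE_BYTES = 4 * 1024       # 4 KiB, the block size IOPS specs usually assume

throughput_gb_s = IOPS * IO_SIZE_BYTES / 1e9
print(f"{IOPS:,} IOPS @ 4 KiB ~= {throughput_gb_s:.1f} GB/s")   # ~4.1 GB/s
# Sustaining 1M IOPS means completing an I/O every microsecond on average,
# e.g. ~64 us average latency at queue depth 64.
```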