More SSDs Shipped than HDDs in 2020

Very true, but mostly in anticipation of things that may (or may not) come down the road. I can say that I do see a slight decrease in load times for games with it, not as dramatic as the switch from platter drives to SATA III SSDs, but it's there. I committed to the 1TB drive I have now when I did the build to see with my own eyes what it all breaks down to. With a 2TB PCIe 4.0 drive, I'll pretty much drop it in and call it good for the bulk of this rig's lifespan.
Thanks for answering; I had tried to make my post seem non-confrontational and solely informative, and upon review it looks like I didn't really succeed, so thank you also for not taking it that way!

Specifically, PCIe 3.0 NVMe versus PCIe 4.0 NVMe is what I'm seeing as 'not really close to worth it'. PCIe 4.0 NVMe is still solidly in the 'early adopter tax' zone, and the specific drives that could take full advantage of the interface, if they exist (Samsung probably), are at the extreme end of pricing. Those that cannot take full advantage of PCIe 4.0, which is most of the ones I've seen, offer even less of a benefit. Some of those are cheaper, so they might be worth considering if one feels they must have PCIe 4.0, but overall it seems more prudent to wait it out. Get a PCIe 3.0 NVMe drive with the space you need and come back later.

I have three 2TB PCIe 3.0 NVMe drives right now with ~4TB of games, applications, some media (most recent photographs) and so on spread across them, and that's with overflows etc. going to the 8TB mirrored pair I mentioned above. About the only time I feel that any of the drives could be faster is when doing larger transfers. Day to day, I couldn't reasonably ask for more.
 
I have a Samsung 1TB 980 Pro on my 5950X, and honestly, I don't think it's really an improvement over the 970 Pro. The move to TLC seems to have hurt certain areas of the drive's performance, and PCIe 4.0 doesn't really help. It's why I would rather see a smaller SLC drive that focuses on low latency and fast random 4K reads and writes at queue depth 1 over some kind of 4TB+ TLC or QLC monster - most consumer use cases are much more likely to benefit from low latency and small random read/write performance than from sustained transfer speed. And if you really need the sustained speed, RAID 0 is still an option.
 
I can’t even imagine using something with a 5400rpm drive today *shudder*
The 7th-gen and 8th-gen video game consoles say "hello there friend!"

When I got my PS3 as a refurb, I immediately tossed the 40GB 5400rpm drive it came with and threw in a 320GB 7200rpm drive. The PS4 Pro is still using the drive it came with, and the X360 and XB1 are using their stock drives because you can't swap those out without soft-modding. It's painfully slow. The 9th-gen systems sound real nice with their storage performance, but the space isn't there.

I can hardly imagine 5400rpm drives still being used in laptops (or anywhere really). Laptops should only have SSDs at this point.

My personal desktop has a couple of SATA SSDs, one NVMe drive, and two HDDs (with a third HDD in an external enclosure). One day I'll build a NAS. My clients usually go for a 1TB main SSD for the OS and programs, and they want an HDD for bigger file storage. Some of my friends are SSD-only in their rigs. My next rig will probably be SSD-only if I build the NAS before then.
 
I have a Samsung 1TB 980 Pro on my 5950X, and honestly, I don't think it's really an improvement over the 970 Pro. The move to TLC seems to have hurt certain areas of the drive's performance, and PCIe 4.0 doesn't really help. It's why I would rather see a smaller SLC drive that focuses on low latency and fast random 4K reads and writes at queue depth 1 over some kind of 4TB+ TLC or QLC monster - most consumer use cases are much more likely to benefit from low latency and small random read/write performance than from sustained transfer speed. And if you really need the sustained speed, RAID 0 is still an option.
Sustained write speeds are where flash as a technology fails.

Memory card manufacturers are dealing with this in a variety of ways, the most common and probably the most effective being significant over-provisioning. Some have 3x or more the flash IC capacity they expose (and market) themselves as having, in order to deal with things like high-bitrate 8K video and so on.

If that's something that's desired in a computer form factor, I think enterprise drives are probably what's going to be considered, and I shudder at those prices!

[right now the... most economical way to get good sustained write speeds is to load a bunch of spinners into an array, as they're good for ~200MB/s+ each; so ten or more should be able to handle 10Gbit speeds]
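A quick back-of-the-envelope check on that bracketed figure, as a minimal sketch: it only uses the ~200MB/s per-drive number above and assumes ideal striping with no parity, controller, or protocol overhead.

# Rough estimate: how many ~200 MB/s spinners does it take to keep a 10 Gbit
# link busy? Ideal striping, no parity or protocol overhead, so treat this as
# a lower bound.
import math

LINK_GBIT = 10          # 10 GbE
DRIVE_MBPS = 200        # conservative per-drive sequential figure from above

link_mbps = LINK_GBIT * 1000 / 8            # 10 Gbit/s expressed in MB/s (~1250)
drives_needed = math.ceil(link_mbps / DRIVE_MBPS)

print(f"Link throughput: ~{link_mbps:.0f} MB/s")
print(f"Drives needed at {DRIVE_MBPS} MB/s each: {drives_needed}")  # 7, so ten leaves headroom

That lands at seven drives in theory, which is why ten or more gives comfortable headroom once real-world overhead is factored in.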
 
Some of my friends are SSD-only in their rigs.
When I revived my 2600K rig a few years back, I did that with it. Somewhat of a twist, since that rig originally began with 4 x HDD in two RAIDs. I revived it again last February for WFH and kept it SSD-only.

When I put together the 3700X rig I was happy to keep it SSD-only. My ultimate goal is to have nothing but NVMe drives on the board. It's more of an aesthetic thing than anything else, but after all these decades I really like the idea of everything being on the board instead of hanging out somewhere in the case. It's a totally silly thing, but there it is.
 
When I put together the 3700X rig I was happy to keep it SSD-only. My ultimate goal is to have nothing but NVMe drives on the board. It's more of an aesthetic thing than anything else, but after all these decades I really like the idea of everything being on the board instead of hanging out somewhere in the case. It's a totally silly thing, but there it is.
I tried this, with the goal of relying on network storage; unfortunately since I'm definitely still learning about network storage, I wound up putting some spinners back in. I don't like it but I have over 5TB on the array right now, and they were definitely cheaper than having flash, especially since I'd have to upgrade two of my 2TB NVMe drives to 4TB to do that. Or upgrade CPU and motherboard to a platform with more PCIe lanes so I can add in an NVMe PCIe card.

I'll admit that quite a bit of that is my own laziness and tendency to be a bit of a packrat, so I can't honestly claim to need the storage (and thus don't recommend it to others without qualifications), but man does it come in handy when the NAS is down for this reason or that.

And on that note, I've honestly considered a DAS or a dedicated NAS box, and perhaps a dual-purpose unit if such things exist. Running my own NAS server has been a great experience and I've learned a lot, but I would have been much better off having something that 'just works'.
 
And on that note, I've honestly considered a DAS or a dedicated NAS box, and perhaps a dual-purpose unit if such things exist. Running my own NAS server has been a great experience and I've learned a lot, but I would have been much better off having something that 'just works'.
This ASUS unit is more or less what I'd be looking for: https://www.amazon.com/dp/B07BS332TD/?tag=thefpsreview-20

I'd be interested in a DAS, but only if it had NAS functionality as well. And I think that's probably not something that exists (and I can imagine technical reasons why).
 
I tried this, with the goal of relying on network storage; unfortunately since I'm definitely still learning about network storage, I wound up putting some spinners back in. I don't like it but I have over 5TB on the array right now, and they were definitely cheaper than having flash, especially since I'd have to upgrade two of my 2TB NVMe drives to 4TB to do that. Or upgrade CPU and motherboard to a platform with more PCIe lanes so I can add in an NVMe PCIe card.

I'll admit that quite a bit of that is my own laziness and tendency to be a bit of a packrat, so I can't honestly claim to need the storage (and thus don't recommend it to others without qualifications), but man does it come in handy when the NAS is down for this reason or that.

And on that note, I've honestly considered a DAS or a dedicated NAS box, and perhaps a dual-purpose unit if such things exist. Running my own NAS server has been a great experience and I've learned a lot, but I would have been much better off having something that 'just works'.
What platform are you using for NAS? FreeNAS has been extremely easy for me, and it just works (after I quit trying to use my old AMD A10-5800K APU). I also work in IT and have an extensive history in systems engineering, so I totally admit that I may have a skewed perspective on what is easy and what isn't.
 
Sustained write speeds are where flash as a technology fails.

...

[right now the... most economical way to get good sustained write speeds is to load a bunch of spinners into an array, as they're good for ~200MB/s+ each; so ten or more should be able to handle 10Gbit speeds]

And this may be exactly why I hang on to these old VelociRaptors until they fail. Sure, the density sucks these days, but the 10K spindle speed has kept their linear write reasonably high - my six-drive RAID 0 can manage over 500 MB/sec after I've saturated the write cache, and the latency is still better than that of 7200rpm drives.
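For context, a tiny sketch of what that aggregate figure implies per drive; the 500 MB/sec and six-drive numbers are the ones above, and striping never scales perfectly, so the real per-drive ceiling is a bit higher.

# What does "over 500 MB/sec from a six-drive RAID 0, after the write cache
# is saturated" work out to per drive? Straight division; ignores striping overhead.

ARRAY_MBPS = 500   # measured aggregate from the post above
DRIVES = 6

print(f"Implied sustained write per drive: ~{ARRAY_MBPS / DRIVES:.0f} MB/s")  # ~83 MB/s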
 
What platform are you using for NAS? FreeNAS has been extremely easy for me, and it just works (after I quit trying to use my old AMD A10-5800K APU). I also work in IT and have an extensive history in systems engineering, so I totally admit that I may have a skewed perspective on what is easy and what isn't.
FreeNAS, TrueNAS, and now TrueNAS Scale, the Debian-based one.

The main pain point has been the lack of 10Gbit support for Aquantia chipsets in the FreeBSD version, which is still a problem today, and will be until they roll FreeBSD 13 as the base, which may never happen given how well their Scale version is working.

I've tried many, many, many things to get around the Aquantia problem, and I've finally just broken down and picked up a used X540 card because the Aquantia cards can be... temperamental. There's an X550 in the desktop now too, for the same reason; I started with three systems using Aquantia and now only keep those cards around as backups.

I've also had some goofiness trying to keep SMB working well with Windows clients, with permissions issues of all kinds that I've only ever been able to address by manually changing permissions in Bash.

Things I want to do but haven't are setting up some good NFS shares, particularly between the NAS and VMs running on other systems, and setting up a domain controller on Server 2019. I have the Server 2019 system up and running (with one Aquantia interface and several Intel 1Gbit interfaces); I've just never done the domain controller thing in depth, so I'm definitely... out of my depth :).

NFS I just need to learn. I've actually done an iSCSI share before with multiple Windows clients, and found out real quick why that's a very, very bad idea: iSCSI is block storage, so multiple clients writing to the same LUN without a cluster-aware filesystem will corrupt it. I'll probably revisit that for VMs. I'm interested in both because I think they should be more efficient than SMB, though I'll admit to not really knowing whether there's any truth to that assumption.


As for why I don't find FreeNAS easy or stable... done right and left alone, that's exactly my experience. I just don't leave it alone!

I will say, as a strong aside, that holy hades, ZFS is sturdy. I haven't had to redo my pools except to change write-time properties like compression. Had I left them alone, they'd have survived 40+ OS reinstalls across CentOS, RHEL, BSD, Ubuntu, and other OSs over the last few years, including being run through virtualized OSs, as well as far too many power cuts and system instability issues due to overclocking and hardware changes!
 
And this may be exactly why I hang on to these old VelociRaptors until they fail. Sure, the density sucks these days, but the 10K spindle speed has kept their linear write reasonably high - my six-drive RAID 0 can manage over 500 MB/sec after I've saturated the write cache, and the latency is still better than that of 7200rpm drives.
Had I known what I know now, I'd probably have gone that route. But working in and out of server rooms... I'm not sure I would want to do it today. I've spent a not-insignificant amount of time and a little money trying to quiet things down. Had to replace a noisy PSU (~15 yrs old now, it's the backup), lots of fans that have gotten old, the second AIO I ever bought (~10 yrs old), and so on. Pretty happy with the results and still running plenty of scrounged parts!

And I do see the draw, and since I have a 24-port LSI controller (with at least one port broken; it's a used server pull, of course), I could probably get some 10K or 15K drives to throw in. Maybe a few triple-mirror pools, with the understanding that they're gonna die, if I buy a box of used ones?
 
And I do see the draw, and since I have a 24-port LSI controller (with at least one port broken; it's a used server pull, of course), I could probably get some 10K or 15K drives to throw in. Maybe a few triple-mirror pools, with the understanding that they're gonna die, if I buy a box of used ones?

and this may be exactly why I hang on to these old velociraptors until they fail. Sure, the density sucks these days, but the 10k spindle speed has kept their linear write reasonably high - my 6 drive raid 0 can manage over 500MB / sec after I’ve saturated the write cache, and the latency is still better than 7200rpm drives.

So... I splurged. Picked up two five-packs of these Hitachi spinners: NetApp 600GB 15K SAS HDD X412A-R5

Ten drives gives me enough to play with various configurations. May try four in a RAIDZ1 just to see, and six in a pool of three mirrored pairs. Heck, might run them all as mirrored pairs in a single pool and see how they roll so long as none of them die within the first month of use!
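Rough napkin math on those candidate layouts, as a minimal sketch; it ignores ZFS slop/metadata overhead and TB-vs-TiB rounding, so the real numbers land a bit lower.

# Usable capacity for the pool layouts I'm considering with the 600 GB drives.
# Ignores ZFS metadata/slop space and base-2 vs base-10 rounding.

DRIVE_TB = 0.6  # 600 GB per NetApp X412A-R5 drive

layouts = {
    "4-drive RAIDZ1 (1 drive of parity)": (4 - 1) * DRIVE_TB,
    "6 drives as three mirrored pairs":   3 * DRIVE_TB,
    "10 drives as five mirrored pairs":   5 * DRIVE_TB,
}

for name, usable_tb in layouts.items():
    print(f"{name}: ~{usable_tb:.1f} TB usable")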

And with the US$10 SquareTrade "warranty" added, I'm at US$200 for the lot. I don't expect to get much out of the add-on warranty, but hey, seems 'warranted' for this course of action :)
 
This is good news all around.
I'm still waiting for the day I can replace my larger server storage with flash.
4TB-8TB 2.5-inch flash drives at prices that aren't unreasonable would be massive for me.
I'm demanding too much, that I know.
 
FreeNAS, TrueNAS, and now TrueNAS Scale, the Debian-based one.

The main pain point has been the lack of 10Gbit support for Aquantia chipsets in the FreeBSD version, which is still a problem today, and will be until they roll FreeBSD 13 as the base, which may never happen given how well their Scale version is working.

I've tried many, many, many things to get around the Aquantia problem, and I've finally just broken down and picked up a used X540 card because the Aquantia cards can be... temperamental. There's an X550 in the desktop now too, for the same reason; I started with three systems using Aquantia and now only keep those cards around as backups.

I've also had some goofiness trying to keep SMB working well with Windows clients, with permissions issues of all kinds that I've only ever been able to address by manually changing permissions in Bash.

Things I want to do but haven't are setting up some good NFS shares, particularly between the NAS and VMs running on other systems, and setting up a domain controller on Server 2019. I have the Server 2019 system up and running (with one Aquantia interface and several Intel 1Gbit interfaces); I've just never done the domain controller thing in depth, so I'm definitely... out of my depth :).

NFS I just need to learn. I've actually done an iSCSI share before with multiple Windows clients, and found out real quick why that's a very, very bad idea: iSCSI is block storage, so multiple clients writing to the same LUN without a cluster-aware filesystem will corrupt it. I'll probably revisit that for VMs. I'm interested in both because I think they should be more efficient than SMB, though I'll admit to not really knowing whether there's any truth to that assumption.


As for why I don't find FreeNAS easy or stable... done right and left alone, that's exactly my experience. I just don't leave it alone!

I will say, as a strong aside, that holy hades, ZFS is sturdy. I haven't had to redo my pools except to change write-time properties like compression. Had I left them alone, they'd have survived 40+ OS reinstalls across CentOS, RHEL, BSD, Ubuntu, and other OSs over the last few years, including being run through virtualized OSs, as well as far too many power cuts and system instability issues due to overclocking and hardware changes!

I just roll my own under Ubuntu Server these days. I don't need the FreeNAS web GUI. Just install ZFS on Linux. The command line tools are easy, as is file sharing.

I love ZFS. Best file system ever. Put it on a Linux server and you have all the hardware compatibility you will need as well.

I have no experience with Aquantia chipsets, but in general my take when it comes to networking on servers is: Intel NICs or nothing at all.

I got a bunch of cheap old Intel X520s used on eBay a while back, and 10gig works like a charm.
 
This is good news all around.
I'm still waiting for the day I can replace my larger server storage with flash.
4TB-8TB 2.5-inch flash drives at prices that aren't unreasonable would be massive for me.
I'm demanding too much, that I know.

I've never felt the need to use flash for my bulk storage.

I have 12x 10krpm 10TB Seagate Enterprise drives right now, configured in a single pool of two RAIDZ2 vdevs (so the ZFS equivalent of RAID 60), and they are plenty fast for what I need them for. It's really a file server, largely for media. I feel like I wouldn't benefit much from flash storage, and for my size requirements, even if flash comes down in price, it would be excessively expensive.
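For a ballpark on what that layout works out to, a quick sketch assuming two 6-wide RAIDZ2 vdevs and an assumed ~200 MB/s sequential rate per drive (not a measured figure), with ZFS overhead ignored:

# Ballpark numbers for 12 x 10 TB drives in two 6-wide RAIDZ2 vdevs.
# Per-drive throughput is an assumed ~200 MB/s sequential figure; ZFS
# metadata/slop overhead is ignored.

VDEVS = 2
DRIVES_PER_VDEV = 6
PARITY_PER_VDEV = 2     # RAIDZ2
DRIVE_TB = 10
DRIVE_MBPS = 200        # assumption, not a benchmark

data_drives = VDEVS * (DRIVES_PER_VDEV - PARITY_PER_VDEV)
print(f"Usable capacity: ~{data_drives * DRIVE_TB} TB")                 # ~80 TB
print(f"Ballpark sequential throughput: ~{data_drives * DRIVE_MBPS} MB/s")  # ~1600 MB/s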
 
I've never felt the need to use flash for my bulk storage.

I have 12x 10krpm 10TB Seagate Enterprise drives right now, configured in a single pool of two RAIDZ2 vdevs (so the ZFS equivalent of RAID 60), and they are plenty fast for what I need them for. It's really a file server, largely for media. I feel like I wouldn't benefit much from flash storage, and for my size requirements, even if flash comes down in price, it would be excessively expensive.
You don't need flash storage for bulk storage; I just want it to be fast as hell, lol.
 
Yeah, my 2 cents on this:

SSDs are now fast enough that they aren't the bottleneck most of the time (sure, a few corner cases here and there, but by and large...). That's been true since the shift to PCIe 3.0 NVMe; PCIe 4.0 didn't really do anything.

I want the emphasis now to go toward maintaining that 'not the bottleneck' status and working toward capacity (caveat: without sacrificing reliability or longevity). There should be no need for spinners at all, really. The only restriction on SSD capacity now is how many chips you want to stack in a package, and how cheaply you can make those chips.

So yeah, you don't need bulk storage to be fast, but there's no reason that bulk storage needs to be on spinners either, other than that the infrastructure for spinners has become commodity, and people are still willing to pay a premium for NAND storage when in actuality it will be less expensive to produce as it gets to scale.
 
I just roll my own under Ubuntu Server these days. I don't need the FreeNAS web GUI. Just install ZFS on Linux. The command line tools are easy, as is file sharing.
This is something I think I am learning. I've even done it! Once!

That was before I did something else and it self-destructed in a manner I wasn't prepared to deal with. So goes most of my attempts at using Linux personally. I use it at work too, but in 'fire and forget' situations that are heavily configuration-managed, quite importantly, by someone else.

FreeNAS and successors are just very fast to get up and running. I've set the same server up three or four times in a day because I kept doing stuff that I didn't know how to undo and also didn't know if what I did could be undone. Well, it probably could have and I probably realized that, but it is usually more important to get the server up and running when I do something bad, and that leads back to FreeNAS+ being fast to get up and running.

And my biggest problem with *nix filesharing is permissions issues with Windows clients. Perhaps I'll learn and actually write down what I do (but that sounds like work), but I've always managed to be in a position where Client A has free access with a set of credentials, and Client B plugged into the same switch does not. Haven't had this happen with our work systems, but since they're ancient, I have no intention of replicating their functionality either. I just don't have anything that old!
 
This is something I think I am learning. I've even done it! Once!

That was before I did something else and it self-destructed in a manner I wasn't prepared to deal with. So goes most of my attempts at using Linux personally. I use it at work too, but in 'fire and forget' situations that are heavily configuration-managed, quite importantly, by someone else.

FreeNAS and successors are just very fast to get up and running. I've set the same server up three or four times in a day because I kept doing stuff that I didn't know how to undo and also didn't know if what I did could be undone. Well, it probably could have and I probably realized that, but it is usually more important to get the server up and running when I do something bad, and that leads back to FreeNAS+ being fast to get up and running.

And my biggest problem with *nix filesharing is permissions issues with Windows clients. Perhaps I'll learn and actually write down what I do (but that sounds like work), but I've always managed to be in a position where Client A has free access with a set of credentials, and Client B plugged into the same switch does not. Haven't had this happen with our work systems, but since they're ancient, I have no intention of replicating their functionality either. I just don't have anything that old!
Oddly enough - I can share Linux (and OS X) files absolutely fine and with near-perfect reliability - but as soon as I get a Win10 box into the mix, file sharing goes right out the window and 8 times out of 10 I end up screaming “F it” and just breaking out a thumb drive.
 
Why?

I get going with NVMe, because that scales up to 3500MB/s from SATA's 550MB/s speed limit... but PCIe 4.0 isn't going to get you that kind of bump, it's not going to lower latency outside of a very specific benchmark, and while you'll be able to see the extra speed when you use it, you're not really going to be able to use it that much.

Saying that as a friend, of course!

I just don't find the difference compelling enough to pay up for, or wait for. Once the data is in place, speed at this level stops mattering nearly as much, and other parts become much more important, like having enough RAM and enough fast cores, and networking, and other connectivity, and so on.
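To put hypothetical numbers on that, a quick sketch of a 100 GB sequential copy at nominal interface ceilings; the 550 and 3500 MB/s figures are from above, the ~7000 MB/s PCIe 4.0 figure is my assumed nominal peak, and real transfers are usually limited by the source, small files, or the drive's own cache long before the interface.

# Transfer time for a hypothetical 100 GB sequential copy at each interface's
# nominal ceiling. Real-world copies rarely sustain these rates.

SIZE_GB = 100
interfaces = {
    "SATA III (~550 MB/s)":          550,
    "PCIe 3.0 x4 NVMe (~3500 MB/s)": 3500,
    "PCIe 4.0 x4 NVMe (~7000 MB/s)": 7000,   # assumed nominal figure
}

for name, mbps in interfaces.items():
    seconds = SIZE_GB * 1000 / mbps
    print(f"{name}: ~{seconds:.0f} s")   # ~182 s, ~29 s, ~14 s

The SATA-to-NVMe jump saves minutes on a copy like that; the 3.0-to-4.0 jump saves seconds, which is roughly the argument above.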

Well, for me, my new video card has forced my PCIe bus to PCIe 4.0, so my NVMe drive now has to run in a daughter card. So my next NVMe drive will be PCIe 4.0. And I want a 2TB one.
 