PCIe 5.0 SSDs with Up to 14 GB/s of Bandwidth Will Be Ready for Consumers in 2024

I look at it more in terms of 'pushing the envelope', where the economy of scale of producing faster drives will make them more affordable - particularly as tooling for older, slower drives is phased out.

We see this now: NVMe drives hitting 2GB/s bidirectional are as cheap as SATA drives that topped out at 550MB/s, roughly a quarter of the speed. You pay a premium for more than 2GB/s, of course, and that's the needle that is likely to move.
Except they won't make slower drives cheaper; they'll just sell the faster ones at a markup. In 3 years, a 1TB NVMe drive dropped 20% in price. When 4GB/s models came out there was no significant drop in prices; it wasn't even noticeable. So 8-14GB/s will mean nothing. Slower drives will just be phased out instead of being sold for cheap.
 
Except they won't make slower drives cheaper; they'll just sell the faster ones at a markup. In 3 years, a 1TB NVMe drive dropped 20% in price. When 4GB/s models came out there was no significant drop in prices; it wasn't even noticeable. So 8-14GB/s will mean nothing. Slower drives will just be phased out instead of being sold for cheap.
So, I'll admit that I was being a little too positive even for myself.

But here's my thinking: the slower controllers and slower NAND will be phased out of production. Yes, whatever is 'fastest' will continue to carry a premium, but that's usually not what's being produced in the highest volume - the current mass-produced controllers and NAND will get cheaper. Over the last decade, what one could get in a 1TB SSD has gone up from a few hundred MB/s to 2000MB/s.
 
I'm ready to let spinning drives die. Capacity is the only purpose they can really serve right now; in almost every other metric they lag way, way behind solid state storage. I want solid state to catch up on the capacity metric as well. That's what I'm saying.
NAND density is catching up for sure, we're maybe a generation or two out. The largest spinners I keep are 8TB personally and I'd happily replace them all with NAND if I could - but I'd need half a dozen 8TB SSDs to do that!

On balance, one thing to keep in mind is that your typical 1GBase-T, 2.5GBase-T, and occasionally 10GBase-T Ethernet connections top out at roughly 111MB/s, 278MB/s, and 1110MB/s of usable throughput, respectively. The first two can be saturated by a single spinner, meaning that a mirrored pair would have zero problem maxing out line rate on most consumer network links.
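To put rough numbers on that, here's a quick Python sketch - the ~11% combined protocol overhead is just a rule-of-thumb assumption on my part, not a measured figure:

```python
# Rough usable throughput for common BASE-T links.
# Assumption: ~11% combined Ethernet/IP/TCP overhead (rule of thumb only).
OVERHEAD = 0.11

def usable_mb_per_s(line_rate_gbps: float) -> float:
    """Nominal line rate in Gbit/s -> approximate usable MB/s."""
    raw = line_rate_gbps * 1000 / 8      # Gbit/s -> MB/s (raw line rate)
    return raw * (1 - OVERHEAD)          # subtract assumed protocol overhead

for rate in (1, 2.5, 10):
    print(f"{rate}GBase-T ~ {usable_mb_per_s(rate):.0f} MB/s usable")
# close to the 111 / 278 / 1110 MB/s figures above
```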
 
Except they won't make slower drives cheaper; they'll just sell the faster ones at a markup. In 3 years, a 1TB NVMe drive dropped 20% in price. When 4GB/s models came out there was no significant drop in prices; it wasn't even noticeable. So 8-14GB/s will mean nothing. Slower drives will just be phased out instead of being sold for cheap.

I mean, my first SSD in 2009, an OCZ Agility 120GB cost me ~$320. With inflation that's almost $400 today.

It was great for the time (though, like all OCZ SSDs, it died within 2 years) but wouldn't exactly impress anyone today.

What I can buy with $400 today is certainly a hell of a lot better, faster and much larger.

These things do improve over time, but it is a lot slower than I'd like. Even hard drive pricing continues to drop per TB.

When I bought my 10TB Seagate Enterprise drives in October 2017, they were about $350 a pop.

Now, 4+ years later I am buying equivalent 16TB drives for $330 a pop.

So, things are improving, just much more slowly than we'd like.
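Putting those two purchases into per-TB terms with a quick Python sketch (treating the gap as roughly 4 years is my assumption for the annualized figure):

```python
# $/TB from the two drive purchases mentioned above.
old_price, old_tb = 350, 10   # 10TB Seagate Enterprise, Oct 2017
new_price, new_tb = 330, 16   # 16TB equivalent, 4+ years later

old_per_tb = old_price / old_tb              # $35.00/TB
new_per_tb = new_price / new_tb              # ~$20.6/TB

years = 4                                    # assumed elapsed time for the math
annual_drop = 1 - (new_per_tb / old_per_tb) ** (1 / years)

print(f"2017: ${old_per_tb:.2f}/TB -> now: ${new_per_tb:.2f}/TB")
print(f"Implied average decline: ~{annual_drop:.0%}/year")   # roughly 12%/year
```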
 
HDD prices dropped a lot recently; a few months ago a 10TB drive would probably have cost more than it did in 2017.
 
On balance, one thing to keep in mind is that your typical 1GBase-T, 2.5GBase-T, and occasionally 10GBase-T Ethernet connections top out at roughly 111MB/s, 278MB/s, and 1110MB/s of usable throughput, respectively. The first two can be saturated by a single spinner, meaning that a mirrored pair would have zero problem maxing out line rate on most consumer network links.
This is true; I don't look for fast storage for a NAS.

I also don't play games or load my OS or run virtualized environments over a LAN; those all happen locally. I understand it's common in some enterprise environments, but they probably aren't doing that over Gigabit copper either ...
 
I understand it's common in some enterprise environments, but they probably aren't doing that over Gigabit copper either ...
Well, higher bandwidth of course, but also network links that are optimized for latency - which TCP/IP over copper decidedly isn't!
 
I'm ready to let spinning drives die. Capacity is the only purpose they can really serve right now; in almost every other metric they lag way, way behind solid state storage. I want solid state to catch up on the capacity metric as well. That's what I'm saying.

I don't know for sure, but I suspect that in my mostly consumer system, NVMe drive latency isn't the main bottleneck keeping my system from booting up faster or my games from loading faster.

For client storage I agree.

My work laptop has a 2.5" spinner in it (with Intel SSD cache, but still) and it is an absolute pain to deal with.

As previously mentioned, I got my first SSD in 2009. It was expensive, but it was so transformative that I immediately vowed to upgrade all of my client machines to SSDs. By 2010 the only spinning hard drives I had in clients were secondary drives for large storage, and by 2011 even they were gone from the clients.

I moved to SSD only for local storage, but stored bulk files on my NAS.

If you ask me, spinning hard drives have been all but obsolete in client machines for a decade, but yeah, I am usually an early adopter with those sorts of things.

My NAS, however, is a different story altogether.

It currently has 120TB of spinning hard drives and is in the process (slowly, ordering and swapping one drive a month) of being upgraded to 192TB. SSDs in this setting would be complete overkill for the performance needed for mass storage of files, media libraries, etc. It would also be obscenely expensive. I mean, it's already expensive as it is with hard drives: 12 of them at $350 a pop, plus the backplane and controller to handle them. Doing the same thing with SSDs would be absolutely insane and wouldn't really buy me much, if anything at all.

I'm definitely in support of hard drives hanging around for quite a bit longer for NAS use.

There is no "price per GB" target for me to move to SSD's, because as SSD's get cheaper over time, my storage requirements also grow, so I need more of them.

I'll switch when the cost per GB of large-capacity SSDs comes within range of large-capacity hard drives.

Right now I'm sticking 16TB hard drives in my server one by one at $350.

Does anyone want to tell me what a 16TB SSD would cost me? Especially one that is more reliable (preference for MLC) :p
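For a rough sense of the gap, here's a quick Python sketch - the SSD price per TB is purely a hypothetical ballpark I'm plugging in for illustration, not a real quote:

```python
# Hypothetical comparison of 16TB of HDD vs 16TB of SSD capacity.
hdd_price_16tb = 330     # from the 16TB drives I'm actually buying
ssd_price_per_tb = 90    # HYPOTHETICAL ballpark $/TB for large SSDs - adjust to taste

ssd_price_16tb = ssd_price_per_tb * 16
print(f"HDD: ${hdd_price_16tb}  SSD (assumed ${ssd_price_per_tb}/TB): ${ssd_price_16tb}")
print(f"SSD premium: ~{ssd_price_16tb / hdd_price_16tb:.1f}x")   # ~4.4x at this assumed price
```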
 
What you do is go get yourself a nice RAID card that can host multiple PCIe x4 drives. Let's say 4. And a 32GB stick of DDR-whatever RAM - doesn't really matter there. Running with real RAID hardware and not software controllers will help.

Pack on the RAM and some higher-speed caching drive storage. Then some 'slower' NVMe drives to act as your storage tier, then of course some old-school SAS drives... say ten 20TB drives for 160TB of cold storage with two hot spares.

Of course then you are into enterprise storage.

What we see coming to the consumer market are ostensibly bits and pieces of what is in enterprise in a more consumer friendly package.

I'm all for it, mind you... when hybrid SSD/SAS drives were a thing I really thought they were the best bang for the buck, but drive sizes for SSDs and NVMe shot up so fast that it priced those hybrid drives right out of the market.

Optane was basically just a tiered storage solution, the kind that is commonly used in enterprise installations today.

Super-fast cache 'SSD' drives, backed with slower but very high-durability SSDs. Most of your seek times are sub-millisecond. Your bottleneck becomes the communication to the server/VM, and normally a 15Gb fiber connection is more than enough unless you're streaming multimedia or serving another big-data need.

What drives me nuts is seeing everyone going for 10Gb or higher switches. Thankfully the prices have come down, but 5 years ago, when they were fresh and still super expensive at enterprise scale, everyone was jumping on them. Meanwhile, I look at our servers and we're not even touching 1Gb. Now that it's refresh season again, I'm looking at 10Gb copper with 40Gb uplinks to the core switches.

Where was I... sorry, work on the brain.
 
What you do is go get yourself a nice RAID card that can host multiple PCIe x4 drives. Let's say 4. And a 32GB stick of DDR-whatever RAM - doesn't really matter there. Running with real RAID hardware and not software controllers will help.

There are hardware RAID cards for M.2/NVMe? I don't think I've ever seen them.
 
Quick search returns:

No idea if that qualifies as what we'd consider a 'hardware' NVMe RAID solution, though.

Interesting. I never randomly came across them. The name "HighPoint" makes me shudder a little, but who knows. Maybe they are good?

I wonder how they present themselves to the BIOS. I'm not sure how RAID works when it comes to EFI. Maybe they just expose a traditional boot ROM?

I think I'd seriously consider something like this if I saw a name I recognized and trusted. Maybe LSI (or are they releasing products branded as Broadcom now?)

Edit:

Looks like there are some MegaRAID controllers that support NVMe drives somehow using SlimSAS cables (is this for U.2? I guess you could use M.2 adapters), but they seem limited by the fact that they are all x8 PCIe. They allow assigning up to 24 NVMe devices in various RAID configurations, but at PCIe Gen 4 x8, the max transfer speed is going to be "only" just shy of 16GB/s.

I mean, that's not terrible, but it would be nice to have an x16 Gen 4 controller to REALLY unleash the beasts.
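For reference, the "just shy of 16GB/s" figure falls straight out of the line rate and the 128b/130b encoding - a quick sketch that ignores packet/protocol overhead, which would shave a bit more off:

```python
# Approximate one-direction PCIe link bandwidth from line rate and encoding.
ENCODING = 128 / 130    # 128b/130b line code used by PCIe Gen 3 and later

def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    """GT/s per lane and lane count -> approximate GB/s (one direction)."""
    return gt_per_s * ENCODING / 8 * lanes

print(f"Gen 4 x8:  {pcie_gb_per_s(16, 8):.2f} GB/s")    # ~15.75 GB/s - 'just shy of 16'
print(f"Gen 4 x16: {pcie_gb_per_s(16, 16):.2f} GB/s")   # ~31.51 GB/s
```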

With a combination controller for SAS/SATA/NVMe, I wonder if the on board chip can really keep up with NVMe speeds...
 
Looks like there are some MegaRAID controllers that support NVMe drives somehow using SlimSAS cables (is this for U.2? I guess you could use M.2 adapters), but they seem limited by the fact that they are all x8 PCIe. They allow assigning up to 24 NVMe devices in various RAID configurations, but at PCIe Gen 4 x8, the max transfer speed is going to be "only" just shy of 16GB/s.
Yup, U.2 running over various SAS cabling is the cabled NVMe 'solution'. Works for server hot-swap bays as well. The first available Optane drives were U.2; I recall looking at a U.2 to M.2 adapter to hook one up. Never pulled the trigger due to the price, myself.

With a combination controller for SAS/SATA/NVMe, I wonder if the on board chip can really keep up with NVMe speeds...
That shouldn't be hard, but it might not be cheap!
 
Yup, U.2 running over various SAS cabling is the cabled NVMe 'solution'. Works for server hot-swap bays as well. The first available Optane drives were U.2; I recall looking at a U.2 to M.2 adapter to hook one up. Never pulled the trigger due to the price, myself.


That shouldn't be hard, but it might not be cheap!

I just did some poking around. As mentioned, the controllers are NOT cheap.

The MegaRAID 9560-16i is $999 at retail to start with.

Then you need the SlimSAS cables, which are not cheap, and even the **** SAS-to-M.2 adapters are over $50 a pop.

Another option is this Areca all-in-one board with up to 6 M.2 slots.

I haven't been able to find it for sale anywhere yet, but it looks like it is marketed towards Mac Pro users, so I bet it carries the Apple tax. That, and while it uses Broadcom RAID chips under the hood, the performance really, really isn't great.

On their product page for bragging rights they post this benchmark:

[benchmark screenshot from the Areca product page]

I mean, you can do better than that with a single good Gen 4 M.2 drive.

It's starting to sound like this really is a lost cause, and it is better to just rely on standalone drives.

Any of these RAID solutions just add a bunch of cost, and don't appear to really perform all that well, even the hardware ones.

And then there's the question of whether or not they even pass through TRIM commands to the drives.
 
I'm betting that the ASICs for these are not keeping up with NVMe speeds - not that they can't, but the cost involved is likely prohibitive and the market is likely pretty slim.

Enterprise users are generally using other solutions, like Intel's ruler form factor, for example.
 
Any of these RAID solutions just add a bunch of cost, and don't appear to really perform all that well, even the hardware ones.

And then there's the question of whether or not they even pass through TRIM commands to the drives.

I'm betting that the ASICs for these are not keeping up with NVMe speeds - not that they can't, but the cost involved is likely prohibitive and the market is likely pretty slim.

Enterprise users are generally using other solutions, like Intel's ruler form factor, for example.


The exception here seems like it would be ZFS. It performs really well, and doesn't really care what kind of drive you attach to it. No need for special hardware, and it can pass through TRIM as well, as long as the controller supports it (which LSI SAS HBAs generally don't).

I'd totally go this route for my desktop today if only it worked well in Windows. I don't want to set something up that is only going to work on one of my operating systems.

There IS a Windows port now, called ZFSin (not sure what the "in" stands for), which there wasn't the last time I checked, but I have no idea how stable it is yet, or if it supports booting.

I'll have to do some more research.
 
Well, I've run Linux and Unix distros virtualized on Windows, passed the drives through individually to the VM, and run ZFS arrays on that.

Probably not efficient, but I was able to push 10Gbit from line to rust.

And generally you'd not worry too much about the OS, except to keep it away from your mass storage!
 
Well, I've run Linux and Unix distros virtualized on Windows, passed the drives through individually to the VM, and run ZFS arrays on that.

Probably not efficient, but I was able to push 10Gbit from line to rust.

And generally you'd not worry too much about the OS, except to keep it away from your mass storage!

VMs are great. I have several running on my server. I also have a few large ZFS pools on there.

What usually winds up killing it is that I want my desktop/workstation to be a "no compromises, all-purpose (except mobile)" machine. I want to work on it, be able to run my games on it, and sacrifice nothing when I do either.

Linux can now do gaming, but the performance hit is just too large. When I am already struggling to hit 60fps at 4K in the latest titles with a $1,800 GPU, I simply can't afford any performance loss at all.

I did consider installing a low-end Quadro as my Linux GPU, running a Windows VM under Linux (probably KVM), and then using VT-d/IOMMU to pass my big GPU through to that VM for gaming, getting the best of both worlds.

The last time I was thinking about this, Nvidia was intentionally blocking pass-through in their drivers for consumer GPUs because they considered it an enterprise feature and wanted you to pay more for a Quadro to do that. AMD GPUs could be passed through back then, but they weren't competitive at the high end with Nvidia at the time, and again, I couldn't afford to give up any performance.

I'm not sure if the Nvidia passthrough blocking is still a thing, but now that I have an AMD GPU maybe I'll revisit that approach.

Then again, I have no idea how stable it is, or what, if any, performance impact I might see from GPU pass-through.

For instance, I wonder if AMD's Smart Access Memory will still work in a pass-through configuration.

This might be a fun project, and it would certainly be convenient to remove the need to constantly reboot into Windows for games, but then again, it seems like a lot of work just to be able to run ZFS as my local storage :p
 