PCIe 5.0 SSDs with Up to 14 GB/s of Bandwidth Will Be Ready for Consumers in 2024

Tsing

The FPS Review
Staff member
Image: ADATA



In just a few years, enthusiasts with some of the latest and greatest PC hardware (e.g., 12th Gen Intel Core processors) will be able to move files around faster than ever, thanks to new NVMe SSDs that leverage the next-generation PCIe 5.0 standard for blazing-fast speeds. That's according to storage company Silicon Motion, which revealed in a recent earnings call that its first PCIe 5.0 controllers for consumer SSDs are slated for release in 2024. Silicon Motion's SM2508 controller enables read speeds of up to 14 GB/s and write speeds of up to 12 GB/s, roughly double the speeds of current PCIe 4.0 solutions.



PCIe 5.0 SSDs promising up to 14GB/s of bandwidth will be ready in 2024 (Ars Technica)





 
For consumers, I don't see the point.

If DirectStorage were actually being leveraged, maybe. But at this point, I don't think it ever will be, at least for games.

Games have to be written in such a way that they actually leverage the bandwidth correctly, and they're only just starting to be.

I just don't see the justification for spending an extra $100-$200 per terabyte for maybe 2-3 seconds faster load times.
 
I look at it more in terms of 'pushing the envelope', where the economies of scale of producing faster drives will make them more affordable, particularly as tooling for older, slower drives is phased out.

We see this now: NVMe drives hitting 2 GB/s bidirectional are as cheap as SATA drives that topped out at 550 MB/s, roughly a quarter of the speed. You pay a premium for more than 2 GB/s, of course, and that's the needle that is likely to move.
 
Sure, and Gen 6 will be out by then as well, making PCIe 5 redundant, just like they're trying to do to PCIe 4 and 3.
 
All these insane speeds also demand a lot from the rest of the build. It's not as if you can just drop one of these drives in and expect those speeds if the other boxes, like the CPU, memory, and motherboard chipset, aren't checked as well. We've also begun to see how the faster drives really need heatsinks, which adds another factor to it all. I'm all for faster, but we're at a point where significantly more is needed than just a drive upgrade to get it.
 
Also, I mean, you could have that bandwidth today by just RAIDing a bunch of current drives.

I've thought it might be fun to use my motherboard's M.2 RAID function to create a RAID 10 array of four mirrored pairs (8 drives total) of current M.2 drives.

Heck, even if I went with relatively cheap Gen 3 Inland Premium 512GB drives, that would be 2TB total in size, feature excellent redundancy, write at 4x the speed of a single drive, and read at 8x the speed of a single drive.

Since the Inland Premium 512GB drive reads at 3,100 MB/s and writes at 1,900 MB/s, I could in theory have an array that reads at 24,800 MB/s and writes at 7,600 MB/s, with much higher numbers if I went for faster drives.
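For what it's worth, the math there is just ideal-case scaling; a quick sketch of it (assuming perfect striping with zero controller overhead, which no real RAID setup actually achieves):

```python
# Back-of-the-envelope throughput for the hypothetical 8-drive RAID 10 above.
# Assumes ideal scaling: reads serviced by every member, writes limited to one
# write per mirrored pair. Real controllers and software RAID will fall short.

DRIVES = 8                     # four mirrored pairs
PAIRS = DRIVES // 2
PER_DRIVE_READ_MBS = 3100      # Inland Premium 512GB, per the numbers above
PER_DRIVE_WRITE_MBS = 1900
PER_DRIVE_CAPACITY_GB = 512

usable_capacity_gb = PAIRS * PER_DRIVE_CAPACITY_GB    # mirroring halves raw capacity
theoretical_read_mbs = DRIVES * PER_DRIVE_READ_MBS    # 24,800 MB/s
theoretical_write_mbs = PAIRS * PER_DRIVE_WRITE_MBS   # 7,600 MB/s

print(f"usable capacity:   {usable_capacity_gb} GB (~2 TB)")
print(f"theoretical read:  {theoretical_read_mbs} MB/s")
print(f"theoretical write: {theoretical_write_mbs} MB/s")
```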

The thing is, I find that in practice these fast drives don't really benefit you much. Boot times, game load times, software load times: they usually all wind up taking about the same amount of time on almost any SSD, even SATA drives. The real benefit is for large sequential file transfers to or from another fast drive, and I don't know about you guys, but I just don't do that kind of sequential transfer very often, and when I do, the drive I am transferring to or from is often much slower and becomes the bottleneck anyway.
 
PCIe 5.0 x16 with 4 of these drives would be as fast as, or faster than, DDR4 bandwidth-wise.

I wonder what the latency would be if it could be used as RAM.

Imagine only needing something like 2 x 4GB of RAM and using this as a swap file. Add in DirectStorage and I think it would work incredibly well.
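Back-of-the-envelope, the bandwidth side of that roughly checks out (assuming four drives at the quoted 14 GB/s each against dual-channel DDR4-3200; real channel counts and efficiency vary, and RAM still wins on latency by orders of magnitude, which is the catch for the swap-file idea):

```python
# Rough comparison: four PCIe 5.0 x4 SSDs vs. dual-channel DDR4-3200 bandwidth.
# Uses the quoted 14 GB/s sequential read per drive and assumes ideal aggregation.

DRIVES = 4
PER_DRIVE_GBS = 14                           # quoted SM2508 sequential read
ssd_aggregate_gbs = DRIVES * PER_DRIVE_GBS   # 56 GB/s

DDR4_MTS = 3200          # DDR4-3200 transfer rate (MT/s)
CHANNELS = 2             # typical consumer dual-channel setup
BYTES_PER_TRANSFER = 8   # 64-bit channel width
ddr4_gbs = DDR4_MTS * BYTES_PER_TRANSFER * CHANNELS / 1000   # 51.2 GB/s

print(f"4x PCIe 5.0 SSDs:       {ssd_aggregate_gbs} GB/s")
print(f"dual-channel DDR4-3200: {ddr4_gbs} GB/s")
```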
 
I always read that RAIDing solid-state drives increases latency.
 
I can assume that would be the case, though in my mind it would only be because of just how low-latency NVMe already is. RAID invariably means muxing traffic through a controller of some sort, and those are usually implemented in software these days, right?
 
Right now, at least for consumers, the bottleneck isn't speed; it's capacity.
Capacity is fine. A 512GB drive for the OS, a 1TB or 2TB drive for everything else I actually care to store on NVMe. The rest of my data is fine sitting on my NAS on spinning disk.

What we really need is lower latency; that means migrating back to SLC or, even better, Optane (too bad Intel killed it for prosumers).
 
Capacity is fine.
I'm ready to let spinning drives die. Capacity is the only purpose they can really serve right now, but in almost every other metric they lag way, way behind solid-state storage. I want solid state to catch up on the capacity metric as well. That's what I'm saying.

I don't know if drive latency is my main bottleneck, but I suspect that in my mostly consumer system, NVMe latency isn't what's keeping my system from booting up faster or my games from loading faster.
 
I would say track down an Optane drive to see what the extra-low latency gets you, but that is basically not an option. Best case, for 4KB reads, Optane has roughly a tenth the latency of MLC, which in turn has latency advantages over TLC. I've never had the opportunity to use one for an extended period, and I'm not sure a fresh-install demo is really fair. It was enough to make me want one, which was probably the point. :D
 
I always read that RAIDing solid-state drives increases latency.

RAID always trades some latency for increased total redundancy or bandwidth. There is always a little bit of lag involved when it comes to synchronizing multiple drives, but at least the hope is that it is relatively small.

On hard drives most of that latency comes from synchronizing multiple moving mechanical things. So, in theory it should be better in something solid state where you don't have to wait for statistical differences between drives on seek times, etc.

If you were able to create THE PERFECT SSD RAID system, it would pretty much be parallelized just like the drive itself already is internally, where larger, more highly parallelized drives perform better than smaller, less parallelized ones.

After further research, I'm not sure I like the options available to me right now though.

AMD's RAIDXpert2, while built into the board, is essentially a software solution, and it apparently fails to pass TRIM commands through to the SSDs, which is a bummer.

I could use Windows Storage Spaces, but I'm really not impressed by the benchmarks I have seen out of it.

The only good options I have are under Linux, where I have my choice of ZFS, Btrfs, mdadm, and any number of other implementations. The trick is to make both Linux AND Windows happy.
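If the mdadm route won out, the creation step for that hypothetical 8-drive RAID 10 might look roughly like this; a sketch only, with placeholder device names, and destructive, so not something to paste blindly:

```python
# Sketch: assembling a hypothetical 8-drive NVMe RAID 10 with Linux mdadm.
# Device names are placeholders; this destroys whatever is on those drives.
import subprocess

devices = [f"/dev/nvme{i}n1" for i in range(8)]  # hypothetical NVMe device nodes

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=10",                        # RAID 10: striped mirrors
     f"--raid-devices={len(devices)}",
     *devices],
    check=True,
)

# Linux md generally passes discard/TRIM through to SSD members on modern kernels
# (unlike the RAIDXpert2 behavior noted above); then format and mount as usual.
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
```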
 