Intel Shows PCIe 5.0 SSD Hitting 13.7 GB/s on Alder Lake System

Peter_Brosdahl

Image: Intel



Intel Chief Performance Strategist Ryan Shrout has shared a video showcasing insanely fast PCIe 5.0 SSD transfer speeds using an Alder Lake processor. The demonstration had been intended for CES 2022, but he released it early as a teaser after Intel switched to a virtual presence.



Perks of the job! Was going to save this demo for #CES2022 but with that off the table, why not just share it with everyone right now?! Here’s a 12th Gen ⁦@intel⁩ Core i9-12900K system paired with a new ⁦@Samsung⁩ PM1743 PCIe 5.0 SSD getting over 13GB/s...



 
Yup!

Master race at its best, but let's gain more speed and screw the thermals, reduce TBW, etc.
 
My only hope here is that this will help push prices of larger drives down. If you want more than 2TB of NVMe, you have to be prepared to give up some performance, pay a premium over the cost per TB at 2TB and below, or both.

I'm also not really sold on the need for even faster SSD interfaces. While there were definite gains moving to NVMe, extra bandwidth on top of that means less and less across a broader set of workloads. On the consumer side, it's difficult to see a use case beyond, say, the level-streaming tech pioneered for the latest generation of consoles, and even that is questionable, considering the additional cost of faster NVMe is probably better spent on just having more of it, or on more system memory so that streaming isn't so necessary.

On the commercial side, one has to wonder how prevalent the need is. It's not like network interfaces are really keeping up either: even relatively pedestrian NVMe arrays can saturate the best networking fabric available, and there's still plenty of protocol overhead to consider.
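
For a rough sense of scale, here's a quick back-of-envelope in Python. The line rates are standard Ethernet speeds, but the ~10% protocol-overhead figure and the 13.7 GB/s drive number from the demo are just assumed round numbers, not measurements:

```python
# Back-of-envelope: can common network fabrics keep up with a PCIe 5.0 NVMe drive?
# All figures are rough assumptions for illustration, not measurements.

ssd_pcie5_gbs = 13.7  # single PCIe 5.0 x4 SSD from the demo, in GB/s

links_gbit = {"10 GbE": 10, "25 GbE": 25, "100 GbE": 100, "200 GbE": 200}
protocol_efficiency = 0.90  # assume ~10% lost to framing/protocol overhead

for name, gbit in links_gbit.items():
    usable_gbs = gbit / 8 * protocol_efficiency  # bits -> bytes, minus overhead
    drives = usable_gbs / ssd_pcie5_gbs
    print(f"{name:8s}: ~{usable_gbs:5.1f} GB/s usable, "
          f"about {drives:.2f}x of one PCIe 5.0 SSD")
```

By that math even a single drive at the demo's speed can saturate a 100 GbE link on its own, which is the point about the network being the bottleneck.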

I can really only see a use case for edge workloads, but this is no different than the PCIe 3.0 vs. PCIe 4.0 NVMe conversation. Very large, parallelizable workloads backed by significant local compute resources are about the only use case I can imagine that could actually justify such storage transfer speeds.
 
Yep, let's hope those prices do come down. I was happy to score an Inland 2 TB Gen 3 NVMe drive for my laptop over the summer for around $200, and by Black Friday that had pretty much become the norm for a number of manufacturers, but that's still not a great price.

In terms of real-world use, I also agree. Most of us are not going to see this improvement, at least not right away. On the enterprise level it could help in those scenarios with massive data transfers, as @LazyGamer said, but that would also depend on the interface used. There was a mention of that in one of the Twitter threads on this. I'm not so knowledgeable about such things, but that person suggested a particular one should become a new standard on the consumer level.

 
IMO we have reached sufficient bandwidth and capacity for general consumer use. There needs to be a renewed focus on latency, which has actually been regressing. Ideally I would have a 768 GB SLC OS drive focused on the lowest possible latency, and a 1-2 TB MLC drive for games. For almost anything else I need storage for, my NAS is sufficient bandwidth- and latency-wise.
 
There needs to be a renewed focus on latency, which has actually been regressing.
I'm not sure if Intel's flash sale to SK Hynix affects their Optane line, but that's where I'd want to go eventually. Obviously the long game is in getting more mass storage like Optane up to DRAM levels of speed and latency.

For almost anything else I need storage for, my NAS is sufficient bandwidth- and latency-wise.
I wish that larger SSDs were more affordable the way larger HDDs usually are as capacity increases. I also have a NAS that can regularly do 700 MB/s+, but the latency still bites for a lot of things; that's just inherent to TCP/IP and twisted pair, and I'm also not interested in braving the technologies that can bring that latency down just yet. The most I'd do right now would be fiber runs between key nodes.

That's really the other side of the equation, I think: you can make local storage faster for the masses, but as soon as that data is needed somewhere non-local?

depend on the interface used
Did a quick search for EDSFF out of curiosity - it looks like Intel is already on board, as the spec includes their 'ruler' format. Also saw mention that they intend the ~2.5" drive form factor version to handle AICs like GPUs and NICs as well, with a 70 W power envelope.

Obviously that's at best what, RTX 3050 level today? But given that such a card can handle quite a bit, especially if we're talking more varied workloads, I think they're striking a good balance between size, power, connectivity (supports eight PCIe lanes), and modularity.
 
I'm not sure if Intel's flash sale to SK Hynix affects their Optane line, but that's where I'd want to go eventually. Obviously the long game is in getting more mass storage like Optane up to DRAM levels of speed and latency.
Intel has stopped development on the consumer line of Optane as far as I know. Per AnandTech's coverage of the second-gen Optane:

The client/consumer focused portion of Intel's Optane product family has shrunk considerably. They're no longer doing Optane M.2 SSDs for use as primary storage or cache drives, and there's been no mention yet of an enthusiast-oriented derivative of the P5800X to replace the Optane SSD 900P and 905P


I would absolutely love a 480 GB, M.2, PCIe 4.0 Optane drive on second- or third-gen memory. Maybe staff here could get some additional insight from Intel?

edit: it looks like it’s really, really dead:

 
Hmm...

Maybe I've been thinking about it all wrong.

Instead of wishing SSD capacity would go up, maybe I should be hoping RAM capacity goes up. Apart from keeping our bingo cards filled out, is there a great reason DRAM density hasn't kept pace with SSD density?

Instead of wishing for faster cold storage, maybe we should want cheaper dynamic storage. Just use any old media for cold offline storage, and be able to leverage better caching mechanisms during startup and while running. Not quite a RAM drive, but more like prefetch. And if you had enough of it, you could just cache everything ...

DRAM is going to beat even the best SSD out there, I suspect by a wide margin. So why are we all trying to make the offline storage faster, instead of just making offline storage as cheap as possible, not worrying about the speed so much, and leveraging the already pretty-****-fast storage we have better?

That, or get back to what Optane was trying to do - merge offline and dynamic storage into one device. That would be the ideal.
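
To put that DRAM-versus-SSD gap in rough numbers, here's a quick sketch; every bandwidth and latency figure in it is a ballpark assumption for illustration, not a benchmark:

```python
# Rough storage-hierarchy comparison; all figures are ballpark assumptions.

tiers = [
    # (name, sequential bandwidth in GB/s, typical access latency in ns)
    ("DDR5-4800 dual channel",   76.8,      100),
    ("Optane SSD (P5800X-ish)",   7.0,   10_000),   # ~10 us
    ("PCIe 5.0 NVMe (the demo)", 13.7,   50_000),   # ~50 us NAND read
    ("PCIe 3.0 NVMe",             3.5,   80_000),
    ("NAS over 10 GbE",           0.7,  200_000),   # ~0.2 ms round trip
]

dram_bw, dram_lat = tiers[0][1], tiers[0][2]
for name, bw, lat in tiers:
    print(f"{name:26s} {bw:6.1f} GB/s, {lat / 1000:7.1f} us  "
          f"(bandwidth {bw / dram_bw:.2f}x DRAM, latency {lat / dram_lat:.0f}x DRAM)")
```

Even the fastest NAND drive on that list is still hundreds of times behind DRAM on latency, which is where the "cache everything" idea gets its appeal.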
 
Instead of wishing SSD capacity would go up, maybe I should be hoping RAM capacity goes up. Apart from keeping our bingo cards filled out, is there a great reason DRAM density hasn't kept pace with SSD density?
DRAM is more difficult in every way - more power hungry, more difficult to stack, and worst of all, it doesn't support the multiple bits per cell that NAND can, so it's 'SLC' or nothing. DRAM is also far more focused on bandwidth than capacity, it seems to me at least.
 
If DRAM is your thing, you can toss 2 TB on a Threadripper Pro. Of course, almost no programs are going to take advantage of that much RAM unless you muck around with a RAM drive, and that comes with its own set of substantial problems.
 
If DRAM is your thing, you can toss 2 TB on a Threadripper Pro. Of course, almost no programs are going to take advantage of that much RAM unless you muck around with a RAM drive, and that comes with its own set of substantial problems.

The idea isn't to use it as a RAM drive, and of course programs aren't meant to take advantage of all of it. Think prefetch/SuperFetch - they would probably need some tweaking to take better advantage of that capacity, but something along those lines.

It may be entirely impractical due to the economics of things - as Lazy says, DRAM isn't really engineered for density. It just seems silly that the industry is circling around things like PCIe 5.0 and DDR5 when those things were never the bottleneck and have no practical real-world benefit in most use cases.

I guess whatever it takes to print a larger number on the side of the box than the competition.
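
For what that prefetch idea might look like in the crudest possible form, here's a toy sketch. The class, paths, and memory budget are all made up for illustration, and a real prefetcher (the OS page cache, SuperFetch) is obviously far smarter about what it keeps warm:

```python
import os

class WarmCache:
    """Toy 'cache everything in RAM' prefetcher, purely illustrative."""

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.blobs: dict[str, bytes] = {}

    def prefetch(self, paths):
        """Read files into RAM until the memory budget is exhausted."""
        for path in paths:
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip anything we can't stat
            if self.used + size > self.budget:
                continue  # over budget, leave it on disk
            with open(path, "rb") as f:
                self.blobs[path] = f.read()
            self.used += size

    def read(self, path: str) -> bytes:
        """Serve from RAM when warm, fall back to disk otherwise."""
        if path in self.blobs:
            return self.blobs[path]
        with open(path, "rb") as f:
            return f.read()

# Hypothetical usage (paths and the 16 GB budget are placeholders):
#   cache = WarmCache(budget_bytes=16 * 1024**3)
#   cache.prefetch(["/games/bigtitle/assets.pak", "/games/bigtitle/shaders.bin"])
#   data = cache.read("/games/bigtitle/assets.pak")  # served from RAM
```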
 
I see a use case here: cache and performance drives for large databases. I mean data-crunchy databases, nothing with BLOB data. Put that on high-speed direct-connect storage over something like 100 Gb fiber and you'll get some great I/O performance. Of course, you're probably dropping a million plus on a decently sized NAS and support for that.
 
I see a use case here: cache and performance drives for large databases. I mean data-crunchy databases, nothing with BLOB data. Put that on high-speed direct-connect storage over something like 100 Gb fiber and you'll get some great I/O performance. Of course, you're probably dropping a million plus on a decently sized NAS and support for that.
Basically one of these:

 