Big Storage and Storage-Class Memory (SCM)

(Timely: just before the Big Data talk by one of our neighboring groups, there is a recent development making the news…)

 

Long, long ago…

Since the very beginnings of electronic “computers”, we’ve had big separate lumps of functionality (CPU units, memory, slow storage, very slow storage, input and output devices, display devices) strung together with some form of “interconnect” (wiring). More recently, ideas of parallelism and distributed operation have come increasingly into play for greater performance, and at least some of the interconnect used is now a general network or a high-performance data switch. And now, we have “The Cloud”…

Meanwhile, device design and ideas have progressed such that we now have the latest Marketing jargon of “Hyperconverged Computing” (or some such), whereby the mass storage utilized is low-cost enough to be bundled directly in with the compute power, all in the same 19-inch rackmount box.

What next?… Yet more jargon moves us to where operations are performed directly on the storage medium itself: Storage-Class Memory (“SCM” – pdf). This is where we have an expansive “fabric/mesh” of “non-volatile” memory that is used directly as the only storage. That offers a huge performance boost if you can manipulate data in-situ rather than waiting for your bits and bytes to wander along a long, labyrinthine interconnect…
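The flavour of “in-situ” manipulation can be sketched with a memory-mapped file standing in for byte-addressable SCM (a toy illustration only: the file name and record layout are invented, and on real SCM the mapping would be backed by persistent memory rather than a disk file):

```python
import mmap
import os
import struct

# Toy stand-in for byte-addressable storage-class memory: a memory-mapped
# file lets us update one record in place, rather than reading, modifying
# and rewriting the data through the usual block-storage stack.
PATH = "scm_demo.bin"          # hypothetical file name
RECORD = struct.Struct("<Q")   # one 8-byte counter per record

# Create four zeroed records.
with open(PATH, "wb") as f:
    f.write(b"\x00" * RECORD.size * 4)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        # In-situ update: bump record 2 directly in the mapping.
        off = 2 * RECORD.size
        (value,) = RECORD.unpack_from(mem, off)
        RECORD.pack_into(mem, off, value + 1)
        mem.flush()            # push the change to the backing store

# Verify the update landed in the backing store.
with open(PATH, "rb") as f:
    data = f.read()
final = RECORD.unpack_from(data, 2 * RECORD.size)[0]
print(final)  # -> 1
os.remove(PATH)
```

The point is that the update touches only the bytes of interest; with a genuinely non-volatile, byte-addressable medium there is no separate “storage” copy to shuttle across the interconnect.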

And then there is the “Holy Grail” of computing/storage: combining all the features needed for CPU ‘fast memory’, ‘slow mass storage’, and ‘very slow mass storage’ into a single device that can be described as a “Universal Memory”. For our presently available devices, conflicting parameters mean that significant development is needed yet… (See: The Quest for a Universal Memory.) Also there is, as is all too usual in the commercial world, a certain divergence… (See: “The long-held industry dream of the universal memory is not the way the industry is actually going”. See also: Monty Python and the Holy Grail.)

Are the recent ideas for Storage-Class Memory (“SCM”), including bundling a balance of compute power in with that storage (‘hyperconvergence’), a development in a useful direction?… For some tasks, very likely so. As for reaching the “Holy Grail”…

 

And now…

Are the latest developments of the now very long-awaited ReRAM (Memristor) technology an industry disruptor that can power SCM and shrink our Big Data yet further?…

Follow the link for the full interview and The Register’s spin on the context. Or read onwards below for my tidbit snippets (multiply palindromic pun there 😛 ):

 

A leading development…

Western Digital CTO Martin Fink refused El Reg’s questions, but did write this sweet essay

On storage-class memory and leaving HP Labs…

… Why is Storage-Class Memory (SCM) important?…

… Where we’ve historically looked at data stores (memory, rotating media, etc.) as the commodity, and the compute engine as the value; I think the model is reversing – where the data is the value and the compute engine is the commodity. Data is where we derive information, which gives us knowledge, then insight…

… The reality is that more than one type of SCM is likely to succeed and play a role. But, unlike DRAM, each will have a unique combination of characteristics that will make it ideal for one workload or another. Some of the characteristics include: cost, latency, throughput, power consumption, endurance, durability, etc. Each SCM will be optimized for a few of these, but is unlikely to be able to be the best-in-class in all of these…

… ReRAM is Western Digital’s choice for SCM and we have a close partnership with HPE to deliver ReRAM. You’ll have to ask HPE if they still plan on using the Memristor brand going forward.

This gets to your question on Universal Memory. Yes, it’s still a dream of mine that we eventually find the right technology to deliver Universal Memory. This memory would need to achieve best-in-class latency, cost/bit, endurance, and durability. We’re likely still a long way from this, but it is useful for us to have that as the end-target of where we want to get to as an industry…

… The end point, I think is much less about device latency, and much more about system latency. Thus, my device-level latency may be higher than a competing device (or DRAM), but at the system level – if I’ve done the right software work – I achieve much better results…

… Your questions attempt to create some sort of controversy where none exists. As an industry, we all are working toward SCM and we’ll all make progress in different ways. … SCM would allow us to have extremely high-density connections to processors…

… In order for us to have fabric-attached storage and memory, the industry needs to galvanize around standards that allow us to connect everything to these fabrics. That’s how we bring compute to data rather than the other way around. … We also need to be clear and vocal that proprietary connections are harmful to the industry overall and ask everyone to push back on any proprietary attempts. You can help by raising the visibility of this challenge and help drive the industry to a common industry standard that helps everybody…
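Fink’s earlier point that system latency matters more than device latency can be made concrete with some back-of-envelope arithmetic (the figures below are invented purely for illustration, not measurements of any real device):

```python
# Toy model (invented numbers): the latency an application actually sees
# is the device latency plus whatever the software stack adds on top.
def system_latency_ns(device_ns, software_ns):
    return device_ns + software_ns

# A "fast" device accessed through a heavyweight block-I/O stack...
fast_device = system_latency_ns(device_ns=100, software_ns=10_000)

# ...versus a slower device accessed via direct load/store semantics
# with only a thin software layer in the way.
slow_device = system_latency_ns(device_ns=500, software_ns=200)

print(fast_device, slow_device)  # -> 10100 700
```

With these made-up numbers the “slower” device wins by more than an order of magnitude at the system level, which is exactly the “if I’ve done the right software work” argument.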

 

And so onwards, ever bigger yet smaller… There is yet plenty of room at the bottom… 😉
