Is flash a flash in the pan?

This article makes a case that it is. As with many articles about X dying, it's worth asking whether their argument makes sense.
Basically, the point they are making boils down to density, resiliency, and related aspects. Specifically, they point out that the fundamental flash design is inherently flawed … it self-destructs after a while … it wears out. So their argument begins: the denser the bits per cell, the fewer write cycles before the cell is unusable. We can work around some of these issues using intelligent write handling, signal processing, etc. But that doesn't change the fundamental physics/mechanics/chemistry of the wear-out process.
The argument continues by pointing out the relationship between increasing bit density and decreasing numbers of program-erase (P-E) cycles. None of what they say is false or faulty in logic; they are essentially correct. There are severe limits to bit density, and part of the constraint comes from the chemistry/physics of the cell, the hysteresis associated with the P-E cycle, and the accumulating structural damage to the cell.
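The density-versus-endurance tradeoff is easy to put in back-of-envelope terms. The sketch below uses rough, commonly cited orders of magnitude for P-E endurance; the cell-type figures, write volume, and write-amplification factor are illustrative assumptions, not vendor specifications:

```python
# Illustrative drive-lifetime model. Endurance figures are rough
# orders of magnitude often quoted for each cell type, NOT specs.
ENDURANCE = {  # approximate P-E cycles per cell
    "SLC (1 bit/cell)": 100_000,
    "MLC (2 bits/cell)": 3_000,
    "TLC (3 bits/cell)": 1_000,
}

def drive_lifetime_years(capacity_gb, pe_cycles,
                         writes_gb_per_day, write_amplification=2.0):
    """Years until the average cell exhausts its P-E budget,
    assuming perfect wear leveling across the whole drive."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / writes_gb_per_day / 365.0

for name, cycles in ENDURANCE.items():
    print(f"{name}: {drive_lifetime_years(256, cycles, 50):.1f} years")
```

The point the model makes is the article's point: holding everything else fixed, each extra bit per cell cuts endurance by an order of magnitude or more, and clever firmware (wear leveling, reduced write amplification) only rescales the curve, it doesn't change its shape.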
Again this is correct.
So let me take a brief anecdotal side trip.
In 1991, we didn’t have blue LEDs. Couldn’t build them in a way that would be useful. They would tear themselves apart due to threading dislocations within a matter of minutes of lighting up. Basically, the chemistry of the LEDs was such that if you were using them as an LED, chances are you were dumping enough energy into the junction to dislocate atoms and start building these physically large defects.

Fast forward to today. Yeah, more than 20 years later, you cannot escape the blue LEDs. They are everywhere.
The issue came down to how you stabilized the junction, what you built it out of. They still rip themselves to shreds, only it takes much longer now.
What I am basically saying is that I am not completely convinced that there are no other options for the Flash chips … there may be some unexplored region of a chemical compositional space which enables longer life (more PE cycles).
The article doesn’t quite go that far though. They assume that Flash is as Flash is, and the only improvements will be material shrinks and bits per cell.
Couple this with the improving capabilities of alternatives (they called out memristors, racetrack memory, phase-change memory, etc.), and they foresee that the IP we've (not us per se, but we as an industry) developed to handle the myriad issues with flash will be less useful in the longer term.
If their supposition is true, then their prediction is correct. I’m not tied into the Flash research community, so I don’t know if people are currently looking at different materials or stoichiometry for such things. I’d imagine the chip folks would be quite interested in this … the ones that build Flash chips.
This said, it's hard to do anything but baseline silicon these days … there is simply too much momentum/investment there. So I don't expect radical changes. Which probably biases my thoughts toward agreeing more with the article than not.
Their conclusion is that the long-term value of a flash company is quite low. Certainly not worth billions.
Given that Instagram went for $1B, I claim that actual real value and what people might pay for something (valuation) are rather decoupled now … so I wouldn't make the broad sweeping generalization that they made. Aside from this, we've learnt a number of important things.
First, the form of the technology isn't as important as how you present it to the end users for their use. That is, PCIe flash vs. SSD flash. Or rather than flash, let's call it FOO. Presenting FOO in a way a customer understands, and in which they see value and lower risk, lowers the barrier to having them consider it.
Second, some of the lessons of the interconnect shouldn't be tossed. PCIe makes for a great high-performance connection network. Why not connect machines with it? It's possible to swamp SAS/SATA pretty easily with a fast enough design.
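To put rough numbers on "swamping SAS/SATA," here is a quick comparison of nominal effective bandwidths. These are line rates adjusted for encoding overhead only (textbook nominals, not measurements); real throughput is lower and workload-dependent:

```python
# Nominal effective bandwidth per link, in bytes/second.
# Line rate x encoding efficiency; not benchmark numbers.
GBIT = 1e9 / 8  # bytes/s per 1 Gbit/s of line rate

links = {
    "SATA III (6 Gb/s, 8b/10b)": 6 * GBIT * 0.8,
    "SAS-2 (6 Gb/s, 8b/10b)": 6 * GBIT * 0.8,
    "PCIe 2.0 x8 (5 GT/s/lane, 8b/10b)": 8 * 5 * GBIT * 0.8,
    "PCIe 3.0 x8 (8 GT/s/lane, 128b/130b)": 8 * 8 * GBIT * 128 / 130,
}

for name, bps in links.items():
    print(f"{name}: {bps / 1e9:.2f} GB/s")
```

Even a modest x8 PCIe 2.0 slot carries several SATA links' worth of bandwidth, which is exactly why a fast enough flash design ends up sitting on the bus directly rather than behind a disk interface.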
Third, one of the bigger lessons is that RAID silicon has a shelf life … the concept/design of RAID and HBA connections was for the world of the 1990s, when disk performance metrics were very different.
Swapping flash for a different technology doesn't change these lessons. So the companies less affected by changes in the underlying technology, and able to present a FOO++ (the FOO follow-on) without too much pain, won't likely go away.
I am making the case that “it's complicated,” not that the author is right or wrong. Just that reality might be a bit more nuanced than they suggest.

3 thoughts on “Is flash a flash in the pan?”

  1. They’re wrong. Flash may not have the reliability of mechanical drives today, but even spinning rust wasn’t that reliable 30 years ago. It took widespread adoption to iron the last bugs out and get those extra two nines of reliability.
    Like you said, 20 years ago blue LEDs lasted 5 minutes.
    15 years ago LCD flatscreens were the same. (the first Matrix movie had CRT screens in the world of the future, Matrix 2 and 3 had LCDs….)
    Apple just released their new Macbook Pros. 2880 x 1800 screen resolution on a 15-inch panel. 5 years ago that wasn’t reliable, you couldn’t guarantee that 5 million pixels would all work. You can’t order those Macbooks with mechanical drives now, you can have a SSD, a bigger SSD or an even bigger SSD. Apple obviously thinks they’re reliable enough.
    Solid-state storage is here to stay. The technology will move around as reliability increases, just like every magnetic, mechanical storage technology has as it’s evolved. It might not be perfectly ready for prime-time now, but in 5 years only a few people will even remember the discussion.

  2. Or RAM that runs on such a trickle charge when “off” that it can last 5+ years on a watch battery, or… Starts to become a game of density. Once there’s a market, that improves too. Shocking. There’s no reliable complexity theory for marketed and manufactured goods providing lower bounds. 😉
    That PCIe will be taken off-board is a given, but the particular signaling’s still up in the air (IMHO). Hence the purchase from Cray.

  3. I think the real takeaway here is that papers written with partisan viewpoints (i.e., flash is dying, or flash will totally destroy the need for HDDs) tend to fare much better both in terms of acceptance and readership. While I agree things could be better reliability-wise, and believe they (flash) are here to stay, I don’t think many will end up reading the paper(s) that figure out how to fix reliability for this medium. Nor do I think people will give much heed to those works discussing how flash, HDDs, and tape can all play nicely together. That’s just too friendly ;).

Comments are closed.