Another article about the supply crisis hitting #SSD, #flash, #NVMe, #HPC #storage in general

I’ve been trying to help Scalable Informatics customers understand these market realities for a while. Unfortunately, to my discredit, I’ve not been very successful at doing so … and many groups seem to assume supply is plentiful and cheap across all storage modalities.

Not true. And not likely true for at least the rest of the year, if not longer.

This article goes into some depth on points I’ve tried to explain to others in phone conversations and private email threads. I am guessing that they thought I was in a self-serving mode, trying to get them to pull in their business sooner rather than later to help Scalable, rather than to help them.

Not quite.

At Scalable we had focused on playing the long game. On winning customers for the long haul, on showing value with systems we produced, with software we built, with services and support we provided, with guidance, research and discussions. I won’t deny that orders would have helped Scalable at that time, but Scalable would still have had to wait for parts to fulfill them, which, curiously, would have put the company at even greater exposure and risk.

The TL;DR of the article goes like this, and its authors are not wrong.

  1. There is insufficient SSD/Flash supply in market
  2. Hard disks, whose manufacturing rates have been slowly dropping, are not quite able to take up the demand slack at the moment; there are shortages there as well
  3. 10k and 15k RPM drives are not long for this world
  4. New fab capacity needed to increase SSD supply is coming online (I dispute that enough is coming online, or that it is coming online fast enough, but more is, at least organically coming online)
  5. There is no new investment in HDD/SRD capacity … HDD/SRD manufacturing lines are drying up

All of this conspires to make storage prices rise fast and storage products harder to come by. So if you have a project with a fixed, unadaptable budget, and a sense that you can order later, well … I wouldn’t want to be in anyone’s shoes, having to explain to their management team, board, CEO/CFO, etc. why a critical project was suddenly going to be both delayed and far more expensive (really, I’ve seen the price rises over the last few months, and it ain’t pretty).

These aren’t 5, 15, or 25 percent differences either. The word “material” factors into it. Sort of like the HDD shortage of several years ago with the floods in Thailand.

It is curious that we appear to not have learned, with the fab capacity located in similarly risky areas … but that would be a topic for another post some day.

Even more interesting to me personally in this article, is a repetition of something I’ve been saying for a while:

To deal with supply uncertainty, as we move from an industry based on mechanical hard drives (which has dedicated production facilities) to one based on commodity NAND, vertically integrated solutions will be optimal. Organizations that control everything from NAND supply to controllers to the software will be in a much better position to deliver consistently than those that don’t.

Vertical integration matters here. You can’t just be a peddler of storage parts, you need to work up the value chain. I’ve been saying this to anyone who would listen for the last 4 years or so.

Also

This may cause an existential crisis for the external storage array. Creating, validating and successfully marketing a new external storage array in a saturated market is difficult. It is unlikely today’s storage vendors will be trying to move up the value chain by reinventing the array.

Yeah, array as a market is shrinking fast. There are smaller faster startups and public companies all feasting on a growing fraction of a decreasing market (external storage arrays). And they stick to what they know, and try to ignore the rest of the world.

There is a component of the market which insists on “commodity” solutions, where the word “commodity” has a somewhat opportunistic definition, so that the person espousing the viewpoint can make an argument for their preferred system. These arguments are usually wrong at several levels, but the mantra persists across a fairly wide swath of the industry. It is hard to hold back a tide flowing the wrong way by shouting at it. Sometimes it is simply better to stop resisting, let it wreak its damage, and move on to something else. We can’t solve every problem. Some problem “solutions” are less focused upon analysis, and more upon fads and the “wisdom” of the masses.

You may have seen me grouse and shake my head recently over “engineering by data sheet”. This falls solidly into this realm as well, where people compare data sheets, and not actual systems or designs. I see this happen far more often in our “software eats the world” view, where people who should know better don’t.

Reminds me of my days at SGI, when I, with my little 90MHz R8k processor, was dealing with “engineering by spec sheet” from end users salivating over the 333MHz DEC Alpha processor. The latter was “obviously” faster; it had better specs.

I asked then: if this were true, how come our actual real-world tests (constructed by that same customer, no less) showed quite the opposite?

Some people decide based upon emotions and things they “know” to be true. The rest of us want hard empirical and repeatable evidence. A spec sheet is not empirical evidence. Multiple independent real world benchmark tests? Yeah, that’s evidence.
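The spec-sheet fallacy is easy to demonstrate even at a trivial scale. Below is a minimal, illustrative sketch (not any specific vendor benchmark): two loops with the identical “spec” of O(n²) work can differ measurably in wall-clock time because of memory access patterns, which no spec sheet captures. The point is that only running the actual workload tells you which is faster on a given machine.

```python
import timeit

# Illustrative only: both traversals do the same O(n^2) work and
# compute the same result, but their memory access patterns differ.
N = 500
matrix = [[1] * N for _ in range(N)]

def row_major():
    # Walk each row in order: sequential access, good locality.
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total

def col_major():
    # Walk down columns: strided access across rows.
    total = 0
    for j in range(N):
        for i in range(N):
            total += matrix[i][j]
    return total

# Same answer from both; the timings are what differ, and they
# depend entirely on the machine you run this on -- which is the point.
assert row_major() == col_major() == N * N

t_row = timeit.timeit(row_major, number=50)
t_col = timeit.timeit(col_major, number=50)
print(f"row-major: {t_row:.3f}s  col-major: {t_col:.3f}s")
```

Same logic applied at full system scale: measure the real workload on the real hardware, and treat the spec sheet as marketing until proven otherwise.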

Hyper-converged solutions, on the other hand, are relatively easy: there are a whole lot of smaller hyper-converged players that can be bought up cheaply and turned into the basis for a storage vendor’s vertically integrated play.

Well, the bigger players are rapidly being spoken for: Nutanix is public, and SimpliVity was bought by HPE.

Smaller players abound. I know one very well, and it is definitely for sale … the owners are motivated to move quickly. Reach out to me at joe _at_ this domain name, or joe.landman _at_ google’s email system for more information.

For the small virtual administrator, none of this may be relevant. Our needs are simple and we should be able to find storage even if supplies become a little tight. If, however, you measure your datacenter in acres, by the end of next year you may well find yourself negotiating for your virtual infrastructure from a company that last year you would have thought of as just a disk peddler.

This article almost completely mirrors points I’ve made in the past to some of the disk vendors I’ve spoken to, about why they might want to pick up a scrappy upstart with a very good value prop, but insufficient capital to see their plans through. I’ve seen only Seagate take action to move along the value chain, with the Xyratex purchase, and I originally thought they had done that specifically for the disk testing elements. Turns out I was wrong … they had designs on the storage appliance side as well.

All the disk vendors would do well to cogitate on this. The writing is definitely on the wall. The customers know this. The remaining runway is of limited length, and every single day burns more of it up.

Exactly what are they going to do, and when will they do it?

Customers want to know.
