I’ve had some concerns over the business model for this. The price per GB is way … way out there for SLC. The use cases for SLC vs MLC (especially with eMLC coming on line) are very similar. The cost of MLC is what makes these units affordable, and even worth considering, for people.
There seem to be a consumer/hobbyist class and a professional class. The former has a bad performance rap from the first set of products. As people discovered, RAID0s of elements that can fail, coupled with some wimpy controllers and slow MLC, don’t make for a very positive experience. But a new group of units has come on line, and some of the designs are … er … actually quite good. The question is whether or not they can hold up against the pro versions.
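The RAID0 complaint above is just compounding failure probability: a stripe dies if any one element dies. A minimal sketch of the arithmetic, using an illustrative per-element annual failure rate (the 2% and the stripe width here are assumptions for the example, not vendor numbers):

```python
# Why RAID0 across failure-prone flash elements is risky: an N-way stripe
# fails if ANY element fails, so per-element failure rates compound.

def raid0_afr(element_afr: float, n: int) -> float:
    """Probability that at least one of n independent elements fails in a year."""
    return 1.0 - (1.0 - element_afr) ** n

# Illustrative: 2% annual failure rate per element, striped 8 wide.
print(f"{raid0_afr(0.02, 8):.1%}")  # roughly 15% per year for the whole stripe
```

So an 8-wide stripe of “pretty reliable” parts is already in coin-toss-over-a-few-years territory, which is why the wimpy-controller consumer designs earned their reputation.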
The pro versions have some very serious capabilities, including RAID 1, 5, and 6 on the cards, as well as across cards using software RAID.
SSDs, the device instance of Flash, are nice for our arrays, and we leverage them. Yes, we do sell all-SSD versions of JackRabbit and DeltaV. And the performance is quite nice. Though designing/building/tuning these things correctly is hard. It’s very … VERY … different from spinning rust, and you shouldn’t think of it in the same manner, even if it looks like a disk.
Both of these technologies have places and utility. Pricing on the SSDs is still higher than I want, around $2-4/GB typically, though some capacities are closer to $8+/GB. Yes, we can build a 48 drive 46TB JR5-SSD unit, and yes, it will be a screamer. It just won’t be affordable, by any reasonable definition of the word affordable.
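The “won’t be affordable” claim is easy to sanity-check: multiply the quoted capacity by the quoted $/GB range. A quick back-of-envelope, using the figures from the text (and the rough 1TB = 1000GB convention for the estimate):

```python
# Back-of-envelope media cost for the 46TB all-SSD unit at the quoted $/GB
# price points. Capacity and prices come from the text; this is media cost
# only, not a full system price.

capacity_gb = 46 * 1000  # 46TB, using 1TB = 1000GB for a rough estimate

for price_per_gb in (2, 4, 8):
    print(f"${price_per_gb}/GB -> ${capacity_gb * price_per_gb:,}")
```

Even the low end of the range lands around $92k in flash alone, before controllers, chassis, and tuning, which is the point.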
This said, we are shipping more and more SSD and Flash based units. We don’t have a 100TB unit out in the market yet, but we are rapidly closing in on that.
But more to the point, we see the use case for each set. One shouldn’t directly compare the SSDs to the Flash units (e.g. the disk instance vs the PCIe instance), as they are really aimed at two very different market segments.
PCIe flash has great utility for a number of use cases. This isn’t an admission of drinking koolaid. This is the result of some significant internal testing we’ve done, and consideration of the various use cases. You won’t want to build an NFS server out of PCIe flash in most cases. And you probably don’t want to build 1M+ IOPS machines out of parts that can do 10k IOPS at a time, parts you have to use in a RAID0 to get this sort of performance.
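The 1M-IOPS argument is worth making explicit: RAID0 aggregates IOPS roughly linearly, so hitting the target from 10k-IOPS parts dictates the stripe width, and that width is also your failure exposure. A sketch using the figures from the text:

```python
# How many ~10k-IOPS parts does a 1M+ IOPS target imply, assuming roughly
# linear IOPS aggregation across a RAID0 stripe? (Idealized: real scaling
# loses some efficiency to controllers and software overhead.)

import math

target_iops = 1_000_000
per_device_iops = 10_000

devices_needed = math.ceil(target_iops / per_device_iops)
print(devices_needed)  # -> 100
```

A 100-wide stripe where any single element failure takes down the whole volume is exactly the design you don’t want to be running.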
SSDs do fail. Often spectacularly. Flash, well done flash, should be far more graceful in its failure.
Yeah, there is a real set of differentiated value use cases for each. The mistake is using one where you need the other. Your performance will either suck, or you will feel you’ve overpaid for it.