I have heard this argument come up several times recently. Plenty of folks from the enterprise storage realm still love their FC drives, and the SCSI crowd like their units. Both camps handily disparage SATA as inferior, poorly performing, or failure-prone.
This is an interesting point.
As far as I am aware, all of these drives come physically off the same manufacturing production lines. The only significant difference between the units that I am aware of (modulo newer motors on newer units) is the electronics that connect to the bus.
CERN has done some rather extensive real-world [note: this is no longer accessible, you can look here for a summary of it, if someone happens to have the presentation online somewhere, please let me know] (i.e. live) comparisons over the life of their large arrays of disks. They cannot draw a classification hyperplane between the SATA and the SCSI drives in terms of reliability.
If there were signal in the noise, this is exactly where you would expect to see it. Comparing MTBFs is specious at best; real data trumps (trounces) dubious hypothetical comparisons.
As for performance, we don’t see much of a difference. In our testing, a 2-disk software RAID0 on SATA units sees something like 110 MB/s on our systems, and the 2-disk software RAID0 on SCSI sees something like 140 MB/s. Now before you go all nuts over this (large-block sequential reads of the nt database from NCBI), allow me to point out that the SATA controllers invariably sit on a PCI 33 MHz/32-bit interface in the systems we tested (133 MB/s max theoretical), while the SCSI sits on a PCI-X 100 MHz/64-bit interface (about 800 MB/s max theoretical). Moreover, each SCSI drive costs about 4x the SATA drive of similar size and speed, and the SCSI controller costs more as well.
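The back-of-envelope math above is easy to check. A quick sketch (the throughput figures are the ones from this post, not new measurements):

```python
# Theoretical peak bandwidth of a parallel PCI/PCI-X bus, and how much of
# each bus the measured RAID0 numbers from this post actually consume.

def pci_bandwidth_mb_s(clock_mhz: float, width_bits: int) -> float:
    """Peak bandwidth in MB/s: clock rate times bytes transferred per cycle."""
    return clock_mhz * (width_bits / 8)

sata_bus = pci_bandwidth_mb_s(33, 32)    # plain PCI: 132 MB/s peak
scsi_bus = pci_bandwidth_mb_s(100, 64)   # PCI-X 100: 800 MB/s peak

print(f"PCI 33 MHz/32-bit  : {sata_bus:.0f} MB/s peak")
print(f"PCI-X 100 MHz/64-bit: {scsi_bus:.0f} MB/s peak")

# Measured 2-disk software RAID0 throughputs quoted above:
print(f"SATA RAID0 (110 MB/s) uses {110 / sata_bus:.0%} of its bus")
print(f"SCSI RAID0 (140 MB/s) uses {140 / scsi_bus:.0%} of its bus")
```

The point the numbers make: the SATA array is running at over 80% of its bus's theoretical peak and is almost certainly bus-limited, while the SCSI array is using under 20% of its bus, so the raw comparison flatters SCSI.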
Hardware RAID performance is fairly pathetic in either case unless you are willing to spend thousands on controller cards with decent RAID engines (most use slow processors ill-equipped to perform RAID0 quickly, though able to do a decent job on parity calculations).
FC, of course, costs a bit more than SCSI, with no real added benefit for direct-attached systems. You could use it in a SAN; that's about its only saving grace. Then again, you can take a bunch of SATA drives, attach them to a SATA->FC bridge, and get the same effect. Areca makes such a unit.
At the end of the day, one needs to carefully assess whether the SCSI vs FC vs SATA debate is worth having at all. SATA has effectively turned the other technologies into niche entries, with very few added functionality or value advantages, and certainly few that we would be comfortable proposing to a customer who didn’t need them. Most customers don’t need them. Salespeople like selling them, as they generate more margin. Marketing folks like talking up theoretical MTBF advantages and making disparaging remarks about the technology, its reliability, and so forth. This is typical when they don’t have a good answer as to why their technology is better. They don’t have such an answer, because in the vast majority of cases it is not.
P.T. Barnum would be proud of these folks.