This is the initial bring-up test run. I think you might like these numbers.
First off, configuration:
- RAID6 volumes. These are NOT RAID0. I want to emphasize this.
- Primary storage is SATA 2 drives at 7200 RPM. Not SAS at 10k or 15k RPM. Not SSDs.
- Our tuned kernel/driver stack, and hardware accelerated RAID
- 9TB usable space. The storage wall height is calculated below. It is quite good (i.e., low)
You can order these machines today, individually, or in siClusters. Imagine aggregating N of these machines for a bandwidth multiplier of N. See below for the bandwidth.
32GB uncached streaming write:
[root@jr4-1 burn-in]# fio sw.fio ... Run status group 0 (all jobs): WRITE: io=32,520MB, aggrb=2,256MB/s, minb=2,310MB/s, maxb=2,310MB/s, mint=14413msec, maxt=14413msec
This is 2.3 GB/s. Write. Uncached.
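The actual sw.fio job file isn't shown in the post; a streaming-write job that would produce this kind of run might look something like the sketch below. The filename, size, and block size are illustrative guesses, not the real job file.

```ini
; sketch of a streaming-write fio job (sw.fio is not shown in the post;
; filename, size, and block size here are illustrative, not the real values)
[global]
direct=1           ; O_DIRECT bypasses the page cache -- "uncached"
ioengine=libaio    ; async I/O engine on Linux
bs=1m              ; large sequential blocks
iodepth=16

[stream-write]
rw=write           ; pure sequential write
filename=/data/fio.test
size=32g
```

The key bit is direct=1, which is what makes the result an uncached number rather than a measurement of the page cache.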
32GB uncached streaming read:
[root@jr4-1 burn-in]# fio sr.fio ... Run status group 0 (all jobs): READ: io=32,520MB, aggrb=2,183MB/s, minb=2,236MB/s, maxb=2,236MB/s, mint=14896msec, maxt=14896msec
Ummm … nice. 2.2 GB/s read. Uncached.
Alrighty then. Let's try a little fio 1TB read and write. We don't have 1TB of RAM. Nothing close.
1TB streaming write, uncached of course:
[root@jr4-1 burn-in]# fio sw1T.fio ... Run status group 0 (all jobs): WRITE: io=1,024GB, aggrb=2,180MB/s, minb=2,232MB/s, maxb=2,232MB/s, mint=480916msec, maxt=480916msec
2.2 GB/s for a streaming write of 1TB requiring 481 seconds. This is a single machine streaming a write of 7.5TB/hour.
now 1TB streaming read, uncached of course:
[root@jr4-1 burn-in]# fio sr1T.fio ... Run status group 0 (all jobs): READ: io=1,024GB, aggrb=2,194MB/s, minb=2,247MB/s, maxb=2,247MB/s, mint=477829msec, maxt=477829msec
2.2 GB/s for a streaming read of 1TB requiring 478 seconds. This is a single machine streaming a read of 7.5TB/hour.
So, our storage wall height, defined as the size of the storage divided by the bandwidth at which you can read or write it, is about 3900 seconds. That is a little more than an hour to read or write the entire unit, so you can do this about 22 times in a day.
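The wall-height arithmetic is easy to sanity-check, using the 9TB usable capacity and the roughly 2.3 GB/s write figure from the runs above:

```python
# Back-of-envelope storage wall height: capacity / streaming bandwidth.
capacity_gb = 9000       # 9 TB usable space
bandwidth_gbs = 2.3      # ~2.3 GB/s sustained streaming write (fio runs above)

wall_seconds = capacity_gb / bandwidth_gbs
passes_per_day = 86400 / wall_seconds

print(f"wall height: {wall_seconds:.0f} s")          # ~3900 s, a bit over an hour
print(f"full passes per day: {passes_per_day:.0f}")  # ~22
```

Lower is better here: the wall height tells you how long it takes to touch every byte on the unit, which is what matters for rebuilds, backups, and full scans.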
Compared to some of our competition, whose units deliver 1/3 to 1/4 of our bandwidth at similar capacity (and at a huge price premium over our units) … well …
Will do some more runs and tuning. This is looking quite good.
As noted, the units are available individually, or in siCluster configurations with GlusterFS, Lustre, Nexenta, and soon, ceph, twistedstorage, and other file systems. These allow you to add in new units and seamlessly grow your storage capacity and bandwidth.