Not bad: 1.3 GB/s on reads

root@jr1:~# ./simple-w3.bash
+ sync
+ echo -n 'start at '
start at + date
Thu Feb 14 13:33:03 EST 2008
+ dd if=/dev/zero of=/big/local.file.5962 bs=8388608 count=10000 oflag=direct
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 68.8322 seconds, 1.2 GB/s
+ sync
+ echo -n 'stop at '
stop at + date
Thu Feb 14 13:34:12 EST 2008
root@jr1:~# ./simple-w
root@jr1:~# mv /big/local.file.5962 /big/local.file
root@jr1:~# ./simple-read.bash
+ sync
+ echo -n 'start at '
start at + date
Thu Feb 14 13:34:32 EST 2008
+ dd if=/big/local.file of=/dev/null bs=8388608 iflag=direct
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 66.5795 seconds, 1.3 GB/s
+ sync
+ echo -n 'stop at '
stop at + date
Thu Feb 14 13:35:39 EST 2008
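
For the curious, here is a minimal sketch of what simple-read.bash probably looks like, reconstructed from the set -x trace above; the actual script is not shown in the post, so treat this as an approximation:

#!/bin/bash
# sketch of simple-read.bash, reconstructed from the trace output above
set -x                  # print each command as it runs (the '+ ...' lines)
sync                    # flush dirty pages before the timed run
echo -n 'start at '
date
# 8 MiB blocks; iflag=direct (O_DIRECT) bypasses the page cache
dd if=/big/local.file of=/dev/null bs=8388608 iflag=direct
sync                    # flush again before taking the stop time
echo -n 'stop at '
date

The write test is the same idea, with dd if=/dev/zero of=/big/local.file.5962 bs=8388608 count=10000 oflag=direct in place of the read.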


We are getting there …

Given 12 disks per RAID adapter, in two RAID6s, this is ~1.3 GB/s across 20 data-bearing disks (each 12-disk RAID6 carries 10 data disks plus 2 disks' worth of parity), or an average of 65 MB/s per disk. The best I have seen out of these disks has been 73 MB/s, so this is within 11% of my observed pragmatic maximum performance. Not going to be able to eke out much more without faster drives.
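
Spelling that arithmetic out (my quick sanity check with bc, not part of the original runs):

# 2 RAID6 arrays of 12 drives each -> 2 x 10 = 20 data-bearing drives
echo "scale=1; 1300 / 20" | bc    # ~65 MB/s per data drive
echo "scale=2; 65 / 73" | bc      # ~0.89, i.e. within ~11% of the 73 MB/s best case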

Note also that this is close to 2x the single-RAID6 number I put out a few weeks ago, though that run used 13 data disks (15 in a RAID6) and we are using 10 data disks (in a 12-disk RAID6).

In fact, scaling that earlier result by the data-disk ratio gives (10/13) * 750 ~ 577 MB/s, which is about the average we are seeing (between 570 and 640 MB/s) per RAID.
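
The same back-of-the-envelope scaling, assuming the earlier single-RAID6 run was the ~750 MB/s figure used above:

echo "scale=1; 10 * 750 / 13" | bc    # ~576.9 MB/s expected per 12-disk RAID6 here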

This is direct I/O, so no OS caching is involved. Part of the reason for that is how little RAM is in this box right now. FedEx just arrived with more, but I have a proposal to work on, so more experimentation will wait until after we get that done for tomorrow.
