Some JackRabbit-S benchmarks

Have a new JackRabbit-S unit in the lab: 5.5 TB usable (a 16-drive unit, with 2 drives allocated for the OS, 1 as a hot spare, and a RAID6 built out of the remaining 13 drives).
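A quick sanity check on that capacity number, using just the drive counts above (500 GB drives, RAID6 burning two drives' worth on parity):

```shell
# 16 drives total: 2 for the OS, 1 hot spare, 13 in the RAID6.
# RAID6 spends 2 drives' worth of capacity on parity, so 11 carry data.
drives_in_raid=13
parity=2
drive_gb=500
usable_gb=$(( (drives_in_raid - parity) * drive_gb ))
echo "${usable_gb} GB usable"   # 5500 GB, i.e. the 5.5 TB quoted above
```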

root@jr1:/local# bonnie++ -u root -d . -f
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jr1         32152M           383260  53 113137  23           569714  49 498.3   0
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 24574  99 +++++ +++ 26948  92 24184  95 +++++ +++ 23935  91
jr1,32152M,,,383260,53,113137,23,,,569714,49,498.3,0,16,24574,99,+++++,+++,26948,92,24184,95,+++++,+++,23935,91

and dbench output

Throughput 682.158 MB/sec 20 procs

a really simple script:


#!/bin/bash
sync
echo -n "start at "
date
# write 100 blocks of 128 MiB (134217728 bytes) with O_DIRECT, bypassing the page cache
dd if=/dev/zero of=/local/big.file bs=134217728 count=100 oflag=direct
sync
echo -n "stop at "
date
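For reference, the dd parameters work out like this (nothing more than a quick check of the sizes involved):

```shell
# bs=134217728 is 128 MiB per block; count=100 sets the total write size
bs=134217728
count=100
total=$(( bs * count ))
echo "$total bytes"                                          # 13421772800 bytes
awk -v b="$total" 'BEGIN { printf "%.1f GB\n", b / 1e9 }'    # 13.4 GB, which dd reports as "13 GB"
```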

and its results

root@jr1:/local# ./big.sh
start at Wed Sep 12 12:57:02 EDT 2007
100+0 records in
100+0 records out
13421772800 bytes (13 GB) copied, 16.5194 seconds, 812 MB/s
stop at Wed Sep 12 12:57:19 EDT 2007

Hmmm… on a 16 GB RAM machine, even with the syncs I am worried about cache effects. Fine. Let’s blow the cache away.

Changing the 100 to 1000 there makes it a 134 GB file. To say this is well outside of cache is somewhat of an understatement.

root@jr1:/local# ./big.sh
start at Wed Sep 12 12:58:48 EDT 2007
1000+0 records in
1000+0 records out
134217728000 bytes (134 GB) copied, 191.932 seconds, 699 MB/s
stop at Wed Sep 12 13:02:00 EDT 2007

and

root@jr1:/local# ls -alF
total 131072012
drwxr-xr-x 2 root root 42 2007-09-12 12:58 ./
drwxr-xr-x 21 root root 4096 2007-09-12 11:13 ../
-rw-r--r-- 1 root root 134217728000 2007-09-12 13:01 big.file
-rwxr-xr-x 1 root root 144 2007-09-12 12:58 big.sh*
-rw-r--r-- 1 root root 0 2007-09-12 11:42 x

root@jr1:/local# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 5.1T 126G 4.9T 3% /local

root@jr1:/local# du -hal
0 ./x
126G ./big.file
4.0K ./big.sh
126G .

That’s 699 MB/s to the local file system, way outside of cache. Not bad.

With 13 disks in a RAID6, 11 are available for storage (5.5 TB, as these are 500 GB units). At about 70 MB/s native speed per disk, we have a “theoretical” native sustained speed of 770 MB/s. So this controller was driving these disks at about 90.8% efficiency.
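The efficiency figure falls straight out of the numbers above (just re-deriving it, nothing new):

```shell
# 11 data drives x ~70 MB/s each gives the theoretical aggregate streaming rate
theoretical=$(( 11 * 70 ))          # 770 MB/s
measured=699                        # MB/s, from the 134 GB dd run
awk -v m="$measured" -v t="$theoretical" \
    'BEGIN { printf "%.1f%% of theoretical\n", 100 * m / t }'   # 90.8%
```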

Hmmm… where’d my other 9.2% go? Oh yeah, little matter of OS and related.

Not bad for a little unit. Not bad at all.

Update: So here is 1.3TB of file …

root@jr1:/local# ./big.sh
start at Wed Sep 12 13:10:08 EDT 2007
10000+0 records in
10000+0 records out
1342177280000 bytes (1.3 TB) copied, 2191.99 seconds, 612 MB/s
stop at Wed Sep 12 13:46:40 EDT 2007

root@jr1:/local# ls -alF
total 1310720012
drwxr-xr-x 2 root root 42 2007-09-12 13:10 ./
drwxr-xr-x 21 root root 4096 2007-09-12 11:13 ../
-rw-r--r-- 1 root root 1342177280000 2007-09-12 13:46 big.file
-rwxr-xr-x 1 root root 145 2007-09-12 13:10 big.sh*
-rw-r--r-- 1 root root 0 2007-09-12 11:42 x

root@jr1:/local# du -alh .
0 ./x
1.3T ./big.file
4.0K ./big.sh
1.3T .

root@jr1:/local# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 5.1T 1.3T 3.8T 25% /local

612 MB/s to write a single 1.3 TB file. The RAID cache is under 0.2% of the size of this file; system RAM is about 1% of its size.

Not bad at all …

BTW: this is 1.0737e+13 bits (10.7 Tb, 10.7 TERA-bits), in 2192 seconds. This puts it at 4.9 Gb/s of IO speed.
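Spelling out that last conversion (the byte count and elapsed time are from the dd output above):

```shell
bytes=1342177280000                 # from the 10000-count dd run
seconds=2192
bits=$(( bytes * 8 ))               # 10737418240000 bits, i.e. ~10.7 Tb
awk -v b="$bits" -v s="$seconds" \
    'BEGIN { printf "%.1f Gb/s\n", b / s / 1e9 }'   # 4.9 Gb/s sustained
```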
