Some JackRabbit-S benchmarks

We have a new JackRabbit-S unit in the lab, 5.5 TB usable (a 16-drive unit: 2 drives allocated for the OS, 1 for a hot spare, and a RAID6 built out of the remaining 13 drives).

root@jr1:/local# bonnie++ -u root -d . -f
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jr1         32152M           383260  53 113137  23           569714  49 498.3   0
                   ------Sequential Create------ --------Random Create--------
                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16 24574  99 +++++ +++ 26948  92 24184  95 +++++ +++ 23935  91
jr1,32152M,,,383260,53,113137,23,,,569714,49,498.3,0,16,24574,99,+++++,+++,26948,92,24184,95,+++++,+++,23935,91
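
For reference, here is the same bonnie++ invocation with the flags spelled out (the -f fast mode skips the per-character tests, which is why the "Per Chr" columns above are blank):

# -u root : run the benchmark as root
# -d .    : use the current directory (/local) as the test directory
# -f      : fast mode, skip the per-character I/O tests
bonnie++ -u root -d . -f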

and the dbench output:

Throughput 682.158 MB/sec 20 procs
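
The dbench command line was not captured above; a run producing a 20-process throughput summary like that would typically look something like this (the exact options used are an assumption):

cd /local
# 20 client processes against the local filesystem; prints the
# "Throughput ... MB/sec N procs" summary shown above
dbench 20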

A really simple script (dd writing 128 MiB blocks with oflag=direct, so the writes bypass the page cache):


#!/bin/bash
sync
echo -n "start at "
date
dd if=/dev/zero of=/local/big.file bs=134217728 count=100 oflag=direct
sync
echo -n "stop at "
date

and its results

root@jr1:/local# ./big.sh
start at Wed Sep 12 12:57:02 EDT 2007
100+0 records in
100+0 records out
13421772800 bytes (13 GB) copied, 16.5194 seconds, 812 MB/s
stop at Wed Sep 12 12:57:19 EDT 2007

Hmmm… on a 16 GB RAM machine, even with the syncs I am worried about cache. Fine. Let's blow the cache away.

Changing the 100 to 1000 there gives about 134 GB. To say this is well outside of cache is somewhat of an understatement.
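
The only change to the script is the count on the dd line:

dd if=/dev/zero of=/local/big.file bs=134217728 count=1000 oflag=direct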

root@jr1:/local# ./big.sh
start at Wed Sep 12 12:58:48 EDT 2007
1000+0 records in
1000+0 records out
134217728000 bytes (134 GB) copied, 191.932 seconds, 699 MB/s
stop at Wed Sep 12 13:02:00 EDT 2007

and

root@jr1:/local# ls -alF
total 131072012
drwxr-xr-x 2 root root 42 2007-09-12 12:58 ./
drwxr-xr-x 21 root root 4096 2007-09-12 11:13 ../
-rw-r--r-- 1 root root 134217728000 2007-09-12 13:01 big.file
-rwxr-xr-x 1 root root 144 2007-09-12 12:58 big.sh*
-rw-r--r-- 1 root root 0 2007-09-12 11:42 x

root@jr1:/local# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 5.1T 126G 4.9T 3% /local

root@jr1:/local# du -hal
0 ./x
126G ./big.file
4.0K ./big.sh
126G .

That's 699 MB/s to the local file system, way outside of cache. Not bad.

With 13 disks in a RAID6, 11 are available for data (5.5 TB, as these are 500 GB units). At about 70 MB/s native speed per drive, we have a "theoretical" native sustained speed of 770 MB/s. So this controller was driving these disks at about 90.8% efficiency.
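
A quick sanity check of that arithmetic (with the per-drive speed assumed above):

DATA_DISKS=11        # 13-drive RAID6 minus 2 drives' worth of parity
PER_DISK_MBS=70      # assumed native sustained speed per drive
MEASURED_MBS=699     # from the 134 GB dd run above
echo $(( DATA_DISKS * PER_DISK_MBS ))                                      # 770 MB/s "theoretical"
echo "scale=3; 100 * $MEASURED_MBS / ($DATA_DISKS * $PER_DISK_MBS)" | bc   # 90.779, i.e. about 90.8%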

Hmmm… where'd my other 9.2% go? Oh yeah, the little matter of the OS and related overhead.

Not bad for a little unit. Not bad at all.

Update: So here is 1.3 TB of file …

root@jr1:/local# ./big.sh
start at Wed Sep 12 13:10:08 EDT 2007
10000+0 records in
10000+0 records out
1342177280000 bytes (1.3 TB) copied, 2191.99 seconds, 612 MB/s
stop at Wed Sep 12 13:46:40 EDT 2007

root@jr1:/local# ls -alF
total 1310720012
drwxr-xr-x 2 root root 42 2007-09-12 13:10 ./
drwxr-xr-x 21 root root 4096 2007-09-12 11:13 ../
-rw-r--r-- 1 root root 1342177280000 2007-09-12 13:46 big.file
-rwxr-xr-x 1 root root 145 2007-09-12 13:10 big.sh*
-rw-r--r-- 1 root root 0 2007-09-12 11:42 x

root@jr1:/local# du -alh .
0 ./x
1.3T ./big.file
4.0K ./big.sh
1.3T .

root@jr1:/local# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 5.1T 1.3T 3.8T 25% /local

612 MB/s to write 1.3 TB of a single file. The RAID cache is under 0.2% of the size of this file; system RAM is about 1% of the size of this file.

Not bad at all …

BTW: this is 1.0737e+13 bits (10.7 Tb, 10.7 TERA-bits), in 2192 seconds. This puts it at 4.9 Gb/s of IO speed.
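
The same figure, worked out from the dd output above:

# 1342177280000 bytes in 2191.99 seconds, expressed as bits per second
echo "scale=2; 1342177280000 * 8 / 2191.99 / 10^9" | bc   # 4.89, i.e. ~4.9 Gb/s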


2 thoughts on "Some JackRabbit-S benchmarks"

  1. So let's say I have $20k to spend (all told, tax & shipping included, etc.) and want to hang a JackRabbit directly off an existing server with a few PCI-Express x8 slots available. What performance could I get at that price point? Is it possible to approach 1 GB/s streaming reads of mmapped files?

  2. Well, you can contact us at our email address off of the website. The system above is a single RAID-card-based unit, and comes in under $9k. Under $20k with 1+ GB/s is quite possible, if you don't mind using less capacity. The issue is whether or not you want to pull the results out of the box; there we are limited to the connection between the boxes. We can do 10 GbE, DDR IB, etc., so large-block sequential will work well in this regard.

    For mmapped files, you generally have other barriers to high performance (specifically the 4 kB page size, as it pages things in and out). We might be able to help you with that using some neat tricks, though if you have source code, it likely could be tuned/tweaked so it prefetches.

    We have done some tweaking/tuning of BLAST from NCBI, which is a big user of mmap'ed files. Other codes might interact in a different manner, though.

    Send me an email at landman _at_ scalableinformatics _dot_ com, and I will be happy to go over this in detail. If you have a code fragment that you would like us to test, we might be able to do that soon as well.
