Taking a JackRabbit-M for a spin

This is a new 24TB raw JackRabbit-M system we are burning in for a customer. Unit will ship in short order, but I thought you might like to see what happens when we take it for a spin.

And when we crack the throttle.

First the basics:

24x 1TB drives (SATA II nearline drives, not desktop units) in a 4U case, with 2 hot spares and RAID6 (yes, these numbers are with RAID6). The system has 16 GB RAM, so any file larger than 16 GB will stream from disk; cache won't be involved. (A number of our competitors conveniently forget that point when reporting their benchmarks, testing only files up to and including the size of their system memory.)
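For the record, here is how the usable capacity works out from the configuration above (a quick sketch; the per-drive figure is the nominal 1 TB):

```shell
# 24 x 1TB drives, minus 2 hot spares, minus RAID6's 2 drives of parity
drives=24; spares=2; raid6_parity=2
usable=$(( drives - spares - raid6_parity ))
echo "${usable} TB usable out of ${drives} TB raw"    # 20 TB usable out of 24 TB raw
```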

First: Basic bonnie++

[root@jackrabbit ~]# bonnie++ -u root -d /big -f 
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jackrabbit   32168M           639163  67 199640  32           924484  86 503.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22183  88 +++++ +++ 23133  86 22058  92 +++++ +++ 11004  40
jackrabbit,32168M,,,639163,67,199640,32,,,924484,86,503.2,0,16,22183,88,+++++,+++,23133,86,22058,92,+++++,+++,11004,40
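If you prefer those headline numbers in MB/s, the machine-readable CSV line above can be decoded with a quick awk one-liner (field positions here are per bonnie++ 1.03's CSV output, an assumption worth checking against your version: field 5 is block write, field 11 is block read, both in KB/sec):

```shell
# pull the block write and block read rates out of bonnie++'s CSV summary
echo 'jackrabbit,32168M,,,639163,67,199640,32,,,924484,86,503.2,0,16,22183,88,+++++,+++,23133,86,22058,92,+++++,+++,11004,40' |
  awk -F, '{ printf "block write: %.0f MB/s, block read: %.0f MB/s\n", $5/1000, $11/1000 }'
```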

Yes, there is some sort of Linux bug with cached writes; we should be seeing about 1.8x the 639 MB/s we measure. It is likely due to this kernel (and associated patches). We will update later with numbers from the final OS load.

bonnie++ is not, however, directly relevant to any workload that I am aware of; it is simply a standard staple of I/O benchmarking.

Our customers want to do things like stream lots of data off (or onto) these units. Really fast.

So let's see how well this unit can read. I created a big file, named, curiously, /big/big.file. It is about 80 GB in size (remember 1 GiB != 1 GB, so there are rounding errors of a few percent if you play loosely with the conversion).

[root@jackrabbit ~]# ls -alF /big/big.file
-rw-r--r-- 1 root root 83886080000 2008-07-19 10:07 /big/big.file

[root@jackrabbit ~]# ls -alFh /big/big.file
-rw-r--r-- 1 root root 79G 2008-07-19 10:07 /big/big.file
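The two listings disagree on the size because ls -h reports binary (GiB) units while dd reports decimal (GB). Checking the same byte count both ways:

```shell
bytes=83886080000
# decimal gigabytes, as dd reports them (it rounds to "84 GB")
awk -v b="$bytes" 'BEGIN { printf "%.3f GB\n",  b / 1e9 }'                    # 83.886 GB
# binary gibibytes, as ls -h reports them (it rounds up to "79G")
awk -v b="$bytes" 'BEGIN { printf "%.3f GiB\n", b / (1024 * 1024 * 1024) }'   # 78.125 GiB
```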

Ok, rounding errors are not so important. The performance is. How long does it take a simple dd to read this file?

uncached:

[root@jackrabbit ~]# dd if=/big/big.file ...
40000+0 records in
40000+0 records out
83886080000 bytes (84 GB) copied, 55.363 s, 1.5 GB/s

cached:

[root@jackrabbit ~]# dd if=/big/big.file ... 
40000+0 records in
40000+0 records out
83886080000 bytes (84 GB) copied, 68.0477 s, 1.2 GB/s

Not bad. What about writing?

uncached:

[root@jackrabbit ~]# dd  if=/dev/zero ...
...
83886080000 bytes (84 GB) copied, 71.9762 s, 1.2 GB/s

and cached:

[root@jackrabbit ~]# dd  if=/dev/zero ...
...
83886080000 bytes (84 GB) copied, 99.9484 s, 839 MB/s
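Each rate dd prints is simply bytes divided by elapsed seconds; for example, recomputing the uncached read:

```shell
# 83886080000 bytes in 55.363 s; dd rounds this down to "1.5 GB/s"
awk 'BEGIN { printf "%.2f GB/s\n", 83886080000 / 55.363 / 1e9 }'    # 1.52 GB/s
```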

Note that for files of this size, cached reading and writing make no sense (i.e., you shouldn't do it).
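For the curious: one way to make sure you are measuring uncached reads between runs on Linux (2.6.16 and later, as root) is to flush dirty data and then drop the page cache; a minimal sketch:

```shell
# flush dirty pages to disk first
sync
# then drop the page cache (plus dentries and inodes); requires root
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi
```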

Some IOzone results

	Run began: Sat Jul 19 20:03:57 2008

	File size set to 16777216 KB
	Record Size 1024 KB
	Command line used: iozone -s 16g -r 1024 -t 4 -F /big/f.0 /big/f.1 /big/f.2 /big/f.3
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 4 processes
	Each process writes a 16777216 Kbyte file in 1024 Kbyte records

...

	Children see throughput for  4 initial writers 	=  628427.80 KB/sec
	Parent sees throughput for  4 initial writers 	=  598968.88 KB/sec
	Min throughput per process 			=  154616.17 KB/sec 
	Max throughput per process 			=  162537.95 KB/sec
	Avg throughput per process 			=  157106.95 KB/sec
	Min xfer 					= 15961088.00 KB

	Children see throughput for  4 rewriters 	=  763924.11 KB/sec
	Parent sees throughput for  4 rewriters 	=  751177.77 KB/sec
	Min throughput per process 			=  186018.53 KB/sec 
	Max throughput per process 			=  195353.62 KB/sec
	Avg throughput per process 			=  190981.03 KB/sec
	Min xfer 					= 15975424.00 KB

	Children see throughput for  4 readers 		=  822380.55 KB/sec
	Parent sees throughput for  4 readers 		=  822353.59 KB/sec
	Min throughput per process 			=  183661.97 KB/sec 
	Max throughput per process 			=  223631.67 KB/sec
	Avg throughput per process 			=  205595.14 KB/sec
	Min xfer 					= 13778944.00 KB

	Children see throughput for 4 re-readers 	=  892697.84 KB/sec
	Parent sees throughput for 4 re-readers 	=  892657.62 KB/sec
	Min throughput per process 			=  215557.22 KB/sec 
	Max throughput per process 			=  233765.73 KB/sec
	Avg throughput per process 			=  223174.46 KB/sec
	Min xfer 					= 15470592.00 KB
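A quick sanity check on these numbers: the "Children see" aggregate should be roughly the per-process average times the four processes, and it is; e.g. for the initial writers:

```shell
# 4 processes x 157106.95 KB/sec average = the aggregate iozone reports
awk 'BEGIN { printf "%.2f KB/sec\n", 4 * 157106.95 }'    # 628427.80 KB/sec
```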

Performance is very good.
