Updated JackRabbit JR5 results

Lab machine, updated RAID system (to our current shipping specs).

We’ve got a 10GbE and an IB DDR card in there for some end user lab tests over the next 2 weeks.

We just finished rebuilding the RAID unit, and I wanted a baseline measurement. So a fast write then read (uncached of course).
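For reference, a streaming write job in fio's INI format looks roughly like the sketch below. This is not the literal sw.fio (that file isn't reproduced here); the target directory, block size, queue depth, and total size are placeholders, with direct I/O to keep the page cache out of the picture.

# sw.fio (approximate sketch, not the actual job file)
# direct=1 bypasses the Linux page cache, so the write is uncached
# size is a placeholder, large enough to swamp the 32GB of RAM
[global]
ioengine=libaio
direct=1
bs=1m
iodepth=16
size=200g
directory=/data

[stream-write]
rw=write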

[root@jr5-lab fio]# fio sw.fio
...
Run status group 0 (all jobs):
  WRITE: io=195864MB, aggrb=3789.1MB/s, minb=3880.1MB/s, maxb=3880.1MB/s, mint=51680msec, maxt=51680msec

That's the write.

Here’s the read.
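As with the write, the read job below is a sketch rather than the literal sr.fio: same placeholder sizing, with direct=1 again keeping the Linux page cache out of the read path.

# sr.fio (approximate sketch, not the actual job file)
# sequential read over the data laid down by the write pass, uncached
[global]
ioengine=libaio
direct=1
bs=1m
iodepth=16
size=200g
directory=/data

[stream-read]
rw=read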

[root@jr5-lab fio]# fio sr.fio 
...
Run status group 0 (all jobs):
   READ: io=195864MB, aggrb=4639.3MB/s, minb=4750.6MB/s, maxb=4750.6MB/s, mint=42219msec, maxt=42219msec

Yeah.

Streaming 196GB in 42.2s. 4.6GB/s sustained read.

System has 32GB ram, RAID cache is 512MB. No SSDs were used for caching this file system (reads/writes).

That's what I call a nice fast read. Maximum theoretical read speed for this configuration is 4.76GB/s with the disks we have in it. This is about 96.6% efficiency. I seem to remember hearing of some very large storage array used for some small supercomputer in Illinois, where the average measured bandwidth per disk is 8 MB/s or so. We are sustaining 117.9 MB/s per disk. Sadly we weren't in the running for the next revision of this storage, as our name isn't a TLA for a specific tier 1 company. Go figure. Our tax dollars at work.

Also, I wonder if I should be amused that this system is roughly 3x faster on writes, and 4x faster on reads than the “fastest storage system in the world” on a per chassis basis. Nah.

As I said before, all that matters for raw, unapologetic firepower is what you can actually deploy to an application. The more firepower you have, the more you are able to deploy, modulo other issues (inefficient networks, etc.).


5 thoughts on “Updated JackRabbit JR5 results”

  1. Your test load is being generated from inside the storage machine itself, is that right?

  2. Amen. Plus, you probably don’t have the legal team that *must*must*MUST* have a specialized NDA for every possible evaluation use-case and repeatedly shrugs off applicable state laws. sigh.

  3. @Michael

    Yes, this is internal, on-machine testing. We find this absolutely essential for understanding whether the baseline platform has enough fundamental firepower to sink/source bits over the network. As we've seen on many an email list, end users happily point their fingers at the wrong things when their performance is sub-par. If you don't start out with a good design on the backend, there is no possible way in heck you are going to get good performance out of the front end. Previous posts on IT storage, and the recent “Unbelievable” posts, all address this issue.

    @Jason

    I guess I am at a loss to understand your meaning here. Not sure what state laws or NDAs have to do with this stuff, though in the context of benchmarks, we’ve found some marketing organizations demand absolute control over performance messaging. Enough so that we got a minor hand slap in the past when we published performance data we measured without someone’s approval. I could see why some folks want that approval … helps them “eliminate the negative”.

    We have two customers on our systems (remotely) now, and anticipate a 3rd in the next few hours. We don't require NDAs unless they are working on unreleased gear that we have to protect (gear we are under NDA for ourselves).

  4. Thanks for the explanation, Joe. I was just trying to clarify, not to argue that testing of that kind is not important. It’s just that whenever I read your amazing b/w figures, I wonder what kind of interconnect could possibly move data to/from the JR5 fast enough to take advantage of it. Let’s see. 4.6GB/s = 36.8Gb/s, even without taking protocol overhead into account. That’s a whole lotta bits.

  5. @Michael

    QDR InfiniBand should be able to keep these disks busy. Multiple QDR links should be able to completely saturate the system.
