Raw data

So here we are, generating load for a JackRabbit test. A prospective partner wants to know what it can handle. Fair enough; we would like to know if we can push it to its limits.
Basic test: a 4-way gigabit channel bond on the server, with an NFS export. Four client machines mount it, all generating load via IOzone, run like this:

/root/iozone3_283/src/current/iozone -f /mnt/jr1l/dragonfly/iozone -n 16g -y 1g -g 16g -q 1g -Ra -b /mnt/jr1l/results/jrs-dragonfly.xls
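For readers who don't have the IOzone flags memorized, here is the same command with each option annotated. The flag meanings are my reading of the iozone documentation, not notes from the original run:

```shell
# Same invocation as above, stored and echoed rather than executed,
# since the iozone binary and the NFS mount are specific to the test rig.
#   -f FILE : temporary test file (here, on the JackRabbit NFS mount)
#   -n 16g  : minimum file size of 16 GB
#   -g 16g  : maximum file size of 16 GB (so every pass uses a 16 GB file,
#             large enough to blow past client RAM and defeat caching)
#   -y 1g   : minimum record size of 1 GB
#   -q 1g   : maximum record size of 1 GB (so all I/O is in 1 GB records)
#   -R      : generate an Excel-compatible report
#   -a      : automatic mode, running the full test battery
#   -b FILE : write the Excel report to FILE
cmd="/root/iozone3_283/src/current/iozone -f /mnt/jr1l/dragonfly/iozone -n 16g -y 1g -g 16g -q 1g -Ra -b /mnt/jr1l/results/jrs-dragonfly.xls"
echo "$cmd"
```

If I read the flags right, the 16 GB file size is what shows up as the 16777216 KB rows in the results, and the 1 GB record size is the reclen of 1048576.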

I started out with a motley crew of client hosts, all running whatever version of Linux, figuring this would be fine.
Nuh uh.
First, the nVidia ck804 gigabit port (the forcedeth driver, you know the one) is terrible. Under heavy load, it appears to corrupt packets.
Fine. Replaced it with an Intel e1000-based 1000/MT. I cannot say enough good things about this card. Really, I can't.
Second, the D-Link or Linksys NIC in my old, crappy 32-bit Athlon box, a machine I have had for about 4 years now, does the same thing under load. Since I use this machine as a CD/DVD burner, this concerns me.
Now here is where we leave the realm of hardware, and start talking about OS distributions.
The clients have (er, had) SuSE 10.2, SuSE 10.1, RHEL 4, and Ubuntu 7.04. In the Ubuntu case, I found that our built kernel is generally better at packet generation (not sure why) than the stock distribution kernels. Maybe I will understand this some day.
But here is what is interesting: all units are connected by cat 5e/6 to the same switch (not the world's greatest switch, but a switch). All can achieve great data rates with things like netperf and other benchmarks.
But stick in something like IOzone, all run the same way, over the same mount point, and … well …
The Ubuntu machines give very similar results, near theoretical peak performance.

KB        reclen   write  rewrite  read    reread  random-read  random-write  bkwd-read  record-rewrite  stride-read  fwrite  frewrite  fread  freread
16777216  1048576  90194  87612    108712  114211  129081       107406        111610     109467          704803       99190   88701     99635  95944


KB        reclen   write  rewrite  read    reread  random-read  random-write  bkwd-read  record-rewrite  stride-read  fwrite  frewrite  fread  freread
16777216  1048576  85518  91480    108564  113106  125317       108438        110811     108584          649709       96596   86938     99982  95645
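As a back-of-envelope sanity check (my arithmetic, not part of the original test notes): IOzone reports throughput in KB/s, so the two initial-write numbers above work out to roughly 83–88 MB/s per client, each pushing a single gigabit NIC whose raw line rate is about 125 MB/s:

```shell
# Initial write rates from the two Ubuntu tables above, in KB/s.
w1=90194
w2=85518
# Convert to MB/s; integer MB is close enough for a sanity check.
echo "client 1 write: $(( w1 / 1024 )) MB/s"              # -> 88 MB/s
echo "client 2 write: $(( w2 / 1024 )) MB/s"              # -> 83 MB/s
echo "aggregate into the bond: $(( (w1 + w2) / 1024 )) MB/s"  # -> 171 MB/s
```

The read and reread columns, at roughly 106–112 MB/s, sit much closer to what a single gigabit port can deliver after protocol overhead, which is consistent with the "near theoretical peak" observation.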

It's the SuSE machines that worry me. These run 2.6.x kernels, x ≤ 18:

KB        reclen   write  rewrite  read   reread  random-read  random-write
16777216  1048576  97195  86594    13153  63745   112203       93241

Note that the read performance and the rewrite performance are terrible.
All benchmarks were started within 2 seconds of each other; the two Ubuntu machines finished within 20 seconds of each other, and the SuSE machine is still crunching.
Looking at the output of dstat (dstat -N bond0,eth2,eth3,eth4,eth5), I see two of the 4 channel bond ports running at full capacity during the test, and one limping along at sub-capacity.
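For scale (my numbers, assuming ideal line rate and ignoring Ethernet/IP/NFS framing overhead), a 4-way gigabit bond tops out around 500 MB/s, so a slave port limping along at sub-capacity takes a visible bite out of that ceiling:

```shell
ports=4
mbit_per_port=1000                      # gigabit line rate per bonded slave
per_port_MBps=$(( mbit_per_port / 8 ))  # 1000 Mbit/s = 125 MB/s raw
total_MBps=$(( ports * per_port_MBps ))
echo "ideal bond ceiling: ${total_MBps} MB/s"  # -> 500 MB/s, before protocol overhead
```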
You may have noticed that I haven't spoken about RHEL yet?
Well, now may be the time.
So I used RHEL (OK, CentOS 4.2) on the itanic box here as the last test case. Or at least I did, until I gave up on it.
About 1/3 of the way into the test, iozone would hang this box hard. Completely unresponsive. Big red switch time.
My impression from using Ubuntu over the past several months is that it is technically one of the best-engineered distros (and that may be due to its being built upon Debian). The kernel we built in this case has all the latest NFS patches. We may be seeing conflicts between these patches and the older, unpatched distro kernels. Performance with this distro has generally been very snappy. Stuff worked, and worked well. Building a custom kernel for patch/driver support for customers is easy (as compared to the other distros, such as SuSE and RHEL, where it is akin to pulling teeth, without anaesthetic).
I just took one of the SuSE 10.2 machines down, booted it with an Ubuntu CD, and now we are going to see what happens with 3 Ubuntu clients and 1 SuSE client. Then we will try with 4 Ubuntu clients.
Of course, while running these tests, I used atop and other tools to watch the JackRabbit. It wasn't even sweating, much less panting hard. It had lots of headroom.
And this is the small version: the JR-S, an 8 TB unit with 16 drives.

4 thoughts on “Raw data”

  1. Hi Joe,
    To be fair to the ia64 CentOS distribution, you could have tried the 4.4 version.
    Could you also report with a CentOS-4.5 client too? Even if you seem to prefer Debian or Novell 🙂

  2. @Tru
    I have been looking for the 4.5 (or 4.4, 4.3, …) DVD ISO for CentOS. I would like to find it, and I will load it on this unit.
    FWIW: I can't find a working SuSE ISO, or Debian ISO, or Ubuntu ISO, or … for IA64. This machine has become an orphan.
    With respect to CentOS, our main file server is a CentOS 4.3 box:

    [root@crunch-r ~]# cat /etc/redhat-release
    CentOS release 4.3 (Final)
    [root@crunch-r ~]# uptime
     09:31:04 up 256 days, 19:10, 15 users,  load average: 0.08, 0.13, 0.11

    While I am generally happy with it (the last outage, as it turns out, was due to yum going wildly off the reservation), I have serious issues with the 2.6.9 kernel in general.
    If you can provide a pointer to the iso, I would appreciate it. I have been looking for a while on centos.org and other mirrors.

  3. Cool, thanks. I have the 4.2 version of the ISO. I am pulling this one down now.
    I noticed 5.0 isn’t out yet for IA64. Any ETA?
