zfs un-benchmarking

So we have Solaris 10 installed on a JackRabbit-M. According to Sun’s license, as I learned last night, we cannot report benchmark results without permission from Sun. Sad, but this is how they wish to govern information flow around their product.

Our rationale for testing was to finally get some numbers we could provide to users and customers about real zfs performance. There is a huge amount of (largely uncontested) information, emanating mainly from Sun and its agents, that zfs is a very fast file system. We wanted to test this on real, live hardware, and report. Well, we can’t do the latter due to Sun’s licensing, but we did do the former.

Paraphrasing Mark Twain: “Rumors of zfs’s performance have been greatly exaggerated.”

Ok, this is just our testing. Maybe it’s an outlier.

We tested on the same hardware (literally, booting from one OS to the other).

Under very heavy loads, we have other issues:


/usr/bin/time dd if=/dev/zero of=/tank/big/big.file bs=8388608 count=10000

which generates 750+ MB/s on a 16-drive RAID6 (15 drives in RAID6 + 1 hot spare) under Linux, and results in significantly less performance using raidz2 across the exact same controller and disks on Solaris 10 5/08. So as not to violate the Sun license, I won’t specify what “significantly less” means in public. This performance gap holds regardless of whether the zpool is built from raw disks or from a RAID unit.
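For context on the workload size: that dd command writes 10,000 blocks of 8 MiB each (bs=8388608), roughly 78 GiB in total, which is likely large enough to blow well past RAM caching on a box of this class, so the numbers reflect disk throughput rather than cache. A quick arithmetic check:

```shell
# Total data written by the dd command: 10,000 blocks of 8 MiB each
bytes=$((8388608 * 10000))
echo "$bytes bytes"                                  # 83886080000 bytes
echo "~$((bytes / 1024 / 1024 / 1024)) GiB written"  # ~78 GiB
```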

But the problem is that, while this command is rock-solid stable under Linux and Windows 2008/2003, we can regularly and repeatedly crash Solaris running it. My best guess is a driver issue; this is usually the cause of mysterious crashes on other OSes.

We will try other implementations, not bound by the Sun license, so we can report numbers. It looks to us that, for high-performance file systems, you really want to be looking at one of the file systems we have benchmarked in the past.



8 thoughts on “zfs un-benchmarking”

  1. @Witek

    So what I have been hearing (mostly offline) has been that the zfs in OpenSolaris is *superior* to (or more recent, more patched, and better performing than) the one in the official Sun product. I have also heard some … well … unflattering comments about the FreeBSD zfs implementation.

    We are going to be building a new demo unit late this month, and will include OpenSolaris on our testing regime. I’ll see if we can get FreeBSD going as well. Windows 2008 runs beautifully on this box, as does Linux. Performance is amazing on those two.

    We want to use zfs for its ecc-like features. It’s not a perfect file system, and not completely resistant to corruption, but it has good features that make it resilient in the face of errors. This I like. This we want to use and exploit.

  2. @Joe. I also like zfs for ease of use, snapshots, and checksumming. I don’t pay much attention to performance, but it is very important in real applications.

  3. Here’s an online thing to hear: The zfs in OpenSolaris is more recent than the version in Solaris 10, so if you’re interested in testing the latest stuff, with all the performance enhancements that have been checked into the code tree, you’ll want OpenSolaris. Better yet, grab a recent “Nevada” build, which even has closed-source drivers built into it. The latest released build of Nevada is build 91, available here:

    http://www.opensolaris.org/os/downloads/sol_ex_dvd/

  4. @Mark

    Thanks. This mirrors what I have heard. I am now pulling the two zip files (why not a nice simple single dvd pull or torrent?).

    I have an older Nevada build (pulled when it was originally announced). Will try this one soon.

Comments are closed.