So we have Solaris 10 installed on a JackRabbit-M. According to Sun’s license, as I learned last night, we cannot report benchmark results without permission from Sun. Sad, but this is how they wish to govern information flow around their product.
Our rationale for testing was to finally get some numbers we could provide to users/customers about real zfs performance. There is a huge amount of (largely uncontested) information, emanating mainly from Sun and its agents, that zfs is a very fast file system. We wanted to test this on real, live hardware, and report. Well, we can’t do the latter due to Sun’s licensing, but we did do the former.
Paraphrasing Mark Twain: “Rumors of zfs’s performance have been greatly exaggerated.”
Ok, this is just our testing. Maybe it’s an outlier.
We tested on the same hardware (literally, booting from one OS to the other).
Under very heavy loads, we run into other issues as well. The test itself is a simple streaming write:
/usr/bin/time dd if=/dev/zero of=/tank/big/big.file bs=8388608 count=10000
which generates 750+ MB/s on a 16 drive RAID6 (15 drives RAID6 + 1 hot spare) under Linux, and results in significantly less performance using raidz2 across the exact same controller and disks on Solaris 10 5/08. So as not to violate the Sun license, I won’t specify in public what “significantly less” means. The gap holds regardless of whether the zpool is built on raw disks or on a RAID unit.
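For scale, a quick back-of-the-envelope check on that dd invocation (pure arithmetic, nothing here touches a disk; the 750 MB/s figure is the Linux result quoted above):

```shell
# How much data does the dd command above write, and how long
# should a 750 MB/s array take to absorb it?
BS=8388608                    # bytes per block (8 MiB)
COUNT=10000                   # number of blocks
TOTAL=$((BS * COUNT))         # total bytes written (~84 GB)
SECS=$((TOTAL / 750000000))   # whole seconds at 750 MB/s
echo "total: $TOTAL bytes, ~${SECS}s at 750 MB/s"
```

So each pass pushes roughly 84 GB through the array, taking on the order of two minutes even on the fast (Linux) run. That is a sustained workload, not a burst.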
But the problem is that, while the machine is rock-solid stable under Linux and Windows 2008/2003, we can regularly and repeatedly crash Solaris running this simple command. My best guess is a driver issue; that is usually the cause of mysterious crashes on other OSes too.
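The crashes show up just by rerunning that write over and over. A minimal, scaled-down sketch of such a loop (a tiny temp file stands in for /tank/big/big.file here so the loop itself runs anywhere; the real test used the full bs=8388608 count=10000 invocation):

```shell
# Scaled-down rerun loop; the real test wrote ~84 GB per pass to
# /tank/big/big.file. A small temp file stands in so this sketch
# is runnable on any box.
OUT=$(mktemp)
for pass in 1 2 3; do
    echo "pass $pass"
    dd if=/dev/zero of="$OUT" bs=1048576 count=4 2>/dev/null || break
done
SIZE=$(wc -c < "$OUT")
echo "final file size: $SIZE bytes"
rm -f "$OUT"
```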
We will try other implementations, not bound by the Sun license, so that we can report numbers. It looks to us that, for high-performance file systems, you really want to be looking at one of the file systems we have benchmarked in the past.