Is this a zfs bug or an IOzone bug?

Hmmmmmmmmmm

….

# /opt/csw/bin/iozone -Ra  -n 16m -g 16g -y 16m -m -b sol10-jrm-large.xls
        Iozone: Performance Test of File I/O
                Version $Revision: 3.217 $
                Compiled for 32 bit mode.
                Build: Solaris 

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, 
                     Jean-Marc Zucconi, Jeff Blomberg,
                     Erik Habbinga, Kris Strecker.

        Run began: Wed Jul  2 18:47:07 2008

        Excel chart generation enabled
        Auto Mode
        Using minimum file size of 16384 kilobytes.
        Using maximum file size of 16777216 kilobytes.
        Using Minimum Record Size 16384 KB
        Multi_buffer. Work area 16777216 bytes
        Command line used: /opt/csw/bin/iozone -Ra -n 16m -g 16g -y 16m -m -b sol10-jrm-large.xls
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd  record  stride                                   
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
           16384   16384  534620 1405886

Error in file: Found ?0? Expecting ?a5a5a5a5a5a5a5a5? addr 9430a60
Error in file: Position 4096 
Record # 0 Record size 16384 kb 
where 09430a60 loop 4096
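
For context, IOzone fills its write buffers with a known byte pattern (here 0xa5 repeated) and compares the pattern on read-back, so "Found 0, expecting a5a5a5a5a5a5a5a5" means a read returned zeros where written data should have been. Below is a minimal sketch of that style of fill-and-verify check (my own illustration, not IOzone's actual code):

    /* Minimal sketch of a fill-and-verify check in the style of
     * IOzone's data validation. Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PATTERN 0xa5   /* the byte behind "a5a5a5a5a5a5a5a5" */

    /* Return the offset of the first corrupt byte, or -1 if clean. */
    static long verify(const unsigned char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++) {
            if (buf[i] != PATTERN) {
                fprintf(stderr, "Found %02x expecting %02x at %lu\n",
                        buf[i], PATTERN, (unsigned long)i);
                return (long)i;
            }
        }
        return -1;
    }

    int main(void)
    {
        size_t len = 16384;
        unsigned char *buf = malloc(len);
        if (buf == NULL)
            return 1;
        memset(buf, PATTERN, len);   /* what the writer puts on disk */
        /* ... write buf to a file, then read it back into buf ... */
        long bad = verify(buf, len); /* any zeros here mean lost data */
        free(buf);
        return bad >= 0;
    }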

I will rebuild this and see what is going on, and will apply my large IO patches.

I am thinking it is a problem with 32-bit code running on a 64-bit host.
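
One way that could bite (a sketch of my hypothesis, not a diagnosis): a 32-bit binary built without largefile support has a 32-bit off_t, so file offsets silently wrap once a test reaches past 2 GiB. Truncating a 16 GiB offset to 32 bits yields exactly 0:

    /* Sketch: a 16 GiB file offset squeezed through a 32-bit off_t.
     * Hypothetical illustration of the 32-bit-on-64-bit theory. */
    #include <stdio.h>

    int main(void)
    {
        long long wanted = 16LL * 1024 * 1024 * 1024; /* 16 GiB = 2^34 */
        int off32 = (int)wanted;                      /* low 32 bits: 0 */

        printf("requested offset: %lld\n", wanted);
        printf("after 32-bit truncation: %d\n", off32);
        return 0;
    }

Building with -m64, or with -D_FILE_OFFSET_BITS=64 in a 32-bit build, gives a 64-bit off_t and avoids the wrap.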

5 thoughts on “Is this a zfs bug or an IOzone bug?”

  1. Gaaak …. it seems like building 64-bit code requires rebuilding gcc. Here we have this wonderful 64-bit machine, and a 32-bit compiler.

    Almost like purposefully tying one’s hands behind one’s back …

    Add to this that the OS utilities are 32-bit:

    bash-3.00# file /usr/bin/dd
    /usr/bin/dd:    ELF 32-bit LSB executable 80386 Version 1, dynamically linked, stripped
    

    So I guess I shouldn’t be surprised that the performance on zfs is low. The same system showed about 2.5x better IO performance under Linux and xfs. Gotta figure this one out. The driver is the latest version, and the OS is 64-bit … or is it …

    bash-3.00# uname -a
    SunOS jackrabbitm 5.10 Generic_127128-11 i86pc i386 i86pc
    

    That looks suspiciously like a 32-bit version of the OS.

    But

    bash-3.00# isainfo -v
    64-bit amd64 applications
            cx16 mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
            cx16 mon sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    

    and

    bash-3.00# isainfo -b
    64
    

    Gaak. 32-bit tools have to go through a thunking layer to deal with a 64-bit OS. Going to have to build the benchmarking tools using the 64-bit compiler, and that means I have to build the 64-bit compiler ….
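
    A quick sanity check of what the toolchain actually emits (a hypothetical snippet of mine; gcc's -m32/-m64 flags select the target word size, assuming this gcc was built with 64-bit support):

    /* bits.c: report the word size this binary was compiled for.
     * "gcc -m64 bits.c -o bits" should fail up front if this gcc
     * has no 64-bit target; otherwise "file bits" and this program
     * should both report a 64-bit binary. */
    #include <stdio.h>

    int main(void)
    {
        printf("%d-bit binary\n", (int)(sizeof(void *) * 8));
        return 0;
    }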

    Grrrrr…..

  2. Normally, a 32-bit gcc should be able to build perfectly good 64-bit binaries.
    It’s just like cross-compiling on platform-X for platform-Y: you don’t need a compiler that runs natively on platform-Y …
    For example, on Itaniums a lot of distros shipped their compilers as ia32 binaries … and the resulting ia64 code was good 🙂
    My suspicion is more that ZFS is just a big marketing machine, and not really anything suitable for HPC.
    I heard from some preliminary Lustre-OSS-with-ZFS tests that they, too, had very disappointing performance …

  3. Well, I did try the 64-bit compilation with gcc. I will look into it again.

    As for the zfs performance, color me unimpressed. Yeah, I know … I will be taken away for re-education now, accused of bias against the one true innovator.

    I do need to look into more tuning for zfs … though, unlike xfs and other Linux tuning, the zfs tuning advice I have found online suggests that the way to tune zfs is to turn off all the nice reliability features. This seems wrong.
