Puzzle solved … now good results

OK, io-bm.c is fixed. I had a typo in a #define, which did a pretty good job of removing all the MPI goodness …

Fixed, and ran it.

Looks like we see good performance, with none of the strange loss of I/O bandwidth that bonnie++ shows. This is what we see with verbose mode on.


Writing: 4 threads

[root@jr5 ~]# mpirun -np 4 ./io-bm.exe -n 128 -f /data/file -w  -d -v 
N=128 gigabytes will be written in total
each thread will output 32.000 gigabytes
page size                     ... 4096 bytes 
number of elements per buffer ... 2097152  
number of buffers per file    ... 2048  
[tid=1] file name = /data/file.1
[tid=2] file name = /data/file.2
[tid=0] file name = /data/file.0
[tid=3] file name = /data/file.3
Thread=1: time = 70.216s IO bandwidth = 466.677 MB/s
Thread=2: time = 70.188s IO bandwidth = 466.859 MB/s
Thread=0: time = 70.225s IO bandwidth = 466.616 MB/s
Thread=3: time = 70.184s IO bandwidth = 466.890 MB/s
Naive linear bandwidth summation = 1867.042 MB/s
More precise calculation of Bandwidth = 1866.463 MB/s

and Reading: 4 threads

[root@jr5 ~]# mpirun -np 4 ./io-bm.exe -n 128 -f /data/file -r  -d -v 
N=128 gigabytes will be written in total
each thread will output 32.000 gigabytes
page size                     ... 4096 bytes 
number of elements per buffer ... 2097152  
number of buffers per file    ... 2048  
Thread=2: time = 83.702s IO bandwidth = 391.483 MB/s
Thread=3: time = 83.780s IO bandwidth = 391.118 MB/s
Thread=1: time = 83.840s IO bandwidth = 390.841 MB/s
Thread=0: time = 83.814s IO bandwidth = 390.960 MB/s
Naive linear bandwidth summation = 1564.402 MB/s
More precise calculation of Bandwidth = 1563.362 MB/s

Much happier now. Will use this for some additional testing on this machine.

Let me get the code cleaned up a bit before we release it. I’ll include some Perl scripts to drive specific tests. Between this and fio, I think we have good I/O pipe-filling coverage.

