More fast rabbits …

The rebuild finished. Rebooted; still not sure why we were getting the oddities we did. Back on the test track. Time to open it up, just a little.

Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jackrabbit1  24024M           660408  94 306595  54           906116  82 400.8   0
jackrabbit1,24024M,,,660408,94,306595,54,,,906116,82,400.8,0,,,,,,,,,,,,,

As I sit here, I am watching it spill 500-700 MB/s of writes to disk. Our test case is 2x larger than physical memory, so caching isn't relevant for either reading or writing here.

Now it is switching into “Reading intelligently…”. To understand why this is so interesting, here is some dstat output again.

----total-cpu-usage---- -dsk/total---dsk/sda----dsk/sde----dsk/sdf-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ: read  writ: read  writ: read  writ| recv  send|  in   out | int   csw
0 26 70 2 0 2| 934M 0 : 0 0 : 470M 0 : 464M 0 | 198B 630B| 0 0 |6300 6976
0 24 71 2 0 4| 925M 0 : 0 0 : 461M 0 : 464M 0 | 126B 210B| 0 0 |6301 6342
0 26 68 1 0 4| 936M 0 : 0 0 : 472M 0 : 464M 0 | 66B 210B| 0 0 |6276 6333
0 27 67 2 0 4| 921M 0 : 0 0 : 456M 0 : 465M 0 | 66B 210B| 0 0 |6295 6799
0 26 69 1 0 3| 945M 9216B: 0 0 : 473M 0 : 471M 0 | 280B 210B| 0 0 |6336 7778
0 28 69 2 0 3| 935M 0 : 0 0 : 463M 0 : 472M 0 | 66B 210B| 0 0 |6265 6410
0 25 70 1 0 3| 983M 17k: 0 0 : 495M 0 : 488M 0 | 66B 210B| 0 0 |6671 6359
0 26 69 1 0 3| 926M 9216B: 0 0 : 464M 0 : 462M 0 | 377B 210B| 0 0 |6252 6001
0 27 68 1 0 3| 932M 0 : 0 0 : 464M 0 : 468M 0 | 66B 210B| 0 0 |6353 6405

Note those 900+ M numbers. That is 900+ MB/s being read from the platters, not from cache. With a test case at 2x memory, this data simply is not sitting in cache; there just isn't enough room.

For laughs, I am rerunning the test and watching it with atop. According to atop, during that 600+ MB/s burst of writes, we were using only 30-35% of the bandwidth of each link. That implies a ballpark of 2-3x more potential performance, though I am going to bet we would see about 2x in the optimally tuned case. Of course, these things are also code dependent; quite a few codes will hit their own internal walls, by design, before they hit the system's walls. The reads are showing 40-45% utilization, so I think we are closer there to what we should expect, assuming roughly 65-70% efficiency.
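For the curious, here is that headroom guess written out as a little Python scratch calculation; a rough sketch only, where the observed rate and utilization fractions are just the numbers quoted above, not a new measurement.

# Quick sanity check on the write headroom claim, using the figures quoted
# above (values come from the text, not re-measured here).

observed_write_mb_s = 600.0            # aggregate write rate seen during the run
link_utilizations = (0.30, 0.35)       # atop: roughly 30-35% of each link in use

for u in link_utilizations:
    ceiling = observed_write_mb_s / u  # naive ceiling if a link could be driven to 100%
    print(f"at {u:.0%} utilization: implied ceiling ~{ceiling:.0f} MB/s "
          f"({ceiling / observed_write_mb_s:.1f}x observed)")

# Prints ceilings around 1700-2000 MB/s, i.e. roughly 2x more on top of the
# observed rate, which lines up with the ~2x "optimally tuned" guess above.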

The other issue we are probably running into is the buffer cache, and the speed at which the processor can interact with it. On this system we see 1235 MB/s reading the buffer cache. On other systems scattered about the lab we see 6778 MB/s, 1702 MB/s, and 2123 MB/s. On a different JackRabbit we see 2102 MB/s. My belief is that this is impacting performance. I am not sure why we are seeing this, as our streams numbers on this platform are pretty nice:

-------------------------------------------------------------
Your clock granularity/precision appears to be 4000 microseconds
-------------------------------------------------------------
Function      Rate (MB/s)   Avg time     Min time     Max time
Copy:         11427.7455       0.0280       0.0280       0.0280
Scale:        11428.1834       0.0280       0.0280       0.0280
Add:          10908.3595       0.0440       0.0440       0.0440
Triad:        11999.0817       0.0436       0.0400       0.0440

This suggests something amiss in the kernel, and possibly some tuning to be done. Either that, or we cannot trust the hdparm test as a reliable measure of cache performance.

If we switch kernels and this number changes, then I expect it is something we shouldn't worry about.
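If we really wanted to sanity check that buffer-cache number without trusting hdparm itself, something along these lines would do as a rough cross-check: read a file that fits comfortably in RAM twice and time the second pass, which should be served from the page cache. A minimal sketch only; the file path and size below are arbitrary choices for illustration.

# Rough page-cache read bandwidth check, independent of hdparm.
import os
import time

PATH = "/tmp/cache_check.bin"
SIZE = 1 << 30          # 1 GiB; keep this well under physical memory
CHUNK = 64 << 20        # 64 MiB

# Create the test file in chunks so we never hold it all in memory at once.
with open(PATH, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(os.urandom(CHUNK))

def read_all(path):
    with open(path, "rb", buffering=0) as f:
        while f.read(CHUNK):
            pass

read_all(PATH)                       # first pass warms the page cache

start = time.perf_counter()
read_all(PATH)                       # second pass should come from cache
elapsed = time.perf_counter() - start

print(f"cached read: {SIZE / elapsed / 1e6:.0f} MB/s")
os.remove(PATH)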

I will run iozone now, save the results, and use them for the benchmark report. Should be fun. Once this is done, we have some requests to load Windows x64. I will need to build IOzone for Windows; I hope that is possible.

Note: this is worth a short discussion. Each RAID controller has 18 disks organized in a RAID6, which gives us 16 effective disks. Each disk can read at about 70 MB/s and write at about 45 MB/s. Sixteen disks' worth (18 - 2 for parity) gives a rough read bandwidth of 1.1 GB/s per RAID controller, operating at 100% efficiency. We are seeing about 40-45% of this. The interface between each RAID controller and the machine is PCIe x8, which means 4 GB/s duplex, or 2 GB/s of bandwidth available in each direction. The 16 equivalent disks could use about half of that, and we are seeing less than half of that half. That is, we are driving the reads on each controller at 1/4 or less of the pipe bandwidth.
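Writing that arithmetic out, per controller, as a quick back-of-envelope in Python; the disk counts and per-disk rates are the ones above, the ~2 GB/s per-direction figure is the stated PCIe x8 bandwidth, and the ~470 MB/s per controller is read off the dstat run earlier.

# Back-of-envelope ceilings for one RAID controller.
disks_per_controller = 18
parity_disks = 2                        # RAID6
effective_disks = disks_per_controller - parity_disks    # 16

disk_read_mb_s = 70.0
disk_write_mb_s = 45.0

raid_read_ceiling = effective_disks * disk_read_mb_s     # ~1120 MB/s
raid_write_ceiling = effective_disks * disk_write_mb_s   # ~720 MB/s

pcie_per_direction_mb_s = 2000.0        # PCIe x8, one direction

observed_read_mb_s = 470.0              # per controller, from the dstat output

print(f"disk-limited read ceiling : {raid_read_ceiling:.0f} MB/s "
      f"({raid_read_ceiling / pcie_per_direction_mb_s:.0%} of the PCIe pipe)")
print(f"observed per controller   : {observed_read_mb_s:.0f} MB/s "
      f"({observed_read_mb_s / raid_read_ceiling:.0%} of the disk ceiling, "
      f"{observed_read_mb_s / pcie_per_direction_mb_s:.0%} of the PCIe pipe)")

# Prints roughly: 1120 MB/s (56% of the pipe); 470 MB/s (42% of the disk
# ceiling, 24% of the pipe), which is where the 40-45% and "1/4 of the pipe"
# figures above come from.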

What this suggests is that we should use more, smaller RAID controllers, so that we use more of the effective bandwidth per RAID card. We have more than enough backplane bandwidth for the IO; what we need is to distribute the load across more RAID units. Ballpark guess: about double the performance.

What's nice is the amount of head room in the design. We aren't hitting the limits, as compared to other units which are badly oversubscribed on bandwidth and overall performance.

We aren't even near our limits, and this system roars. Our RAID6 numbers are quite a bit nicer than others' RAID5 numbers (and in some cases, better than their RAID0 numbers). Imagine what would happen if we had all the controllers we needed.
