Testing a new @scalableinfo Unison #Ceph appliance node for #hpc #storage

Simple test case, no file system … using raw devices, seeing what I can push out to all 60 drives in 128k chunks. This is actually part of our burn-in test series; I am looking for failures and performance anomalies.
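For the curious, a fio job along these lines would drive a pass like this. It is a sketch, not the actual burn-in job file; the device names, per-device size, and queue depth are placeholders:

; Sketch of a raw-device burn-in job. Device names, per-device size,
; and queue depth are placeholders, not the exact job file for this run.
[global]
; async IO straight to the raw block devices, bypassing the page cache
ioengine=libaio
direct=1
; 128k chunks, as in the test above
bs=128k
iodepth=16
; write pass, with CRCs generated on write and then read back and checked
; (crc32c is just one of fio's CRC verify flavors)
rw=write
verify=crc32c
; aggregate the per-device numbers into a single report
group_reporting

[sdb]
filename=/dev/sdb
size=10g

[sdc]
filename=/dev/sdc
size=10g

; ... one stanza per raw device, 60 in all

While a pass runs, dstat shows the drives loading up: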

----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw 
  0   1  95   5   0   0| 513M    0 | 480B    0 |   0     0 |  10k   20k
  4   2  94   0   0   0|   0     0 | 480B    0 |   0     0 |5238   721 
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4913   352 
  0   2  98   0   0   0|   0     0 | 570B   90B|   0     0 |4966   613 
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4912   413 
  0   2  98   0   0   0|   0     0 | 584B   92B|   0     0 |4965   334 
  0   2  98   0   0   0|   0     0 | 480B    0 |   0     0 |4914   306 
  0   2  98   0   0   0|   0     0 | 636B  147B|   0     0 |4969   483 
  0   2  98   0   0   0|   0     0 | 570B    0 |   0     0 |4915   377 
  8   8  50  32   0   2|7520k 8382M| 578B    0 |   0     0 |  76k  215k
  9   7  30  52   0   3|8332k   12G| 960B  132B|   0     0 | 109k  279k
 10   5  29  53   0   2|4136k   12G| 240B    0 |   0     0 | 109k  277k
 12   6  29  51   0   2|4208k   12G| 240B    0 |   0     0 | 108k  280k
 11   6  31  50   0   2|2244k   12G| 330B   90B|   0     0 | 109k  281k
 11   6  30  50   0   3|2272k   13G| 240B    0 |   0     0 | 110k  281k

Writes are running around 12.5-13 GB/s, and reads are about the same. That is roughly 1/4 of the theoretical max for the controller complex on this unit in its current configuration. For 60 spinning disks, it works out to a bit more than 215 MB/s per drive.
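(Quick math: 13 GB/s across 60 drives is about 216 MB/s per drive.)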

Each test reads and writes a bit of data:

count=323:    READ: io=614400MB, aggrb=12617MB/s, minb=12617MB/s, maxb=12617MB/s, mint=48695msec, maxt=48695msec
count=323:   WRITE: io=614400MB, aggrb=12538MB/s, minb=12538MB/s, maxb=12538MB/s, mint=49003msec, maxt=49003msec

That is, each pass reads and writes about 1.2TB of data (614GB read and 614GB written, with CRCs generated on write and checked on read). It has gone through 323 iterations since I started it ~12 hours ago, or roughly 387TB (more than 1/3 PB) of IO.
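(614GB read + 614GB written ≈ 1.2TB per pass; 1.2TB × 323 passes ≈ 387TB.)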

Basically, I am making sure there are no issues in the data path: driver, controller complex, and so on. I am sustaining a load average on this unit of about 60, with ample headroom left over for other work (networking, erasure coding, etc.).

root@usn-ramboot:~/burn-in/fio# top

top - 16:27:03 up 10:32,  2 users,  load average: 60.80, 59.76, 59.33
Tasks: 575 total,   4 running, 571 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.5 us, 13.2 sy,  0.0 ni, 27.6 id, 47.8 wa,  0.0 hi,  3.9 si,  0.0 st
MiB Mem:    257761 total,     8249 used,   249511 free,       29 buffers
MiB Swap:        0 total,        0 used,        0 free,     4369 cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND                                                                          
27131 root      20   0  413m  14m 1912 R    21  0.0   0:08.35 fio                                                                              
27132 root      20   0  413m  14m 1896 D    21  0.0   0:08.35 fio                                                                              
27184 root      20   0  466m  14m 1972 D    21  0.0   0:08.39 fio                                                                              
27133 root      20   0  413m  14m 1896 D    21  0.0   0:08.54 fio                                                                              
27161 root      20   0  465m  14m 1984 D    21  0.0   0:08.48 fio                                                                              
27162 root      20   0  465m  14m 1976 D    21  0.0   0:08.41 fio                                                                              
27164 root      20   0  466m  14m 1980 D    21  0.0   0:08.49 fio                                                                              
27167 root      20   0  466m  14m 2036 D    21  0.0   0:08.54 fio                        

I’ve got to run into the lab and connect it to the 40GbE switch, and then start the network tests as well.

Imagine, say … 5 of these units, each with 6TB drives … in one beautiful Unison Ceph system. 1.8 PB raw, performance basically off the charts. Extreme density, extreme performance (and this is before we talk about the NVM that we will be installing).
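(5 units × 60 drives × 6 TB = 1800 TB, or 1.8 PB raw.)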

Assuming I can get the rest of the ducks in a row early next week, we’ll have a bunch of cbt (Ceph Benchmarking Tool) tests done.

More soon.
