[root@jr5-lab ~]# fio sw.fio
Run status group 0 (all jobs):
  WRITE: io=19,200GB, aggrb=2,323MB/s, minb=2,379MB/s, maxb=2,379MB/s, mint=8463222msec, maxt=8463222msec
That’s 8,463.2 seconds to you and me: 2.351 hours, or 8.17 TB/hour.
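A quick sanity check of those numbers (the figures are from the fio output above; the variable names are mine):

```python
# fio reports mint/maxt in milliseconds; io= is the total data written.
runtime_s = 8463222 / 1000      # 8,463,222 msec -> seconds
runtime_h = runtime_s / 3600    # -> hours
tb_written = 19.2               # io=19,200 GB, i.e. ~19.2 TB

print(f"{runtime_s:.1f} s = {runtime_h:.3f} h")        # 8463.2 s = 2.351 h
print(f"{tb_written / runtime_h:.2f} TB/hour")         # 8.17 TB/hour
```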
And we didn’t even fill the unit up.
This is what we mean by a low bandwidth wall: you can conceivably read or write the entire store in a matter of hours. If your platform can’t handle this (and most can’t), then you have a very high wall erected between you and your data.
To put it in perspective, I measure the height of the wall as the total storage capacity divided by the bandwidth to access that storage (bandwidth as measured for a realistic end-user disk configuration). In this case we have 36TB usable and a measured write bandwidth of 2.3 GB/s. Our bandwidth wall is then 15,652 seconds (the wall is expressed as the time in seconds to write the entire unit), or 4.3 hours. For a 3.1 GB/s sustained read, the read wall is 11,613 seconds, or 3.2 hours.
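The wall metric above is just capacity divided by bandwidth; a minimal sketch (function name and units are my choices, not from the post):

```python
def bandwidth_wall_seconds(capacity_tb, bandwidth_gb_s):
    """Seconds to stream the entire capacity at the measured bandwidth."""
    return capacity_tb * 1000 / bandwidth_gb_s  # TB -> GB, then divide by GB/s

# 36 TB usable, 2.3 GB/s measured write, 3.1 GB/s sustained read
write_wall = bandwidth_wall_seconds(36, 2.3)
read_wall = bandwidth_wall_seconds(36, 3.1)

print(f"write wall: {write_wall:.0f} s ({write_wall / 3600:.1f} h)")
print(f"read wall:  {read_wall:.0f} s ({read_wall / 3600:.1f} h)")
```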
The narrower the pipes to the disks, the fewer pipes there are, and the more oversubscribed those pipes become (all hallmarks of “classical” data center storage designs), the (far) worse your bandwidth wall will be. As you collect ever more data, store ever more data, and process ever more data, this wall becomes of paramount importance. If you have an infinite number of processor cores that you can’t feed with data because you can’t move it off your disks fast enough, then why bother even storing that data? And if you can’t build a cost-effective storage medium that adequately supplies the processor cores, why bother building it at all? One-offs don’t scale.