The TB sprint … 12.4 TB/hour write speed

We wanted to see what one of the current generation machines could do for writing and reading a 1TB (1000GB) file. So we set up a simple fio deck to do this. Then ran it.
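The post doesn't reproduce the deck itself, so here is a minimal sketch of what such a job file might look like; the block size, I/O engine, queue depth, and target path are all assumptions, with only the 1TB file size and the single thread taken from the text.

# hypothetical fio job deck (tb-sprint.fio); parameters are assumed, not the original
[global]
# single 1TB test file on the target file system (path is an assumption)
filename=/data/fio_1tb_test
size=1000g
# large sequential blocks, async I/O, page cache bypassed (all assumed)
bs=1m
ioengine=libaio
iodepth=16
direct=1
numjobs=1

[seq-write]
rw=write

[seq-read]
# start the read pass only after the write pass completes
stonewall
rw=read

Run as "fio tb-sprint.fio", this does the streaming write first and the streaming read second, matching the order of the results below.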

Run status group 0 (all jobs):
  WRITE: io=999.78GB, aggrb=3535.7MB/s, minb=3620.6MB/s, maxb=3620.6MB/s, mint=289552msec, maxt=289552msec

The write took 289.6 seconds: less than 5 minutes, or a 12.4 TB/hour write speed. The read

Run status group 0 (all jobs):
   READ: io=999.78GB, aggrb=3595.7MB/s, minb=3681.4MB/s, maxb=3681.4MB/s, mint=284766msec, maxt=284766msec

took 284.8 seconds. This is 12.7 TB/hour read speed.
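As a quick cross-check, the hourly figures fall straight out of the fio summary lines; a minimal sketch in Python, treating fio's GB and the TB/hour figures as plain decimal units:

# rates recomputed from the fio summary lines quoted above
gb_moved   = 999.78     # io= field (same for the write and the read)
write_secs = 289.552    # maxt from the WRITE line
read_secs  = 284.766    # maxt from the READ line

print(gb_moved / write_secs * 3600 / 1000)   # ~12.4 TB/hour
print(gb_moved / read_secs  * 3600 / 1000)   # ~12.6-12.7 TB/hour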

So this single unit can read about 304 TB/day (0.3 PB/day) and write about 299 TB/day (also about 0.3 PB/day), with a single thread to a single file system. Slightly more than 3 such units could read or write 1PB/day.

72 such units could, in aggregate, read and write roughly 1PB/hour. Such a system would fit within 9 racks, with each rack hosting up to 1.1PB raw and the system totalling about 10.3PB raw.

And the storage bandwidth wall height would be (in aggregate) about 3.6 × 10^4 seconds: roughly ten hours to read or write the entire 10.3PB at 1PB/hour.
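A back-of-the-envelope sketch of that scaling arithmetic in Python, using the post's own figures and taking the 1PB/hour aggregate bandwidth and 10.3PB raw capacity as given:

# per-unit daily throughput from the measured hourly rates
write_tb_per_hr = 12.4
read_tb_per_hr  = 12.7
print(write_tb_per_hr * 24)            # ~298-299 TB/day written per unit
print(read_tb_per_hr * 24)             # ~304-305 TB/day read per unit
print(1000 / (write_tb_per_hr * 24))   # ~3.4 units to move 1PB/day

# the proposed 72-unit, 9-rack system
agg_pb_per_hr = 1.0    # aggregate bandwidth, as stated above
raw_pb        = 10.3   # aggregate raw capacity, as stated above
print(raw_pb / agg_pb_per_hr * 3600)   # ~3.6-3.7 x 10^4 s, depending on rounding: the bandwidth wall height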

Yeah. Baby.


One thought on “The TB sprint … 12.4 TB/hour write speed”

  1. For reference, the NYSE ingests about 1.5PB/day. You’re in the ballpark of being able to provide that data to 1 – 3 complex analysis kernels running at each day’s scale (not fast enough for more intelligent trading panic thresholds). I don’t know where others are at the leading edge, but I know non-leading-edge providers aren’t even within dreaming distance. Facebook apparently handles about 15 TB of new data daily.
