# What should 432TB of storage cost?

This is close to 1/2 PB. Assume you are building a very fast storage unit and backup system. What should this cost? Yeah, we can argue about cost per GB/s and cost per IOPS. Assume 3GB/s and 10k IOPS. Assume the unit is 144TB raw (108TB usable) of primary fast storage, and 288TB raw (216TB usable) of backup storage.
There is a poll for this post, but you have to click the title to be able to participate.
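The raw-to-usable figures above imply a fixed redundancy overhead on each tier. A quick sketch of the arithmetic (Python, using only numbers from the post; the 25% figure is derived, not stated):

```python
# Capacities from the post: raw TB and usable TB for each tier.
primary_raw, primary_usable = 144, 108   # fast primary storage
backup_raw, backup_usable = 288, 216     # backup tier

total_raw = primary_raw + backup_raw           # 432 TB, the headline figure
total_usable = primary_usable + backup_usable  # 324 TB

# Both tiers give up the same fraction of raw capacity to redundancy
# (e.g. RAID parity or spares): 1 - 108/144 = 1 - 216/288 = 0.25.
overhead = 1 - primary_usable / primary_raw

print(total_raw, total_usable, overhead)
```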

### 8 thoughts on “What should 432TB of storage cost?”

1. twitchtwitchtwitch…
I’m trying to decide if I want to answer sardonically or idealistically. The practical answer, alas, follows the sardonic one. We’re funded to compute things, not store them…

2. @Jason:
Let me add that in as an option … It is a real (and correct) decision point for some people.

3. After voting ends on the 31st, I’ll let everyone know what the real range is.

4. Ok, I admit that computing without seriously storing isn’t fundamentally flawed in institutions where the majority of workers turn over in 2-4-6-10 years… It just makes me sad when fast storage would make debugging and reproducibility easier.

5. We, too, are paid to compute. So we have four dozen workstations, each with 2..4 GB of local storage, each doing double duty as a personal NFS server for jobs on the compute cluster.
It does work surprisingly well, after all. The key is that bandwidth is needed only when analyzing, but at that point, the data is local. While jobs are running, results are merely trickling to the disk (maybe 1GB/day per job), and raw data rarely changes hands.

6. @kirjoittaessani
One option for the NoW approach (Network of Workstations) is to leverage GlusterFS and its distributed replicated mode. With 3.1.3, you could (in theory, we haven’t tested it yet in this mode) turn off their native NFS server, and use a portion of the local disk for a distributed (and replicated) data store.
BTW: are these 2..4 GB or 2..4 TB? We’ve run tiny-footprint OSes in the past for our JackRabbit units on CF cards. We had a few installations at right around 900MB total footprint with everything the customer needed, which fit nicely onto a 4GB card.
These days we are using larger units for the OS. There’s room to use a portion as part of this sort of distributed server, but not much more than that.

7. @Joe: Oops. That should have read 2..4 TB, of course.
As for GlusterFS, we did think about it for a time. Eventually, we decided that a simpler, non-distributed approach would suffice, though.

8. The whole thing, with support, services, installation, etc., was under $160k USD. That works out to something less than $0.37 USD/GB.
This is fast, high-density storage with mirrored backups, support, etc.
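The per-GB figure follows directly from the price and the raw capacity. A quick check of the arithmetic (Python; decimal GB-per-TB is an assumption, though it is the convention storage vendors use):

```python
# Figures from the final comment: total system price and raw capacity.
price_usd = 160_000      # "under $160k USD" -- treat as an upper bound
raw_tb = 432
raw_gb = raw_tb * 1000   # decimal TB->GB, as storage vendors quote it

cost_per_gb = price_usd / raw_gb
print(f"${cost_per_gb:.2f}/GB")  # about $0.37/GB, matching the comment
```

Note this divides by raw capacity; against the 324TB usable figure the cost would be closer to $0.49/GB.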