Hmmm … looks like some of these hinted results were run on our siCluster

See this link for more. Specifically, note the mention of:

Tony Asaro: What are your File OPS? Based on what configuration? Are you going to participate in SpecFS?
Gluster Folks: Here’s an interesting test result. We tested a read workload using 128k blocks vs. the more common use of small block size such as 4k (it was a throughput test and we were not optimizing for IOPS). We achieved 131,000 IOPS across 8 storage nodes (16,375 per node). The configuration used 32 clients running IOzone. Each server had 3 RAID controllers and 18TB of storage (142TB total capacity). Interconnect was InfiniBand QDR (one card per server). We are planning to run additional tests to get some eye-opening IOPS numbers, we’ll keep you posted.
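For a sense of scale, here's a quick back-of-the-envelope conversion (my arithmetic, not part of the quoted result): at a 128 KiB record size, an IOPS figure is really a bandwidth figure in disguise. A minimal sketch, assuming 128 KiB = 131,072 bytes per I/O:

    # Back-of-the-envelope check of the quoted figures (assumed arithmetic,
    # not numbers from the original post): 131,000 read IOPS at 128 KiB.
    IOPS_TOTAL = 131_000          # aggregate read IOPS quoted across 8 nodes
    NODES = 8
    RECORD_SIZE = 128 * 1024      # 128 KiB per I/O, in bytes

    per_node_iops = IOPS_TOTAL / NODES           # matches the quoted 16,375
    aggregate_bw = IOPS_TOTAL * RECORD_SIZE      # bytes/second, all nodes
    per_node_bw = aggregate_bw / NODES

    print(f"per-node IOPS:       {per_node_iops:,.0f}")                # 16,375
    print(f"aggregate bandwidth: {aggregate_bw / 2**30:,.1f} GiB/s")   # ~16.0 GiB/s
    print(f"per-node bandwidth:  {per_node_bw / 2**30:,.2f} GiB/s")    # ~2.00 GiB/s

That works out to roughly 2 GiB/s of streaming reads per storage node, which is exactly the kind of number people have trouble believing.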

Yeah … definitely a siCluster benchmark. It’s a shame we weren’t asked for help promoting this. We have quite a few nice results with this system. The benchmarks for end-user-accessible streaming performance are hard for many folks to believe. You should hear some of the comments we get, such as “there is no way you can achieve these results with your setup.” Seriously.
But we do.

2 thoughts on “Hmmm … looks like some of these hinted results were run on our siCluster”

  1. Yeah, and the whole “Red Hat of storage” thing? Umm, no, as a Red Hat employee who has also worked with Gluster, I’d kind of prefer that Red Hat be the Red Hat of storage. Still, though, this is good fodder for the “it’s user-space so I’ll turn up my nose at it” silliness I get sometimes.

  2. @Jeff
    Red Hat’s business model is very different from Gluster’s, as you are well aware: aggregation and integration of open source bits, value-add, and support on the one hand, versus a somewhat horizontal product (not a point product per se) addressing a number of important storage areas on the other. The article’s author did justice to neither Red Hat nor Gluster by conflating these things.
    My focus was on the benchmarks. I’ll bug them about getting some real numbers out on our hardware that we can all talk about.
