# More ΔV numbers: as direct-attached storage for a Windows 2008 x64 server

This is over a single 10 GbE link, using a pair of Intel cards with the ixgbe driver, and a CX-4 cable. The ΔV is configured in one of our two basic modes, in this case a RAID10 unit. It is exporting a roughly 3.5 TB partition over iSCSI to the Windows 2008 x64 server box, an Intel dual Woodcrest 2.66 GHz machine with 4 GB RAM.

I wanted to see what the bandwidth limits are. Previous work with iSCSI on these systems put a large-block sequential read at about 400 MB/s and a write at about 300 MB/s for a single thread. And that is what I saw. The limitation doesn’t appear to be the ΔV (more on that in a moment).

For 4 worker threads (using IOmeter), I am seeing a sustained 580 MB/s streaming read, and a little north of 400 MB/s streaming write.
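
IOmeter drives each worker as an independent sequential stream, and the aggregate number is just the sum of what the workers move in the same wall-clock window. A minimal Python sketch of that worker model (this is not IOmeter itself; the file path, block size, and worker count are placeholders):

```python
import os
import threading
import time

def stream_read(path, offset, length, block_size=1 << 20):
    """Sequentially read `length` bytes starting at `offset`; return bytes read."""
    total = 0
    with open(path, "rb") as f:
        f.seek(offset)
        while total < length:
            chunk = f.read(min(block_size, length - total))
            if not chunk:
                break
            total += len(chunk)
    return total

def multi_worker_read(path, workers=4, block_size=1 << 20):
    """Split the target across `workers` threads, one sequential stream each,
    mimicking IOmeter's worker model. Returns aggregate MB/s.
    (Any remainder bytes when size % workers != 0 are ignored -- a sketch.)"""
    size = os.path.getsize(path)
    span = size // workers
    results = [0] * workers

    def run(i):
        results[i] = stream_read(path, i * span, span, block_size)

    threads = [threading.Thread(target=run, args=(i,)) for i in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return sum(results) / elapsed / 1e6
```

Against a cached local file this mostly measures memory bandwidth; pointed at a raw iSCSI-backed volume it would exercise the same multi-stream path described above.
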

ΔV itself has demonstrated pretty good numbers (see previous post). Most of the performance problems we see in single threads on 10 GbE are due, curiously enough, to single threads on 10 GbE. I am not sure I understand why this is the case … it is hard to get a “real world” (e.g. user space -> kernel space -> across network -> remote kernel space -> spinning disk) benchmark that comes close to wire speed on 10 GbE (or IB, for that matter). Sure, I see iperf and microbenchmarks galore. The only benchmarks that matter are real-world code and someone with a real stopwatch.
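
To see why iperf-style numbers are an upper bound rather than a prediction: a network microbenchmark exercises only the socket path and never touches a disk. A hedged sketch of that idea over loopback (function name and sizes are mine, not from any benchmark suite):

```python
import socket
import threading
import time

def socket_throughput(total_mb=64, block=1 << 16):
    """Push `total_mb` MB through a loopback TCP connection and return MB/s.
    This measures only the user->kernel->socket path -- the kind of number
    iperf reports -- not the full path down to spinning disk that a real
    application sees."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    received = [0]

    def sink():
        conn, _ = srv.accept()
        while True:
            data = conn.recv(block)
            if not data:
                break
            received[0] += len(data)
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * block
    start = time.perf_counter()
    for _ in range(total_mb * (1 << 20) // block):
        cli.sendall(payload)
    cli.close()
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return received[0] / elapsed / 1e6
```

The gap between a number like this and the iSCSI results above is exactly the part the microbenchmarks leave out.
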

That real-world test is what I expect next week at SC08. ΔV will be attached to a similar unit performing real application runs. Looking at our configuration, I believe we have some headroom.

What is interesting is the price point for this unit. It wasn’t designed to be the fastest unit out there, though it does run pretty well. It was designed to hit particular price points, in the hope of providing excellent storage reliability, capacity, and performance at a reasonable price.

These units deliver up to 36 TB in a 4U package, starting at well under $1/usable GB in RAID6 configurations.
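
The arithmetic behind a $/usable-GB figure is simple; the drive count, drive size, and system price below are purely hypothetical illustrations, not our actual configuration or pricing — only the "under $1/usable GB" claim comes from this post:

```python
def raid6_usable_tb(drives, drive_tb):
    """RAID6 usable capacity: two drives' worth of space goes to parity."""
    return (drives - 2) * drive_tb

def price_per_usable_gb(system_price, usable_tb):
    """Dollars per usable gigabyte (using 1 TB = 1000 GB)."""
    return system_price / (usable_tb * 1000.0)

# Hypothetical: 24 x 1.5 TB drives in RAID6 -> 33 TB usable.
# A hypothetical $30,000 system price would then be ~$0.91/usable GB,
# i.e. under the $1/usable GB mark.
```
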
Look for our announcement soon.