QDR switches are here, QDR switches are here!

(channeling Steve Martin in “The Jerk” when talking about the new phonebooks …)

40 Gb ports. $400/port or so. See InsideHPC.com for more.

For any Voltaire folks reading this, feel free to fire over a loaner QDR switch and a pair of cards. We would love to see whether the pair of JackRabbits we are finishing up for a customer can in fact saturate these links. The issue is usually that the buffer copies between the disk and network drivers are slow, so we see significant performance loss with SDR. We will be trying DDR shortly, and hopefully QDR as well.

40% of the way to 100 Gb. Woot!


5 thoughts on “QDR switches are here, QDR switches are here!”

  1. I’m no Infiniband expert, so correct me if I’m wrong, but isn’t it more like “32% of the way to 100 Gb,” since Infiniband QDR throughput is 32 Gb/s? The way Infiniband products are marketed seems a bit deceptive to me: they advertise the “signaling rate” when the actual data throughput, after 8b/10b encoding, is only 80% of that rate.

  2. Well, I am not an Infiniband marketeer. They have been talking about 10 Gb, 20 Gb, and 40 Gb for a while. I think the 10 GbE folks do a similar thing (though I could be wrong).

    This said, if you want to get down to it, it would be more like 32 Gb/s × 86% (PCIe overhead … not sure if that figure also holds for PCIe2, will look that up). Call it 27 Gb/s, or about 3.4 GB/s. That’s still pretty fast. (A quick back-of-the-envelope sketch of this arithmetic appears after the comments.)

    But your point is valid in that marketeers sometimes take … ah … er … liberties … with numbers.

    This said, we would still love to see what our JackRabbits can do with them.

  3. @Joe: with 10Gb Ethernet you actually can get 10Gbps throughput. When 40Gb Ethernet arrives on the scene you’ll be able to get 40Gbps throughput. The Ethernet folks are not playing the same marketing game.

    Your point about QDR Infiniband being fast is well-taken, though. 🙂

  4. Regarding TCP performance on 10 Gb/s infrastructure, I will refer the honourable gentlemen to Van Jacobson’s excellent talk at LCA 2006 in Dunedin, where he showcased his radically improved TCP stack.

    It could drive a single TCP stream at 4.3 Gb/s (compared to just over 2 Gb/s for the standard kernel) and was limited purely by the memory bandwidth of DDR333 RAM; he commented that they swapped in faster memory and got better numbers after the paper had been prepared.

    He estimated then that you’d need DDR800 RAM to be able to drive a single TCP stream at line rate with his code. 🙂 (A rough memory-bandwidth sketch of this point appears after the comments.)

  5. Yes, I’m commenting on a year-old thread. Infiniband has significant advantages over traditional Ethernet. Redundancy across failed switches first comes to mind, and the length of runs vs. copper is much better as well.
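
Since the arithmetic in comment 2 keeps coming up, here is a minimal back-of-the-envelope sketch of it. The 8b/10b encoding factor is the standard one for SDR/DDR/QDR Infiniband; the ~86% PCIe overhead figure is simply the rough number quoted in the comment, not a measurement.

```python
# Back-of-the-envelope throughput estimate for a QDR Infiniband 4x link.
SIGNALING_RATE_GBPS = 40.0   # marketed QDR 4x signaling rate
ENCODING_EFFICIENCY = 0.80   # 8b/10b encoding: 8 data bits per 10 line bits
PCIE_EFFICIENCY = 0.86       # rough PCIe overhead figure from comment 2 (assumption, not measured)

data_rate = SIGNALING_RATE_GBPS * ENCODING_EFFICIENCY   # 32.0 Gb/s
effective_rate = data_rate * PCIE_EFFICIENCY            # ~27.5 Gb/s

print(f"Data rate after 8b/10b encoding: {data_rate:.1f} Gb/s")
print(f"After PCIe overhead:             {effective_rate:.1f} Gb/s (~{effective_rate / 8:.1f} GB/s)")
print(f"Fraction of 100 Gb/s:            {effective_rate:.1f}%")
```

Running it gives roughly 32 Gb/s after encoding and about 27.5 Gb/s (~3.4 GB/s) after the PCIe haircut, which is where the “call it 27 Gb/s” number above comes from.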
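
Comment 4’s memory-bandwidth point can be roughed out the same way. The sketch below assumes a single memory channel (8 bytes wide) and that each payload byte crosses the memory bus roughly three to four times in the conventional stack (NIC DMA write, kernel-to-user copy read and write, application read); that transit count is an assumption for illustration, not a number from Van Jacobson’s talk.

```python
# Rough memory-bandwidth estimate for a single 10 Gb/s TCP stream.
# ASSUMPTION: each payload byte crosses the memory bus ~3-4 times in the
# conventional stack (DMA write, copy read, copy write, application read).
# Peak figures are theoretical single-channel numbers.

LINE_RATE_GBPS = 10.0
payload = LINE_RATE_GBPS / 8.0        # 1.25 GB/s of payload data

ddr333_peak = 333e6 * 8 / 1e9         # ~2.7 GB/s (single channel, 8 bytes per transfer)
ddr800_peak = 800e6 * 8 / 1e9         # ~6.4 GB/s

for transits in (3, 4):
    needed = payload * transits
    print(f"{transits} bus transits/byte -> need ~{needed:.1f} GB/s "
          f"(DDR333 peak {ddr333_peak:.1f} GB/s, DDR800 peak {ddr800_peak:.1f} GB/s)")
```

Under those assumptions a line-rate stream needs roughly 4 to 5 GB/s of memory bandwidth, which is beyond DDR333’s ~2.7 GB/s peak but comfortably within DDR800’s ~6.4 GB/s, consistent with the estimate in the comment.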
