Cluster 10GbE: still in the future

John West at InsideHPC asks about 10 GbE on clusters. The point I made (in two posts), and one we verify every time we spec out a system for a customer, is that 10 GbE is still priced higher per port than IB.

This doesn’t mean we don’t like 10GbE. On the contrary, it is simpler/easier to deal with.

But it comes at a price penalty, and a non-trivial one at that.


Again, as we have noted many times, I would love to be wrong about this. Sadly, I am not.

A 24-port 10GbE switch runs into the $400/port price range. A 24-port DDR IB switch runs into the $167/port price range. Like it or not, the latter is less expensive than the former. Even with all other costs being about the same (they are not, as 10GbE also requires some sort of SFP/XFP type transceiver), this works out to a price disadvantage for 10 GbE.

Again, we would love to be wrong. Really.

I like 10 GbE. It is simpler. But for 128 nodes, the $233/port cost difference adds up awfully fast.
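To make that scaling concrete, here is a minimal back-of-the-envelope sketch (Python, just for the arithmetic) using the rough per-port figures above. It counts only switch ports, one per node, and ignores cables, NICs, and any oversubscription; the numbers are the approximations from this post, not vendor quotes.

```python
# Switch-port cost gap, using the rough per-port figures from this post.
PORT_COST_10GBE = 400    # USD/port, 24-port 10GbE switch (approximate)
PORT_COST_DDR_IB = 167   # USD/port, 24-port DDR IB switch (approximate)
NODES = 128              # assume one fabric port per node

delta_per_port = PORT_COST_10GBE - PORT_COST_DDR_IB   # ~$233/port
fabric_premium = delta_per_port * NODES               # switch-side premium only

print(f"Per-port premium for 10GbE: ${delta_per_port}")
print(f"Switch-port premium at {NODES} nodes: ${fabric_premium:,}")
```

At 128 nodes that is roughly $30K on switch ports alone, before you touch cables or NICs.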

Is the simplicity of the stack worth the price difference? I don’t think end users care about the simplicity of the stack. They care about whether it works, and what it costs for that work.

Which is why there is an issue.

Again, we would love to be wrong on this. So if you know of a good, low-priced 10 GbE switch at a similar price to the DDR IB switches, by all means, please … let me know.



7 thoughts on “Cluster 10GbE: still in the future”

  1. I agree with you that DDR IB switches are less expensive per port than 10GbE switches at the moment. However, it is not correct that 10GbE requires “some sort of SFP/XFP type transceiver” that adds to the price difference. Multiple vendors are shipping switches and NICs that support SFP+ direct attach twinax copper cables, which are pre-terminated with an SFP+ connector on each end, and are less expensive than Infiniband X4 CX4 cables.

  2. @Nathan

    Ok, work with me on this.

    The motherboards that I am aware of shipping with 10 GbE NICs all use CX4. The switches that I am aware of use SFP/XFP type connectors. I am not aware of any non-CX4 motherboards. Are you?

    So you need a cable to connect the CX4 motherboard to the SFP+ on your switch. Correct?

    Your own Arista data sheets (Transceiver_datasheet7.pdf) suggest that you don’t have CX4-to-SFP+ cables. You would need to fashion such a thing, and do so with the *FP transceivers.

    Now I am looking at the Myricom website for their price list (I know and have worked with them before, and they are quite reasonable on pricing), and they have the *FP transceivers there. They also have CX4 to mini-CX4 and other variants. Their cables are a bit more expensive than the ones we get, but that’s fine.

    It’s about $60 for DDR CX4 cables. I don’t see *FP to CX4 being less expensive.

    So, my point stands, quite well.

    If you can show me:

    Motherboards with SFP+ twinax cable support for 10GbE MB-hosted NICs
    A source for twinax SFP+ cables at around $60 per 2 m cable
    A 10 GbE switch that comes in right around $167/port

    Then I would believe that you have achieved parity with the available DDR IB platforms.

    Until then, you need to add in a PCI-e card to get the SFP+ interconnects on the unit. That adds roughly $500 USD per node versus a prospective 10 GbE port on the MB.

    Again, understand that I want 10GbE on the motherboard. While IB is nice, the IPoIB performance leaves quite a bit to be desired, and the 10 GbE stack is simpler.

    But the costs, which are undeniably higher for 10GbE, scale up against it as node counts grow. Unfortunately for 10 GbE, this is currently an unassailable situation. Until 10GbE switch pricing reaches parity with DDR IB pricing, there will be cost issues, and HCA/NIC pricing comes on top of the compute node cost.
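    To put a rough number on how those per-node adders compound, here is a sketch assuming one fabric port and one add-in NIC per node, using the approximate figures in this thread (~$233/port switch premium, ~$500 PCI-e SFP+ NIC); these are estimates, not quotes.

    ```python
    # Rough per-node 10GbE premium today, per the approximate figures in this thread.
    NODES = 128
    SWITCH_PREMIUM_PER_PORT = 400 - 167   # ~$233/port, 10GbE vs DDR IB switch
    NIC_ADDER_PER_NODE = 500              # ~$500 PCI-e SFP+ NIC vs an on-MB 10 GbE port

    per_node_premium = SWITCH_PREMIUM_PER_PORT + NIC_ADDER_PER_NODE
    cluster_premium = per_node_premium * NODES

    print(f"Per-node premium: ${per_node_premium}")   # ~$733/node
    print(f"At {NODES} nodes: ${cluster_premium:,}")  # ~$93,824
    ```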

    Landing the NIC on the motherboard is the way to go for these. There are few choices, though.

    There may be a few systems out there that do SFP+ to the motherboard NIC. If there are, by all means let me know who has them.

  3. I didn’t dispute the main point of your post and I haven’t stated that 10GbE is at cost parity with DDR IB platforms. It’s clearly not, if only because the IB switches are less expensive per port at the moment, as I acknowledged in my previous comment.

    I only wanted to dispute the notion that 10GbE requires some form of transceiver that is more expensive than IB cabling. FWIW, Arista Networks is a source for twinax SFP+ cables at approximately the $60/2m price point you ask about.

    I don’t see any mention of requiring the 10GbE NIC be on the motherboard in your original post, so I wasn’t addressing that. I am not aware of a currently shipping motherboard with 10GbE SFP+ on-board, but I expect we’ll see those become available soon. I also do not think any vendor is shipping an SFP+ to CX4 cable.

  4. @Nathan

    Unfortunately, for MB-based units you will need transceivers today.

    When MB vendors get around to supporting SFP+, we can have the SFP+ to SFP+ discussion. Until then we are stuck with transceivers.

    I for one would love to see more 10 GbE on the MB. Really.

  5. Woven Systems has a 144-port, Layer 2, line-rate 10GbE switch with CX-4. That, combined with some cheap CX-4 cables ($100 for 10 m?), would be interesting.

    I’m using 2U 8-core servers with twelve 1 TB disks and 16 GB RAM each. A 128-node cluster should come in somewhere around $500K. Another $150K for a 10G network doesn’t sound too bad, especially if it speeds your jobs up by 2x or better.
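    To sketch that cost/benefit in rough numbers (the $500K, $150K, and 2x figures above are estimates, and the speedup is assumed rather than measured):

    ```python
    # Crude price/performance comparison using the round numbers above.
    BASE_CLUSTER_COST = 500_000   # USD, ~128 nodes with a baseline network (estimate)
    NETWORK_COST = 150_000        # USD, adding the 10G fabric (estimate)
    SPEEDUP = 2.0                 # assumed job speedup from the faster network

    baseline = BASE_CLUSTER_COST / 1.0                       # cost per unit of work, baseline
    with_10g = (BASE_CLUSTER_COST + NETWORK_COST) / SPEEDUP  # cost per unit of work, 10G

    print(f"Relative cost per unit throughput, baseline: {baseline:,.0f}")
    print(f"Relative cost per unit throughput, 10G:      {with_10g:,.0f}")
    # 650K / 2 = 325K: if the 2x speedup holds, the 10G cluster does more work per dollar.
    ```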
