# Wherefore art thou, 10GbE?

For quite a while, we have been hearing about how great 10GbE is. I like the idea: it is just Ethernet. Plug it in (with CX-4 … ) and off you go.

There are only a couple of flies in this particular ointment.

- **Cost:** Per-port costs of 10GbE are huge. The NICs run in the thousands of USD, and the switches … well … let's not go there.
- **Density:** 8-port switches are great. Really. Now I want to build a cluster with 64 nodes. How does an 8-port switch help? Oh, right, I have to build a Clos or similar network. Given the huge per-port switch costs, this isn't cost effective. I could always just buy a Force10 E1200 and use its 10 GbE line cards. Never mind that the cost of the switch and cards may be 4-10x the cost of the rest of the system. Pay no mind to that.

What we need are port costs (NIC + switch) in the several-hundred-USD range for 10GbE to be a meaningful cluster technology. Infiniband is there now, at about $1k/port (switch + NIC). For codes that need it, and for backbones, it is a great technology for clusters. The stack is a little heavy (OFED-1.2, anyone?) and it doesn't build correctly on all distros (it is RPM-distro focused), but other than that …
I expect Infiniband to continue to get less expensive over time as volume increases. We are likely going to include a single-port IB HCA in our JackRabbit as a standard feature from now on. NFS over RDMA, iSER, and other updated bits, including an IPoIB that works well, look like they will be worth using soon. My question is whether 10 GbE will get there and be cost effective any time soon. I would be happy if it did (I like IB, but its stack can be a pain, especially building it on a distro that is not officially supported … I would like it to work everywhere).