SRP target oddities in RHEL/CentOS 5.1

A customer will be running RHEL/CentOS 5.1 and wants to attach a JackRabbit for high performance storage. This should be possible with iSCSI, though it looks like the single connection the iSCSI initiator uses limits performance. At first I thought it was card related, though I now see multiple other cards exhibiting very similar performance issues. In fact our numbers are remarkably similar, though their performance was measured relative to a ramdisk, and ours relative to JackRabbit disk. Relatively speaking, we are getting very good (real) iSCSI performance.
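For what it is worth, the open-iscsi initiator does expose a few per-session queueing knobs in /etc/iscsi/iscsid.conf. The values below are illustrative only, not what we actually ran:

```
# /etc/iscsi/iscsid.conf (fragment) -- illustrative values, not our settings
# Maximum commands queued per session
node.session.cmds_max = 128
# Queue depth per LUN
node.session.queue_depth = 32
```

Deeper tuning than this is limited, which is part of why the single-connection ceiling is hard to get around.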
The folks who make this card have been very helpful, and quite patient while I tried to iron out the issues. I wanted to compare the performance we measured with SRP, NFS over RDMA, and iSER over IB. We have some older PCI-X Mellanox cards from an earlier set of tests, so I wired them up and started to build the SRP target.

To do this, you have to build SCST. And OFED. And then you have to patch the ofa_kernel tree, and rebuild that.
It isn’t a simple process, but, at the end, it worked. I used the debugging build of SCST (rev 253), and the baseline CentOS kernel. Performance wasn’t great. Actually, it was rather pathetic.
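For the record, the rough sequence looked something like the sketch below. Paths, revisions, and the patch name are from memory and will vary with your OFED and SCST checkouts; treat this as an outline, not an exact recipe:

```shell
# Illustrative outline only -- paths and patch names are assumptions

# 1. Check out and build SCST (we used rev 253 of the debugging tree)
svn co https://scst.svn.sourceforge.net/svnroot/scst/trunk scst
cd scst/scst && make && make install

# 2. Build OFED against the running CentOS kernel
cd ~/OFED-1.3 && ./install.pl

# 3. Patch the ofa_kernel tree with the SCST srpt patch, then rebuild it
cd /usr/src/ofa_kernel
patch -p1 < ~/scst/srpt/patches/srpt_ofed.patch   # hypothetical patch name
make && make install

# 4. Build and load the SRP target module
cd ~/scst/srpt && make && make install
modprobe ib_srpt
```

Getting step 3 right is the part I kept tripping over; the ofa_kernel tree has to be rebuilt after the patch or the srpt module will not build against it.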
I am not sure why. I ran the diagnostics, ib_rdma_bw and ib_rdma_lat, and got ~1.3 GB/s bandwidth and about 4 microseconds of latency. These are fine for this app. The oddity was the size of the LUN the initiator reported. This was a 6 TB partition, yet the initiator reported it as 2.099 TB. Hmmm…. Also, unlike with open-iscsi, I could not tune the queue depth or other items, so I had fewer knobs I could turn.
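One plausible culprit for the odd size: if capacity is being reported through the 32-bit LBA field of READ CAPACITY(10) somewhere in the stack, a ~2 TB ceiling falls straight out of the arithmetic. A quick sanity check (the 512-byte sector size is an assumption):

```shell
# 32-bit LBA count times 512-byte sectors gives the classic ~2 TB cap
sectors=$(( 2**32 ))
bytes=$(( sectors * 512 ))
echo "$bytes bytes"    # 2199023255552 bytes, i.e. roughly 2.2 TB
```

Not an exact match for the 2.099 TB the initiator showed, but suspiciously in the same neighborhood for a 6 TB partition.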
My thought was that somehow the SRP traffic was going over the IPoIB channel. IPoIB is the Internet Protocol run atop IB. You run your usual stuff, and it just works. The problem is that it is slow, maybe 50-90% faster than gigabit Ethernet. This isn’t what you really want. You want the native performance (10x GbE, not 1.4x GbE).
SRP gives you SCSI over RDMA. Sort of like iSCSI, but a bit simpler, and higher bandwidth (less time is spent in the protocol).
This is where SCST comes in. SCST is a high performance SCSI target layer: it lets you export local storage over a number of transports, including iSCSI, SRP, and numerous others.
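In the SCST revision we were using, configuration went through its /proc interface. Something like the sketch below exports a block device via the vdisk handler; the device name, group name, and exact command syntax are from memory and may not match your revision:

```shell
# Illustrative sketch of exporting a device through SCST's /proc interface
# (rev-253-era syntax; names here are assumptions)
modprobe scst
modprobe scst_vdisk

# Register /dev/sdb with the vdisk handler under the name "disk0"
echo "open disk0 /dev/sdb" > /proc/scsi_tgt/vdisk/vdisk

# Assign it to LUN 0 in the default security group
echo "add disk0 0" > /proc/scsi_tgt/groups/Default/devices
```

With ib_srpt loaded on top of this, the initiator side then discovers the target via the SRP tools in OFED.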
The combination is rumored to deliver 800+ MB/s to RAM disks. I think we can likely feed it that fast from real disk.
So now I am rebuilding it from the development SCST tree. Building the OFED part was complex; I kept screwing that up. Now it is all built, so let’s see if I did it right this time. My expectation is that I will be able to beat the earlier iSCSI results. If not, that is good to know too.