more from BioIT World Expo in Boston

Long day, spent most of it talking to people and groups. This is a small conference, attendance is ok, not heavy, not light.
Saw lots of people I know/knew. Some I met today. Met Deepak from BBGM in person, and a number of people I have conversed with in the past through email/phone. Saw a few old colleagues.
On the exhibits/discussions … some memes I see floating about, and have been hearing for a while.
Storage. Storage. Nuthin but storage. Who knew that storing terabytes of data, retrieving terabytes of data, and using terabytes of data would be hard? Ok, rhetorical. This is precisely what JackRabbit was designed to do.
BTW: for people interested, there is a little bit of paper at the booth which gives you a discount if you order a unit with the attached code. Please go visit them.
JackRabbit was designed to do this (move/store/retrieve huge data very quickly and cost effectively) in large part due to the memes I saw emerging about 2 years ago, where data growth rates were going to be more troublesome than computational demand. If you can’t store the data, why collect it? If you can’t retrieve it and distribute it to compute nodes, why try to do that?

HPC was not really there. Ok, put another way … there were a few people talking HPC. I was asking the users/non-HPC vendors where their pain points were. And HPC wasn’t it. Sure, they would like more cycles.
Put another way, the most interesting comment about HPC came from Katheryn at Schrodinger (apologies if I spelled her name wrong). She asked why people would be interested in distributed computing if their deskside machines could provide 16+ cores.
This is a tremendously important point, and as more people realize this (we have been talking about these things for a while), we might start seeing a meme emerge on what follows the cluster in HPC.
HPC moves relentlessly downmarket. Processors become cheaper and faster. Memory larger. The desktops have many processors, and multiple gigabytes of memory. So can’t we move all calculations onto this? Some might fit, some will not fit. So we can run on clusters. Unless you don’t have them. In which case you need a cluster-in-a-moment via EC2 or similar.
In which case, the barrier to getting a cluster just dropped. Very hard, and very fast.
The next discussion is how to deliver the commercial software atop this (OSS is easy, fits this model to a “T”, pun intended). A utility model (pay per use) is needed. This allows software vendors to lower the barriers to customers using their software. They don’t need to buy a cluster. Here, we can set one up for you. Fast. And you can leave it behind when you are done. You don’t have to pay for the power, upkeep, …
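The economics behind this pay-per-use argument can be sketched with a toy break-even calculation. All figures below are hypothetical assumptions for illustration, not actual vendor or EC2 pricing:

```python
# Toy comparison: owning a small cluster vs. renting one pay-per-use.
# Every number here is a hypothetical assumption, not a real quote.

def owned_cost(capex=50_000.0, yearly_opex=10_000.0, years=3):
    """Total cost of ownership over the cluster's life.

    Note: this cost is fixed regardless of how much the cluster is used.
    """
    return capex + yearly_opex * years

def utility_cost(hours_used, rate_per_node_hour=0.50, nodes=16):
    """Pay only for the node-hours actually consumed."""
    return hours_used * nodes * rate_per_node_hour

# A lab running occasional jobs: 500 hours/year over 3 years.
hours = 500 * 3
print(owned_cost())            # 80000.0 -- sunk whether used or not
print(utility_cost(hours))     # 12000.0 -- only for hours consumed
```

With these (assumed) numbers, the occasional user pays a fraction of the ownership cost, which is why the model lowers the barrier for customers who would otherwise never buy a cluster; heavy, sustained users will still cross the break-even point and prefer owning.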
Some people were very excited about this model. Some were not. But even those who weren’t did appear to agree that the model makes sense, and that they should consider it.
An interesting moment occurred when we talked to a vendor about some work at a mutual customer. Entirely unprompted, this vendor gave me an earful … with a very similar description of their … ah … issues … with this customer. Why is this interesting? Well, in a completely different market, I got the same earful from another vendor, about this very same customer. And if we simply substituted one product/vendor pair for the other, the wording about this customer’s actions was virtually identical. Since these two vendors would have trouble spelling the other’s name, never mind describing their products … I could chalk it up to a very odd coincidence. Yet, what both vendors said, unfortunately, jibed well with what I know. I didn’t tell them this, and I won’t name the vendors or the customer.
This all goes to onerous terms and conditions. It is rarely if ever a good idea to accept a bad deal. Sometimes no-deal is a better deal than a bad deal. Bad deals have a way of coming back to haunt you later on. In both these vendors’ cases, the haunting will go on for a while.
Will be on the expo floor for a short while tomorrow, then meetings, and a drive back to NY for more meetings Thursday/Friday. If you are there, look me up. I will be wearing my other JackRabbit shirt.

3 thoughts on “more from BioIT World Expo in Boston”

  1. Joe
    Was great meeting you today. Always good to put a face to a name.
    JackRabbit came up in the BioTeam talk today. The speaker also said something which made a lot of sense to me: the small cluster (10-20 nodes) is dead. You’re going to get desktop workstations with 8 cores and higher, or you are going to dial up cloud resources.

  2. When it comes to commercial software and utility computing, the question of licensing is usually the brick wall. I’m curious if any of the software vendors at BioIT were actively looking into the utility computing model?

  3. @John
    We spoke to a few. Responses ranged from “why should we” through “ok, we are interested”. The issue is the licensing model they like (macro payments with CPU/core locks) is broken for the micro payments that utility computing requires. We have opportunities here, might need to discuss offline.
    Good to meet you as well! JackRabbit interest does continue to accelerate, and we are working to rapidly increase sales volumes. Interest is broad based, not just HPC, but many markets. Always looking for new customers and to grow existing ones 🙂

Comments are closed.