We’ve been doing the startup thing for a hair under 13 years now. Most of that time we’ve been self-funded, and recently we took a small investment in a friends and family round (angel.co link here).
What occurs to me, after we soft-announced our 100GbE results via a Mellanox PR today, is that we’ve spent that whole time building the kinds of high performance platforms that let end users do bigger and better things. Our focus has always been on performance, and on doing everything possible to make sure end users can make effective use of it.
Performance isn’t as simple as strapping in an SSD or NVMe and calling it a day. Good performance comes from good design and good implementation. You can’t start out with a poor design, add software, and get an instant big data or hyperconverged system (though some folks would really like you to believe you can). Performance is an absolute necessity now and going forward, and crappy designs just won’t play.
In the same vein, as much as some folks might claim, this is simply not a race car.
Sort of like BASF, we don’t make the products our customers make; we make it possible for them to deliver those products faster, better, and more accurately, by providing overwhelming computational, storage, and networking firepower. Providing this capability is what we live for. This is our reason for existence.
And it’s pretty cool.
We aren’t a network company, but our siRouter may well be one of the fastest SDN devices on the market.
Our storage products are being used to accelerate genomics computing at many sites.
Our hyperconverged cloud products are the basis of public clouds.
We like people to think about what they could do if we reduced the impact of I/O, computational, and data motion wait times, and enabled more efficient computing and storage.
I am very positive about the direction we are going, and I gotta say, it’s nice to see industry and users align along the directions we’ve been talking about for the last decade plus.
Data motion is hard, so exploit data locality as much as you can, and when you can’t, have massive data pipes and data flows so you can move a metric ton of it per unit time.
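To make the locality point concrete, here’s a minimal sketch (my illustration, not anything from our products): summing the same 2D array two ways. The row-major walk touches memory sequentially and is cache-friendly; the column-major walk strides across rows and is cache-hostile. Same answer, very different memory-access patterns, and on large arrays the difference in wall-clock time is substantial.

```python
def sum_row_major(m):
    """Sum a 2D list walking each row in order: contiguous access, good locality."""
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    """Sum the same 2D list column by column: strided access, poor locality."""
    total = 0
    rows, cols = len(m), len(m[0])
    for j in range(cols):
        for i in range(rows):
            total += m[i][j]
    return total

# Tiny demo matrix holding the values 0..11.
m = [[r * 4 + c for c in range(4)] for r in range(3)]
print(sum_row_major(m))  # 66
print(sum_col_major(m))  # 66, but via a cache-hostile traversal
```

Both functions compute the same sum; the point is that when the data no longer fits in cache, the traversal order (i.e., how well you exploit locality) dominates the cost, not the arithmetic.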
Now off to class before I am late …