While the day job builds (hyperconverged) appliances for big data analytics and storage, our partners build the tools that let users work easily, and very rapidly, with astounding quantities of data, without a great deal of code.
I’ve always been amazed at the raw power in this tool. Think of a concise functional/vector language, coupled tightly to a SQL database. It’s not quite an exact description; have a look at Kx’s website for a more accurate one.
A few years ago, I took my little Riemann Zeta Function test for a spin with a few languages, including kdb+, just to play with it. I am doing some more work with it now (using the 32-bit version for testing/development).
That said, you need to see what this tool can do. Have a look at Fintan Quill’s (@FintanQuill) video of a talk/demo he gave at a meetup in Seattle in 2015. The demos start around the 20-minute mark.
The people in the audience appear to be blown away by the power they see, and while we like to think our machine (running the demo database) has something to do with it, kdb+ is absolutely fantastic for dealing with huge quantities of time series data. You need to be able to store, move, and process that data quickly (which is where the machine comes in), but what really stands out is how succinctly you can work with the data: queries that take Spark, Hive, and the like many more steps and lines of code, running on many more machines, are often a single line of q.
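To give a flavor of that succinctness, here is a small sketch in q. The `trade` table below is entirely hypothetical (not from the demo in the video); the point is how little code the aggregations take.

```q
/ a hypothetical in-memory trade table with a million random rows
n:1000000
trade:([] time:asc n?24:00:00.000; sym:n?`AAPL`MSFT`IBM; price:n?100f; size:n?1000)

/ last price and total volume per symbol -- one line of qSQL
select last price, sum size by sym from trade

/ volume-weighted average price per symbol in 5-minute buckets
select vwap:size wavg price by sym, 5 xbar time.minute from trade
```

Each query is one line against a million rows; the equivalent in a typical Spark or Hive pipeline would involve considerably more scaffolding.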
Tremendous power and power density save money and time. Packing a huge amount of power into a small package lets you use fewer packages to accomplish what a system requiring many more packages would. The cloud model says “spin up more instances to get performance through sharding and parallelism,” while kdb+ and the day job suggest “start with very performant and efficient tools, so you need fewer of them to do the same work, which costs you less time, effort, and money.”
It is, in case you are not sure, the basis for the day job’s Cadence appliance. Massive firepower. Itty bitty box.
Imagine what you could do with this sort of power …