Make sure you look at Pervasive's data mining demo. DataRush (as indicated in the previous post) is a cool technology, and we are happy to be helping out.
Pervasive Software has a vision for data-intensive HPC that aligns well with what we have been saying. Personal supercomputing has been something we have been talking about for about 8 years, since I developed CT-BLAST. That was a tool to completely hide the pain of dealing with clusters for running one application, NCBI BLAST. Later I developed a more refined methodology to accelerate multiple applications: divide up the data set, distribute it to a parallel resource with a job scheduler, and recombine the results on the back end. The point was to make this a general framework for accelerating informatics applications … apps without significant data dependencies per iteration or instance, and little/no communication between iterations.
After that, I started re-thinking how to do this, and Wu Feng and team developed mpi-BLAST, which did what I had wanted to do, and did a better job of it, so I didn't need to re-develop this yet again. His team built a work scheduler and a data transport mechanism into the MPI code. And it scaled, very well.
Ok. So you can see I am a fan of distributing computation: break the calculation into smaller chunks, distribute them, perform the calculations, and then recombine the results.
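That scatter/compute/gather pattern can be sketched in a few lines. Here is a minimal, hypothetical Python illustration using only the standard library; the `score` function and the chunking are placeholders, not the actual BLAST or DataRush logic:

```python
from concurrent.futures import ProcessPoolExecutor

def score(chunk):
    # Placeholder per-chunk computation. In a real informatics app this
    # would be, e.g., searching one slice of a sequence database.
    # Crucially, it depends only on its own chunk -- no communication.
    return sum(chunk)

def split(data, n):
    # Divide the data set into n roughly equal chunks.
    k, m = divmod(len(data), n)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n)]

def scatter_compute_gather(data, workers=4):
    chunks = split(data, workers)
    # Distribute chunks to workers, compute each independently,
    # then recombine the partial results on the back end.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(score, chunks))
    return sum(partials)

result = scatter_compute_gather(list(range(100)))
```

The real work in a production framework is everything this sketch glosses over: the job scheduler, the data transport, and fault handling — which is exactly the software that used to have to be written by hand.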
Doing this used to require developing, and interfacing with, quite a bit of software.
This is what makes DataRush so interesting: it provides that framework, so you don't need to re-invent this wheel. You can focus on the problem and not the protocols. This is important.
But DataRush is going to beat the heck out of your IO resources. This is where ΔV and JackRabbit come in, as they provide the huge data pipes you need, at very reasonable prices.
We are glad to work with a like-minded company, providing high-performance IO resources in the ΔV to help them demonstrate the power of their vision.