Raw, end-user-accessible performance for data motion and data storage is rapidly becoming one of the most important problems in any HPC system. We’ve been talking about it for years, but it’s getting far more important by the day. And not just in HPC.
I just spent a long time on the phone with someone from a government agency talking about their need for high performance storage and analytical capability. We hear these refrains quite commonly: FC4/FC8 is simply too slow for their workloads, and they need to go faster. Can we help?
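A back-of-envelope calculation shows why a single Fibre Channel link runs out of steam. The per-link rates below are the usual usable figures after 8b/10b encoding (roughly 400 MB/s for FC4, 800 MB/s for FC8); the 100 TB data set size is a hypothetical example, not this customer’s actual workload.

```python
# Rough sketch: time for a single FC link to move a large data set.
# Link rates are approximate usable bandwidth after 8b/10b encoding;
# the 100 TB figure is an illustrative assumption.

LINK_MB_PER_S = {"FC4": 400, "FC8": 800}
DATASET_TB = 100
dataset_mb = DATASET_TB * 1_000_000  # decimal units: 1 TB = 10^6 MB

for link, rate in LINK_MB_PER_S.items():
    hours = dataset_mb / rate / 3600
    print(f"{link}: {hours:.1f} hours to move {DATASET_TB} TB")
```

Even a fully saturated FC8 link needs well over a day for 100 TB, before you account for protocol overhead or contention from the analysis jobs reading the data back out.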
Digging into this, listening to the problem, it’s self-evident that what this person is describing is very much an HPC problem, just couched outside of the HPC lingo.
They have huge data ingress, then a distributed analysis on these data sets. They have discovered, as have many others, that the IT technologies and designs of old are simply not up to the task of moving/storing/retrieving large amounts of unstructured data quickly.
These are not corner case MPI problems with thousands of readers and writers, but specific software tools performing analytical processing on large data sets in a distributed manner. Getting the data in is one aspect of the problem, and it isn’t easy. Distributing access to the data to the compute engines is another aspect which isn’t terribly easy either.
To make all this work, you need really fast underlying technology, and you need a design that enables you to scale. Scaling can’t be designed in as an afterthought. You have to start from the assumption that your loads will scale in a variety of ways, and then act and design from there.
Centralized filer head designs are fundamentally limited … they are great for older IT workloads, but as this next generation of huge data set analytics comes online … these designs show their age.
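A sketch of why the centralized design hits a wall (the node counts and per-node rates below are illustrative assumptions, not measurements): a single filer head caps aggregate throughput at whatever its own network and backplane can deliver, no matter how many clients you add, while a scale-out design adds bandwidth with every storage node.

```python
# Illustrative only: aggregate streaming bandwidth of a single filer head
# versus a scale-out parallel storage design. All rates are assumed figures.

FILER_HEAD_GB_PER_S = 1.0   # one head: a fixed ceiling, regardless of clients
PER_NODE_GB_PER_S = 0.5     # each scale-out storage node (assumed)

for nodes in (2, 8, 32):
    scale_out = nodes * PER_NODE_GB_PER_S
    print(f"{nodes:>2} storage nodes: filer head {FILER_HEAD_GB_PER_S} GB/s, "
          f"scale-out {scale_out} GB/s aggregate")
```

The point isn’t the specific numbers; it’s the shape of the curves. One is flat, the other grows with the system.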
You need file systems that scale hard and fast. No single points of information flow.
More later, but we are hearing the same thing, again, and again, and again ….