Ok, this has been bouncing around in my head for a while now. Been trying to work up something to really describe it correctly in terms of a mathematical model. I have an idea, but too little time to work on it.
Here is the hypothesis. Information technologies gradually evolve to a point where their performance is fundamentally limited by their interconnection bandwidth.
A recent example of this is the multicore chip. No matter how much bandwidth you throw at something, if you hold that bandwidth (the fixed resource) constant and simply increase the number of cycles available, or, if you prefer, the "size" of the resource, then at some point resource contention will dominate, and you have to actively work to hide communication behind computation.
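To make that concrete, here is a toy model (the numbers and function are my own illustration, not from any benchmark): assume communication overlaps computation perfectly, so runtime is whichever resource is the bottleneck. With bandwidth held fixed, extra cores stop helping the moment compute time drops below the transfer floor.

```python
def runtime_s(cores, total_flops, flops_per_core_s, total_bytes, bandwidth_bytes_s):
    """Toy model: perfect overlap of computation and communication,
    so runtime is the slower of the two resources."""
    compute_s = total_flops / (cores * flops_per_core_s)
    transfer_s = total_bytes / bandwidth_bytes_s  # fixed, shared bandwidth
    return max(compute_s, transfer_s)

# 1e12 flops of work on 1 Gflop/s cores, 100 GB of traffic over a 10 GB/s bus
for n in (10, 50, 100, 200, 400):
    print(n, runtime_s(n, 1e12, 1e9, 100e9, 10e9))
# runtime stops improving once compute time falls below the 10 s transfer floor
```

Past 100 cores in this sketch, every added core is pure contention: the bus is the machine.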
Another example of this is the large disk drive. Divide the disk's capacity by its read/write speed. For example, a 750 GB disk that can be read at 0.075 GB/s takes about 10,000 seconds to read end to end (a little less than 1/8 of a day).
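That arithmetic is just capacity over bandwidth; a quick sketch (the 750 GB and 0.075 GB/s figures are the ones above, the rest is illustrative):

```python
capacity_gb = 750.0
read_rate_gb_s = 0.075              # sequential read bandwidth
scan_s = capacity_gb / read_rate_gb_s
print(scan_s)                       # 10000.0 seconds to read the whole disk
print(scan_s / 86400.0)             # ~0.116 of a day, a little less than 1/8
```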
This is already visible in the Clovertown benchmarks. Resource contention, due to a single shared memory system, is proving to be a significant disadvantage for memory-intensive codes. Extra cores add little, and could in fact reduce overall performance by increasing contention.
We need to start thinking about building systems that add bandwidth as they scale up: more bandwidth as they add disk platters, more bandwidth as they add cores.
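A toy model of that design point (numbers are illustrative, not measured): if aggregate bandwidth grows in proportion to the number of cores instead of staying fixed, the communication floor scales away and runtime keeps falling as cores are added.

```python
def runtime_s(cores, total_flops, flops_per_core_s, total_bytes, bw_per_core_bytes_s):
    """Toy model where bandwidth scales with core count, so neither
    resource saturates as the system grows."""
    compute_s = total_flops / (cores * flops_per_core_s)
    transfer_s = total_bytes / (cores * bw_per_core_bytes_s)
    return max(compute_s, transfer_s)

# 1e12 flops of work on 1 Gflop/s cores, 100 GB of traffic,
# and 1 GB/s of dedicated bandwidth per core
for n in (100, 200, 400):
    print(n, runtime_s(n, 1e12, 1e9, 100e9, 1e9))
# 100 -> 10 s, 200 -> 5 s, 400 -> 2.5 s: scaling no longer stalls
```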
I'll think on this some more, and maybe talk with friends locally to see whether this idea has been explored in depth.