Looks nice, but I still worry about memory contention

Intel announced some details on Penryn and other upcoming chips today. It looks like a sweet chip. The problem I am having is this: if Clovertown is memory-bus bound with 4 cores (2 x 2-core chips) on a number of memory-intensive workloads, won't 8+ cores be worse?

Think of this in terms of public expenditure and return on investment: if something you keep pouring money into isn't giving you the return you want or need, doesn't it make sense to stop throwing more money at it? The same goes for any limited, consumable resource. Memory bandwidth needs to scale with the number of sockets, and now with the number of cores.

I’ll want to see if they are putting memory controllers on each core, or each chip, or …

As you increase the number of requesters for a fixed-size resource, each requester's average share of that resource scales as 1/(number of requesters). This is a bottleneck. It goes back to a post I did a while ago on an idea I had about technological evolution. At some point, you will hit a wall in terms of how fast you can efficiently feed a processor core over a shared memory resource. When you hit this wall, it may become more advantageous to turn off some cores to save power than to burn additional power without adding any performance benefit.
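The 1/N sharing argument is easy to sketch numerically. A toy model, with a made-up shared-bus bandwidth figure purely for illustration:

```python
# Toy model: a fixed shared memory bandwidth split evenly among requesters.
# The bandwidth number is assumed for illustration, not a measurement.

SHARED_BW_GBS = 10.0  # total bandwidth of the shared memory bus (assumed)

for cores in (1, 2, 4, 8, 16):
    per_core = SHARED_BW_GBS / cores  # average share per requester ~ 1/N
    print(f"{cores:2d} cores -> {per_core:6.3f} GB/s per core")
```

Doubling the core count halves each core's average share, so a workload that was already bus-bound at 4 cores only gets hungrier at 8.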

When you hit that limit, additional cores don’t make sense. If you can’t use them efficiently, why use them? Moreover, if you cannot move the limit, then you need to consider other options.
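You can see where additional cores stop making sense with another toy model. All three numbers below are assumptions chosen to make the saturation point obvious, not measurements of any real chip:

```python
# Toy model of a bandwidth-bound workload hitting the shared-bus wall.
# Assumed, illustrative numbers:
SHARED_BW = 10.0  # GB/s supplied by the shared memory bus
DEMAND = 2.5      # GB/s each core wants when running flat out
POWER_W = 20.0    # watts burned per active core

def served_bandwidth(cores):
    # Aggregate useful throughput saturates at the bus limit.
    return min(cores * DEMAND, SHARED_BW)

for cores in (1, 2, 4, 8):
    t = served_bandwidth(cores)
    print(f"{cores} cores: {t:.1f} GB/s served, "
          f"{t / (cores * POWER_W):.4f} GB/s per watt")
```

With these numbers the bus saturates at 4 cores; cores 5 through 8 add power draw but zero throughput, so efficiency (GB/s per watt) only falls. That is the case for turning them off.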

It might be time to rethink memory a bit: enable scalable memory architectures, not just N banks of memory per socket or per bus, but per core. Enabling this will be hard. But it looks like we are rapidly getting to the point where we need to do it.

Hopefully I will get to play with these chips at some point.
