If you asked me years ago, I would have said, yes, of course it did.
Now I am having second thoughts.
Our processors have 4, 6, 8, or 12 cores, with 16 and more on the way, all sharing a set of pipes to RAM. You can program these either with a distributed-memory interface like MPI, or with a much simpler shared-memory interface like OpenMP.
Vector processors a la GPUs and Knights Ferry/Bridge are coming out, and they are little more than massive numbers of processing elements (PEs) attached to shared memory.
As 48+ cores arrive in deskside machines, the refrain I am hearing from customers is: why get a cluster when you can have almost 50 processors next to your desk? The latter is much easier to manage than the former.
I dunno … I think shared memory simply took time to regroup. Some things you will only be able to do with distributed-memory coding, but I suspect shared memory and multi/many-core programming will get more of the attention going forward.