I have opined here that I do not believe that Solaris will overtake Linux’s lead in HPC clusters. This does not mean that I don’t think it can have a role.
Basically, imagine you have to deliver a service. Something like Google. The end user of Google doesn’t care what OS the underlying software runs on; they care about the experience of using it. Same with the end user of Yahoo, Slashdot, Digg, … .
The end user doesn’t care if Google et al. are even running an OS, or an army of bash scripts listening on port 80. What they care about is that the systems Google et al. put up work simply, easily, and with minimal fuss.
That is, as an appliance.
If Solaris 10 were focused on specific point problems in the HPC cluster space, I think some interesting possibilities could emerge.
Enterprise file systems. When your cluster gets big enough, NFS from a single point NAS server in the cluster simply doesn’t cut it. This is when you want/need to separate the enterprise side (home directories, highly reliable storage) from the scratch space (very high speed, less reliable storage). Some people may insist you can do both on a single file system. I haven’t yet seen an implementation that delivers both the very high performance and the very high reliability well. If you know of one, drop me a line.
Imagine, if you will, a box, say something like the Sun Thumper box (very cool name BTW), with a number of gigabit/InfiniBand ports on it. Install it in the cluster. Configure it with a web page. Start serving files. Who cares what’s under the hood? The important thing is that setting up the highly reliable FS is easy.
This is something that Solaris can likely do well, and it is a strength it could play to, though my concern would be InfiniBand driver support. Disks could be made available via iSCSI or other mechanisms. Something like this exists in the excellent work of OpenNAS. It might be worth the Sun folks looking at using it. It would need to support Samba, NFS, iSCSI, … and a few others. All within the realm of possibility.
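To give a feel for how close Solaris could already be to this appliance model, here is a rough sketch of what the under-the-hood setup might look like using ZFS. The pool name, dataset names, and device names are placeholders, and the `shareiscsi` property only appeared in later Solaris builds, so treat this as an assumption-laden illustration rather than a recipe:

```shell
# Sketch only: pool name "tank" and device names c1t0d0 ... are placeholders.

# A RAID-Z pool for the "highly reliable" enterprise side.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Home directories, exported over NFS by flipping one property.
zfs create tank/home
zfs set sharenfs=on tank/home

# A block volume exported over iSCSI (shareiscsi is an assumption about
# which Solaris release the appliance would ship with).
zfs create -V 100g tank/scratchvol
zfs set shareiscsi=on tank/scratchvol
```

The point of the sketch: if serving files reduces to a handful of property settings like these, the web page the end user sees is just a thin wrapper over them, which is exactly what makes the appliance idea plausible.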