At my day job, we have delivered a number of dual-processor workstations with really nice nVidia graphics to customers over the years. Recently, a customer bought two 4-core workstations with really nice nVidia graphics, 32 GB of RAM, and 1 TB of RAID disk. Then another asked for 4-core and 8-core workstations, which we provided. Now one of our larger customers is asking for 8-core and 16-core workstations with 32 GB of RAM.
The market has been consolidating behind a few OSes for a while. Reducing the number of ports reduces ISV costs. It reduces end-user management headaches. Curiously enough, it also reduces the engineering costs of the relevant hardware vendors, but don’t tell a few of them that, as they still perceive value where they feel they can be different. Unfortunately, I have a sense of mayhem in two of the converged OSes, Linux and Windows. Sure, some might try to lump Solaris in here as an alternative, but most of us know it isn’t. The market has told us.
Tracking this one down was fun. It turns out someone, either in SuSE-land or Linux-land, has decided that HZ is a dangerous macro to expose to users.
Dangerous. Therefore, they wrap it in a kernel cloak, which has the net effect of breaking large swaths of code that happen to use the quite innocuous HZ macro.
Working on simplifying and refactoring some Makefiles for DragonFly. Yeah, I will mention what it is eventually. In the Makefile, I build a bunch of Perl modules. The previous version of this system had a pre-pulled set of CPAN modules, and all the bits had file-system names like
Which is nice and easy to deal with. In order to make sure we can use this for updating as well, I thought it would be nice to use CPAN with the module name, minus the version. This lets us seamlessly pull down the modules from a CPAN mirror.
Too bad it doesn’t work.
Well, this might not be the most appropriate title for this. I need to explain this, but first let me point to the article/email in question. Now that I have pointed to it, I want to note that there is a deeply profound set of statements in this email, which seems to be a series of responses to a discussion. Bear with me.
See here. Though apart from getting the author mixed up with the editor …
I do quite a bit of Perl programming work in support of our products. Perl is sometimes (mistakenly, IMO) called a scripting language; it may have been designed for that in the past, but it has evolved over the decades into something far more powerful.
But it also has this … well … implicit sense of humor about it. Maybe this is what pisses off people advocating other programming languages. I dunno.
I’ve been talking for the better part of the last decade about one of the more serious problems looming for HPC, and frankly for all computing. Call it a data deluge or exponential data growth, whatever you would like. At the end of the day it means that you have more data than before, and it is growing faster than you think. Usually much faster than Moore’s law, which gives you an order of magnitude about every 6 years.
This issue was highlighted for us in some recent conversations, and by the acquisition of a JackRabbit by a world-renowned genome sequencing center in the midwestern US, specifically to handle data ingress from sequencing units. Getting 1 TB of new data per week places some very strict lower limits on performance. Not just disk performance.
Installer works. It loads a compute node mostly automagically (one step by hand during the debugging phase; we could automate this trivially) with OpenSuSE 10.2 x86_64, OFED 1.2-rc4, … It sets up and configures addresses, mount points, user authentication (using NIS for the moment; anything we can script should work fine, though LDAP would be preferred eventually), cluster queuing, yadda yadda yadda.
The goal is to enable loading and configuring any OS using PXE boot, without imaging. Some folks like imaging. Some folks do not. Count me in the latter group. We also enable running the units diskless, booting thin OSes to run VMware atop, …
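As an illustration of the no-imaging approach, a PXE install comes down to a pxelinux.cfg entry that points the distribution’s own installer at a scripted profile. Everything below — paths, the server address, the AutoYaST profile name — is hypothetical, not taken from our actual setup:

```text
# Hypothetical pxelinux.cfg/default entry: boot the SuSE installer kernel
# over the network and hand it an AutoYaST profile, so the node installs
# and configures itself with no disk image involved.
DEFAULT compute-node
LABEL compute-node
  KERNEL suse-10.2-x86_64/linux
  APPEND initrd=suse-10.2-x86_64/initrd \
         install=http://192.168.1.1/repo/suse-10.2-x86_64 \
         autoyast=http://192.168.1.1/profiles/compute-node.xml
```

Because the node is installed by the distribution’s installer rather than copied from a golden image, the same mechanism works for any OS with a scriptable installer, and it is the natural starting point for the diskless/thin-OS variants mentioned above.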