Landman’s Laws

I was reminded on an email list I (normally) lurk on, currently in the process of self-destructing over paranoid tin-foil-hat discussions mixed with political overtones and overcompensation for inferiority complexes, that I have some laws I’ve been using for ages. No, not the inane proposal from the failed cyberbully on that list, but real ones I’ve been using for decades now.

Without further ado, here they are.

  1. Landman’s (first) Law: Portable code is not fast. Fast code is not portable. This came from a realization a while ago: when we recoded specific regions of slow code to run much faster than before, that is, when we tuned it for a particular platform, moving it to a different platform (same architecture, but different features) would often result in lower performance. Conversely, when we took a good description of an algorithm in terms of code and compiled it on various platforms, it didn’t run at optimal speed, but we didn’t have to rewrite it to make it work. You can code at a very low level and have your code tightly tied to your platform, or code at a higher level, giving up performance for the ability to run it everywhere.
  2. Landman’s (second) Law on benchmarking: Getting performance measurements done right is hard, especially if you don’t understand your code or your hardware, or if you aren’t really measuring what you think you are measuring. There is a corollary to this: people with strong opinions will often ignore real data to bolster those opinions. And another corollary: people who don’t understand what they are actually measuring are usually the most vocal about the “quality” of their measurements. FWIW, I see the job of a scientific or engineering benchmarker as being to very carefully analyze the benchmark and its impact upon a system, test null hypotheses, and show that there is a causal relationship between what they measure and their inputs and program runs. Sadly, I seem to be one of only a handful of people who believe in the sanctity of repeatable, reproducible, and open benchmarks.
  3. Landman’s (third) Law on internet discussions: Given a sufficiently long thread, the signal-to-noise ratio (S/N) will drop precipitously, invariably as the wearers of tin-foil hats come out of the woodwork. They act as the self-appointed and self-selecting guardians of “freedom”, implying conspiracies and bashing people who just want to see the list return to its technical foundations. Often there will be political overtones: attacks on people, countries, and cultures; threats; cyberbullying. We witnessed all of these on an (otherwise very good) firewall/router list this past week. The happy corollary is that since the tin-foil-hat wearers do self-identify, they are often easy to filter. A lemma is that tin-foil-hat wearers often exhibit this behavior elsewhere, so you may take their participation in a list as indicative of its tendency to go down rat holes.
  4. Landman’s (fourth) Law on generalization: I’ve been using this recursive joke for years, but it is a law, and it is true … Gross overbearing generalizations tend to be incorrect. Think about it. After a while you’ll see the funny part.
  5. Landman’s (fifth) Law on crappy designs: A bad design or implementation is going to suck most of the time.

     • Corollary to the above law: No amount of tuning of a bad design will turn it into a good design.
     • Corollary to the Corollary: No amount of money spent trying to tune a bad design will turn it into a good design, until you bit-can the whole kit-n-caboodle and start fresh with a good design. Also called “Multiply by zero and add what you need.” Also known as Technical Debt.

There are a few others, but these are the start. I plan to add to them over time.
