We need to get better at weather forecasting

Big HPC area. Yesterday, all the forecast models had us getting ~1.5 inches (about 4cm) of snow, with rain/ice afterwards.

We got (locally by me) 12+ inches (30+cm).

Ok. I don’t mind if there are large error bars. Really I don’t. But this?

I don’t know enough about the models to say anything terribly intelligent about their intrinsic accuracy, or whether they omit anything, or under- or over-predict anything … I do know enough to say that they weren’t in the same ballpark as what we got.

Part of this is HPC. More computing over finer, more accurate grids, with better models, should (I would hope) give better accuracy. This isn’t always the case, especially if there is a process that doesn’t start to matter until you shrink your scales and thus increase your data intensity. Error accumulation, to be sure, will be quite different. You might need a different renormalization technique. As with molecular dynamics, small errors in calculating the delta-v’s of particles/air masses will accumulate differently than errors in a potential energy function … one is positive definite, the other may change sign, so you have to be careful about Hamiltonian-like formalisms (not sure if they are used here, I should look).
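
To make that last point a bit more concrete, here is a toy sketch in plain Python (nothing to do with any real forecast or MD code, and I’m not claiming the production weather models use Hamiltonian formalisms): a 1-D harmonic oscillator stepped with explicit Euler versus a leapfrog/velocity-Verlet update. The explicit scheme adds a little energy on every step, so its error grows without bound; the symplectic scheme keeps the energy bounded.

```python
# Toy 1-D harmonic oscillator: unit mass, unit spring constant, energy = 0.5*v^2 + 0.5*x^2.
# Illustrates how the form of the integrator controls error accumulation.

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x

def euler(x, v, dt, steps):
    # Explicit Euler: each step multiplies the energy by (1 + dt^2), so it drifts upward forever.
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return x, v

def leapfrog(x, v, dt, steps):
    # Leapfrog / velocity-Verlet: symplectic, so the energy error stays bounded.
    for _ in range(steps):
        v -= 0.5 * dt * x   # half kick
        x += dt * v         # drift
        v -= 0.5 * dt * x   # half kick
    return x, v

if __name__ == "__main__":
    x0, v0, dt, steps = 1.0, 0.0, 0.05, 20000
    print("initial energy :", energy(x0, v0))                        # 0.5
    print("euler energy   :", energy(*euler(x0, v0, dt, steps)))     # enormous: unbounded growth
    print("leapfrog energy:", energy(*leapfrog(x0, v0, dt, steps)))  # stays near 0.5
```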

But more computing is indicated.


3 thoughts on “We need to get better at weather forecasting”

  1. This reminds me of a famous incident in the UK many years ago where the nation’s favourite weather forecaster, Michael Fish, confidently dismissed rumors of a hurricane on the evening news…
    The next morning, the southern coast of England had been flattened by… a hurricane!

  2. http://en.wikipedia.org/wiki/Lewis_Richardson

    I’ve heard several presentations over the years about Lewis Fry Richardson’s attempts to predict the weather. He was the first to apply what we would now call Geophysical Fluid Dynamics. His failure to make a good prediction in 1910 is explained rather differently in the above article than the way I have heard it described. In the above, it’s attributed to a lack of smoothing to rule out non-physical effects. But smoothing is also likely to average out exactly the fine-scale features whose absence you are complaining about. In the presentations I’ve heard, it’s been attributed to the stiffness of the differential equations involved, which leads to large computational rounding error at small time steps and too-coarse sampling intervals at large time steps.

    Other discussions I’ve heard describe weather prediction, like turbulence, as an inherently chaotic problem.

    Also interesting is the question of what you mean when you make a prediction, and how to calculate how far off you were. For instance, 12″ vs. 1-1/2″ sounds like a lot, but maybe that 12″ fell only a few miles away. In that case, “how much” might have been right, but “where”, or “when” (given the moving storm), might have actually been just a little bit off.

    Richardson was also cited by Mandelbrot (in the latter’s first book on Fractals) as a key influence. That work is also described in the above article.

    Also, in his “Statistics of Deadly Quarrels”, Richardson looked at the number of people killed per time interval in quarrels of different sizes. Large wars kill huge numbers of people but occur infrequently; bar-room brawls kill fewer per incident, but happen all the time. What does the power spectrum look like? I forget exactly, but it was something more regular than you might guess. I think Zipf’s law figured into it.

  3. @Peter

    I do appreciate the links … I’ve known about Richardson’s work on the study of conflict, but not his meteorological work. Very interesting!

    We had similar issues in some of our self-consistent field calcs on orbitals and energy levels. Without some sort of averaging scheme (it almost seemed any sort of scheme would work), we’d get these positive feedback oscillations. There’s a toy sketch of the mixing idea at the end of this comment.

    I noticed a similar problem with “long interval” MD that I tried in the early 90’s. Energy would grow in an unbounded manner. I was able to make some arguments that the error terms should average out apart from those coming in from kinetic energy terms, which would always be positive. Basically we either had to renormalize every so often (which was unphysical), or use a different formalism to avoid the error accumulation. The group switched over to a new code just after I left for SGI, which didn’t have these problems (had others 🙁 ).

    The deadly conflicts thing, and another similar analysis, got me thinking about scale-free networks. A few years ago, someone attacked our systems with some DoS (a DDoS actually). I had a hypothesis that their system was a single controller and a cloud of attackers, so the best way to deflect/defeat the attack was to induce a failure in the controller. I had others like this too. See here, and the first one and this.

    Every now and then I pine for the fjords … er … fun research problems. Sadly, I don’t have enough time to pursue them (and I don’t take enough airplane trips to work on these things anymore).
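
    Here’s roughly what I mean by an averaging scheme, in a toy sketch (plain Python, not our actual SCF code, and the map below is just an illustrative stand-in): a fixed-point iteration whose update overshoots at the fixed point. The bare iteration x <- F(x) oscillates and blows up, while simple linear mixing, x <- (1 - alpha)*x + alpha*F(x), settles right down.

    ```python
    # Toy illustration of damping/mixing in a fixed-point ("self-consistency") iteration.
    # F has slope -2 at its fixed point x* = 10/3, so the bare update overshoots and the
    # oscillation grows; mixing in only a fraction alpha of the new value tames it.

    def F(x):
        return 10.0 - 2.0 * x   # fixed point at x* = 10/3 ~ 3.333, F'(x*) = -2

    def iterate(x, alpha, steps=40):
        """x_{n+1} = (1 - alpha) * x_n + alpha * F(x_n); alpha = 1.0 is the 'no averaging' case."""
        for _ in range(steps):
            x = (1.0 - alpha) * x + alpha * F(x)
        return x

    if __name__ == "__main__":
        print("undamped (alpha=1.0):", iterate(1.0, alpha=1.0))   # diverges: about -2.6e12 after 40 steps
        print("mixed    (alpha=0.5):", iterate(1.0, alpha=0.5))   # converges to ~3.333
    ```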
