Science Snapshots: Will Artificial Intelligence Replace Good Old Numerical Modeling for Weather?

An article from the Washington Post (sorry for the potential paywall, folks) has been making a big splash in your Captain's professional circles over the last couple of days. So, what is Artificial Intelligence (AI), often used interchangeably with Machine Learning, what is weather modeling, and is Google really going to take over the weather enterprise like it tries to take over everything else?

Let's start with good old fashioned weather modeling.

The father of modern weather forecasting, including Numerical Weather Prediction (NWP, weather models), is Vilhelm Bjerknes of Norway (1862-1951). This is him.

Via Wikipedia

Fun fact: Scandinavia is basically responsible for most of what we have learned about weather in the last 100 or so years, so while the Vikings did nothing but destroy my ancestors, their descendants made my career possible, so I thank them. Anyway, in addition to all the other things he did, he was the first to write down what we call the primitive equations, the basis of modern computer models of atmospheric motion. These are them:

Courtesy of Rutgers

I know it looks scary, because partial differential equations. But it really breaks down to the following:

  • The top two describe the horizontal motion of the wind (north, south, east, west)
  • The third is essentially vertical motion of the air (up and down)
  • The fourth makes sure the first three are in balance with each other
  • The fifth is temperature
  • The final is your standard ideal gas law: pressure, temperature, and density are all related to each other
At the end of the day, these equations (primitively) describe the motion of the atmosphere and allow us to predict horizontal and vertical motion (u, v, w), air pressure (p), density (ρ), and temperature (T). Moisture gets pulled in through additional equations in full models.
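For the mathematically curious, here's roughly what that set looks like written out. This is one simplified textbook form (hydrostatic, no friction or moisture terms), not necessarily the exact version shown in the Rutgers image:

```latex
\begin{aligned}
\frac{du}{dt} &= -\frac{1}{\rho}\frac{\partial p}{\partial x} + fv
  && \text{(east-west wind)} \\
\frac{dv}{dt} &= -\frac{1}{\rho}\frac{\partial p}{\partial y} - fu
  && \text{(north-south wind)} \\
\frac{\partial p}{\partial z} &= -\rho g
  && \text{(vertical balance)} \\
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\vec{v}) &= 0
  && \text{(mass conservation, the "keep it in balance" one)} \\
c_p \frac{dT}{dt} - \frac{1}{\rho}\frac{dp}{dt} &= J
  && \text{(temperature, with heating rate } J\text{)} \\
p &= \rho R T
  && \text{(ideal gas law)}
\end{aligned}
```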

Modern models are certainly much more complex than this, but all follow the basic principle of a bunch of non-linear equations describing the evolution of the atmosphere over a period of time. Voilà - weather forecast! If you ever want to play with some model forecasts freely available in the US, I recommend using the College of DuPage website: https://weather.cod.edu/forecast/

As weather models have increased in complexity, growing from these primitive equations to tens of thousands of lines of code running at hundreds of thousands of points across the globe, they continue to demand more and more computing power. We're constantly pushing for the latest and greatest supercomputers because this shit is computationally intensive, and we need to have it done with the quickness in order for it to be useful to forecasters.
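To give a feel for why this gets expensive: a model steps equations like the ones above forward in time at every grid point, over and over, for the whole forecast period. Here's a deliberately tiny, hypothetical sketch in Python of that idea for a single 1-D advection equation - nothing like a real NWP core, which juggles 3-D dynamics, physics parameterizations, and vastly more points:

```python
import numpy as np

# Toy 1-D advection: how a temperature blob moves downwind.
# dT/dt = -c * dT/dx, stepped forward with an upwind finite difference.
nx, nt = 200, 500          # grid points and time steps (a real model: far, far more)
dx, dt, c = 1.0, 0.4, 1.0  # grid spacing, time step, constant wind speed

T = np.exp(-0.01 * (np.arange(nx) - 50.0) ** 2)  # initial "warm blob" centered at point 50

for _ in range(nt):
    # Each point looks at its upstream neighbor and updates itself.
    T[1:] = T[1:] - c * dt / dx * (T[1:] - T[:-1])
    T[0] = T[-1]  # crude periodic boundary so the blob can wrap around

print("Blob peak is now near grid point", int(T.argmax()))
```

Multiply that inner loop by every variable, every level, and every point on the globe, and you can see why the supercomputers stay busy.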

So, that's the basics of weather models which physically predict the weather. Now on to AI.

You've likely heard of it. It's ChatGPT. It's the sometimes terrible, sometimes great images generated by typing a prompt into a website. It's an attempt to get computers to think like a human, rather than having to give them the blow-by-blow of how to behave. Here's one of my favorite AI images.

Dashboard footage of Goopy as a creepy creature made with AI. Via Instagram (user thatsspookypod)

For weather research, there are a number of useful applications: image recognition on weather observations from web cameras, post-processing model output to improve accuracy, severity indices that flag the impacts of approaching weather systems. All kinds of stuff. And it's been in the works for something like 30 years.
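As a concrete (and entirely made-up) example of the post-processing flavor, here's a sketch in Python of learning a simple bias correction for a model's 2-meter temperature forecasts from past forecast/observation pairs. Real systems use many more predictors and fancier models; the data below is invented just to show the shape of the idea:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: past raw model forecasts vs. what was actually observed.
rng = np.random.default_rng(0)
raw_forecast = rng.uniform(-10, 30, size=500)                   # model 2-m temperature (C)
observed = 0.9 * raw_forecast - 1.5 + rng.normal(0, 1.5, 500)   # pretend the model runs warm

# Learn the systematic error from history...
corrector = LinearRegression().fit(raw_forecast.reshape(-1, 1), observed)

# ...then apply it to today's raw forecast.
todays_raw = np.array([[22.0]])
corrected = float(corrector.predict(todays_raw)[0])
print(f"Raw: 22.0 C  ->  Corrected: {corrected:.1f} C")
```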

Which leads me to this article about Google. Google is claiming their model consistently outperforms the government models and runs many times faster. Sounds great! The model was trained on 40 years of past weather data and spits out a 10-day forecast, in 6-hour steps, in less than a minute on a normal computer (vs. about an hour on a supercomputer for current forecasts). If you want to look up more info, it's called GraphCast.
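The basic loop these ML forecast models run is easy to describe, whatever lives inside the learned step: start from the current state of the atmosphere, predict 6 hours ahead, feed that prediction back in, and repeat until you've covered 10 days. A hypothetical sketch in Python, where `learned_step` is just a stand-in and not Google's actual code:

```python
import numpy as np

def learned_step(state):
    """Stand-in for a trained network: takes the atmosphere now, returns it 6 hours later."""
    return state + np.random.normal(0, 0.1, state.shape)  # placeholder noise, NOT a real model

state = np.zeros((181, 360))          # toy global grid of one field (e.g., a pressure anomaly)
forecast = []
for step in range(40):                # 40 steps x 6 hours = 240 hours = 10 days
    state = learned_step(state)       # predict 6 hours ahead from the previous prediction
    forecast.append(state.copy())     # keep every 6-hourly slice

print(f"Produced {len(forecast)} six-hourly fields, covering {len(forecast) * 6} hours")
```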

Warning: rant incoming

Via Giphy

However, unlike the glowing reviews from the Washington Post, this isn't some sort of magic. First - I just don't trust private company verification statistics. Sorry about it. I've seen how they manipulate and cherry-pick verification metrics (error values, essentially) or present them in different ways to make their product look superior. Try it yourself. For the next 10 days, predict that it will be dry at your location. Let's say on day 3 it rains for 2 hours. You were correct 238 hours out of 240, or 99%. Congratulations on your 99% forecast accuracy! But...did you have any real skill? No, you didn't, you bum who is supposedly paid to be wrong all the time. This is called a persistence forecast: you just assumed it would stay dry the whole time because it's dry now.
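If you want to see why "percent correct" flatters a do-nothing forecast, here's a quick sketch in Python of the dry-streak example above (the rain timing is made up; the point is the score):

```python
# 10 days of hourly "did it rain?" truth: dry except for 2 hours on day 3.
hours = 240
observed_rain = [False] * hours
observed_rain[54] = observed_rain[55] = True   # the two rainy hours (made-up timing)

persistence = [False] * hours                   # "it's dry now, so it'll stay dry"

correct = sum(f == o for f, o in zip(persistence, observed_rain))
print(f"Accuracy: {correct}/{hours} = {correct / hours:.1%}")
# ~99% correct, yet zero skill: skill scores compare you against exactly this lazy baseline,
# so the persistence forecast itself scores 0 by construction.
```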

And on that topic, I've grabbed the paper published in Science and see they only used Root Mean Square Error (RMSE) and Anomaly Correlation Coefficient (ACC). What exactly these are isn't important, assuming you trust your Captain when she says that this is the MOST BASIC way to do forecast verification. They're useful metrics. They aren't sufficient. The improvement is also minimal. You can see the paper here: https://www.science.org/doi/10.1126/science.adi2336
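For reference, both metrics are only a couple of lines of numpy each, which is part of why I call them basic. A sketch, with the fields and climatology invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
truth       = rng.normal(0, 5, size=1000)              # verifying analysis (made-up)
forecast    = truth + rng.normal(0, 2, size=1000)       # a forecast with some error
climatology = np.zeros(1000)                             # long-term mean at each point (made-up)

# Root Mean Square Error: the average size of the miss.
rmse = np.sqrt(np.mean((forecast - truth) ** 2))

# Anomaly Correlation Coefficient: do forecast and truth depart from climatology
# in the same spatial pattern?
fa, ta = forecast - climatology, truth - climatology
acc = np.sum(fa * ta) / np.sqrt(np.sum(fa ** 2) * np.sum(ta ** 2))

print(f"RMSE = {rmse:.2f}, ACC = {acc:.2f}")
```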

This is you, Google verification team.

Via Giphy

And, more accurate? Barely. The values change by a very small amount. Faster without a supercomputer? That could genuinely be useful. They also lean heavily on percentages, and a large percentage improvement can correspond to a tiny physical change, which makes your forecast sound better than it is.

Second, Google is acting like this is something new, which I object to. This isn't new. This is another case of computer engineers thinking they invented weather, or whatever is going through their minds. Google has come to my company and listened to presentations on our research, then gone off and recreated it instead of collaborating, which is what they acted like they wanted to do while they were there. But at the end of the day, their efforts are derivative and frankly not always as good. They need weather folks involved, but they won't pay even the peanuts someone like me makes for it.

That was a big rant to say that this approach Google describes is basically an analog approach...using past weather scenarios similar to current conditions to predict future weather. We've been doing that for years. But magically Google made it better! [no, I'm not salty]
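For the non-meteorologists, an analog forecast in its crudest possible form looks something like this. Everything here is invented, and real analog methods are far more careful about how "similar" is measured, but the bones of the idea are the same:

```python
import numpy as np

rng = np.random.default_rng(2)
# ~40 "years" of made-up daily weather patterns (each pattern = a flattened map),
# paired with what actually happened 24 hours after each one.
history       = rng.normal(size=(14600, 100))
what_followed = rng.normal(size=(14600, 100))

today = rng.normal(size=100)

# Find the past day whose pattern looks most like today...
distances = np.linalg.norm(history - today, axis=1)
best_match = int(distances.argmin())

# ...and use what happened next back then as tomorrow's forecast.
analog_forecast = what_followed[best_match]
print("Closest historical analog is day", best_match,
      "at distance", round(float(distances[best_match]), 2))
```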

Third, there was nothing in the NOAA budget (NOAA being the US weather agency) for AI. Meaning, at least at the policy level, we're going to continue churning out physics-based weather models as described above. This isn't some revolution, not yet. Also, models aren't just about predicting future weather. We use physics-based models all the time to probe the physical processes in the atmosphere and improve our understanding of them. We use them to teach students. AI is, most often, a black box - it spits out a forecast, but we don't know the steps it took to get there.

At the end of the day, it all does make sense. A good forecaster often uses the "gut feeling" model, which is when they recognize a pattern they have seen time and time again in their area and they know what the outcome will be based on that experience. The AI is basically doing the same thing - mimicking human thought processes.

That said, take this advertisement from Google with a grain of salt. I don't necessarily believe that they are outperforming all the other models, but this is a method of research and forecasting that has been explored in my industry for 30 years, so it's certainly not an invalid approach. Most people I've heard from so far are skeptical. Some love the latest and greatest and are jazzed about it. Time will tell.

And after giving all sorts of shit, I do have to say the writer of the article has solid credentials. So no shade to the writer. All the shade to Google.
