The Met Office was praised for correctly predicting the timing and severity of the storm that hit southern parts of the UK in October 2013. It also identified potential flooding and strong storm possibilities throughout the winter. New ways of collecting, measuring and interpreting data have helped to improve its forecasting accuracy. Ingenia talked to Paul Davies, Chief Meteorologist at the Met Office, about the processes involved and the new technology developed to improve UK forecasting.
The Met Office was so confident in its forecast towards the end of October 2013 that it issued an amber warning for potentially damaging winds five days ahead of the storm’s due date on 28 October. Its predictions of gusts in excess of 80mph across southern parts of England turned out to be correct, and the damage was substantial. Without the Met Office warnings, the structural and financial damage caused by St Jude’s storm could have been much worse.
The advance warning meant that many people were able to work from home. Local councils cleared drainage channels of autumn leaves the day before and had staff on standby throughout the night and clearing up during the morning of the storm. Train services were affected by trees on the line, but extra buses got many passengers to their destinations. Campsites were cleared and insurers will have received fewer claims because more properties were secured and protected against the high winds.
Dark storm clouds loom over the River Thames in London on the weekend that the St Jude’s storm hit the south coast of the UK in October 2013 © Guy Corbishley/Demotix/Press Association Images
How did the Met Office get this storm right when it got the severity of the October storm of 1987, which had similar origins, so wrong? It got the details right partly because of the Met Office supercomputer and the data-collecting technology developed since then. What would previously have been observed as a harmless depression was interpreted differently. The Met Office’s IBM supercomputer took in data from hundreds of thousands of sources across the globe, such as weather stations, satellites, aeroplanes, boats, buoys and Argo floats (which drift below the surface of the ocean and send back information about the water temperature).
Meteorologists saw that a particularly fast jet stream carrying a depression across the Atlantic was going to meet an area of unusually warm air over the UK. By modelling the thousands of different permutations of this meeting, the forecasters predicted that this would ‘energise’ the depression and cause a severe storm that would track across the south coast of the UK and later impact on Belgium, the Netherlands and Sweden. They were right.
When the Met Office was established in 1854, forecasters used hand-drawn charts and human observation to predict the weather. Now, hundreds of thousands of atmospheric observations are made around the world, converted into numerical data and fed into a supercomputer to solve the tens of millions of equations necessary to predict the UK’s weather. The calculations take minutes and today’s four-day forecasts are as accurate as the next-day forecasts were 30 years ago.
From forecasting rain for Wimbledon to predicting the number of tropical storms in the North Atlantic hurricane season, the range, complexity and demands on the forecasting abilities of the Met Office are continually growing. It provides forecasts for 10,000 locations worldwide, makes forecasts and provides warnings through a wide range of platforms including the web and mobile phones, and is conducting research into how the climate may change in the coming decades.
Forecasting the weather involves distinct but interlinked processes. The Met Office gathers and processes data on the existing weather conditions, then uses the processed data to set up the initial conditions in an appropriate numerical model, computes the evolving weather conditions with elapsed time and then iterates to refine the forecast.
The first, crucial requirement of forecasting the weather is to ensure accurate data gathering.
The Met Office maintains its own observation network across the UK, and also receives around 500,000 observations recording atmospheric conditions around the world every day. In the UK, it maintains 200 unmanned ground weather stations that report on the present state of the atmosphere and form a key part of the nation’s observation network. These sensor-laden stations are typically spaced around 40km apart to record weather associated with the low pressure and frontal systems that cross the UK.
Fitted with cloud base recorders, snow-depth sensors, rain gauges, platinum resistance thermometers and more, the stations collect a range of meteorological parameters, from air temperature to cloud height. From the 1970s, these data were collected automatically every hour, but the Meteorological Measurement System, introduced in 2008, takes readings every minute. The data logged at these stations are converted to standard format and then transmitted to a central database at the Met Office’s headquarters in Exeter.
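The roll-up of minute-by-minute station readings into a standard record can be sketched as follows. This is a minimal Python illustration: the field names and the aggregation chosen are illustrative assumptions, not the Met Office’s actual exchange format.

```python
from statistics import mean

def standardise(station_id, minute_temps_c):
    """Roll one-minute thermometer readings into a standard record.

    Hypothetical field names -- real station messages carry many more
    parameters (pressure, wind, cloud base, rainfall and so on).
    """
    return {
        "station": station_id,
        "air_temperature_mean_c": mean(minute_temps_c),
        "air_temperature_max_c": max(minute_temps_c),
        "air_temperature_min_c": min(minute_temps_c),
        "n_readings": len(minute_temps_c),
    }

# Four one-minute readings condensed into one standardised record.
record = standardise("station_01", [11.2, 11.4, 11.3, 11.5])
print(record["air_temperature_max_c"])  # 11.5
```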
Hundreds of supplementary stations gather additional data. There are rainfall stations situated on mountaintops, at airports or within urban environments, while marine data are also collected from ships and buoys. These sources of observation are backed up by weather balloons that use radiosondes (probes that transmit data over radio frequencies to a fixed receiver) to measure atmospheric parameters, and aircraft that gather observations of temperature and wind. Data gathered from weather aircraft and radiosondes can be compared with data from ground-based weather stations and satellites to check their accuracy and improve models of atmospheric activity.
Lowering a golf ball-shaped radome onto a Met Office radar at Chenies in Buckinghamshire. The radome’s weatherproof enclosure protects the antenna and is constructed of material that minimally attenuates the electromagnetic signal transmitted or received by the antenna – it is transparent to radar or radio waves © Met Office
Ground-based weather stations form a crucial part of observations. However, because they are spaced tens of kilometres apart, relatively small-scale weather features, such as thunderstorms, can evade the network altogether. This is why radar and weather satellites now play a vital role in providing better observations to increase the accuracy of today’s weather forecasts.
Radar stations are used to gather weather observations and improve the accuracy of forecasts. The UK weather radar network is operated and maintained by the Met Office and other agencies and has been providing images of rainfall rate for more than 25 years. Over the past decade, the network has grown to 15 radars and undergone several hardware and signal processing upgrades to keep up with evolving user requirements and technological advancements.
These radar stations emit short pulses of radio waves which, when they intercept precipitation, have part of the energy reflected or scattered back to the station. These readings are analysed to determine where the precipitation actually is, and how much exists. Doppler radar provides extra detail by analysing the change in frequency of the radar station’s return pulse, as caused by moving precipitation. In this way, the system detects the motion and intensity of rain and can estimate the radial speed of water and ice particles within its field of view, providing information on wind speed and direction of a rain storm.
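The relationship between the measured Doppler shift and the radial speed of precipitation can be sketched in a few lines. The 5.3 cm wavelength used below is typical of C-band radar but is an illustrative assumption, not a published Met Office specification.

```python
WAVELENGTH_M = 0.053  # typical C-band radar wavelength, ~5.3 cm (assumption)

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial speed (m/s) of precipitation from the Doppler shift.

    The factor of 2 arises because the pulse travels out to the
    target and back, so the target's motion shifts the frequency twice.
    """
    return doppler_shift_hz * WAVELENGTH_M / 2.0

# A 500 Hz shift corresponds to just over 13 m/s towards the radar.
print(radial_velocity(500.0))  # 13.25
```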
Cyclops DP Display. Raw reflectivity data showing precipitation and ground clutter over southwest England and Wales as seen by Cobbacombe Cross radar © Met Office
With full Doppler capability in place across the UK, the Met Office is now introducing dual polarisation to the radar network, so the system will not only detect the motion and intensity of precipitation but also pinpoint whether it is falling as rain, snow or hail. This C-band dual polarisation Doppler weather radar system is now being developed by Met Office engineers based on open system architecture.
New software and hardware attachments will be integrated with each radar dish so that both vertical and horizontal radio pulses will be transmitted and received. The radar station will analyse the amplitude and phase properties of return signals, from both the horizontal and vertical channels, to determine the shape, density and speed at which the precipitation is falling.
Dual polarisation helps forecasters identify ‘false’ readings by providing more information on the origin of the return echo. For example, echoes are not always scattered back to the radar receiver by precipitation, but sometimes return via a bird, insect, aeroplane or even the ground or sea. While spurious echoes severely contaminate radar images, the additional information from dual polarisation will help forecasters to identify and remove noise from these observations.
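The idea can be sketched with one dual-polarisation parameter, differential reflectivity (ZDR), which compares the horizontal and vertical returns. The thresholds below are purely illustrative: operational classifiers combine several dual-polarisation parameters, not ZDR alone.

```python
import math

def differential_reflectivity(z_h: float, z_v: float) -> float:
    """ZDR in dB from horizontal and vertical reflectivity (linear units)."""
    return 10.0 * math.log10(z_h / z_v)

def classify(zdr_db: float) -> str:
    """Very rough hydrometeor classification from ZDR alone.

    Illustrative thresholds only: oblate raindrops reflect more
    strongly in the horizontal, tumbling hail looks near-spherical,
    and biological targets often produce anomalously high ZDR.
    """
    if zdr_db > 3.0:
        return "non-meteorological echo (e.g. birds or insects)"
    if zdr_db > 0.5:
        return "rain (oblate drops return more horizontal power)"
    if zdr_db > -0.5:
        return "hail or dry snow (near-spherical, tumbling targets)"
    return "vertically aligned targets"

# Horizontal return 1.5x the vertical: consistent with rain.
print(classify(differential_reflectivity(150.0, 100.0)))
```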
The dual polarisation design is switchable between single linear transmission and simultaneous horizontal and vertical transmission. The new processing system, Cyclops-DP, is an evolution of the Cyclops-D series of receiver/processors re-engineered as a dual channel data acquisition system for the measurement of all dual polarisation and Doppler parameters, as well as providing refractivity measurements.
Data such as these are vital to weather forecasting, with radial wind measurements providing additional insight to the movement of small-scale but high-impact weather systems. For example, the tornado that struck northwest London in December 2006 was captured by Doppler radar stations in the region, with data gathered used to improve early warnings for similar severe weather events.
View from above
Weather satellites monitor the weather and climate of the Earth, and have revolutionised the forecasting process in recent decades by helping meteorologists spot a weather pattern often days before it would have been logged by ground weather stations. As well as detecting rapidly developing weather patterns, satellites also provide accurate data on temperature and humidity to augment the accuracy of measurements from ground weather stations.
NASA’s Terra satellite captured the result of night temperatures ranging from -5 °C to -10 °C on 6 and 7 January 2010. A strong high-pressure mode of a pattern called the Arctic oscillation had pushed the jet stream further south and allowed Arctic air masses to gather over northern Europe, making for unusually severe and cold weather © NASA, MODIS Rapid Response Team
There are two types of weather satellite: polar orbiting, which orbit the Earth from north to south and pass over specific locations twice each day; and geostationary, which orbit the Earth above a fixed point on the equator and can provide a constant stream of data measuring the hemisphere below. Polar orbiting satellites orbit at a height of 530 miles, and their closeness offers higher resolution than their geostationary counterparts, which orbit at a height of 22,000 miles. The range of atmospheric coverage from both satellite types is important for complete 3D grid-based weather modelling.
Satellites carry radiometers to measure the intensity of visible, infrared, or microwave radiation emitted from the Earth’s surface. These measurements are digitised and converted into greyscale pixels to produce a set of images that can be transmitted to weather forecast centres around the world.
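The digitisation step can be sketched as a simple quantisation of measured brightness temperatures to 8-bit greyscale pixels. The temperature range used is an illustrative assumption; by the usual infrared-imagery convention, colder cloud tops map to brighter pixels.

```python
def to_greyscale(temps_k, t_min=200.0, t_max=320.0):
    """Quantise brightness temperatures (kelvin) to 8-bit pixels.

    Colder cloud tops map to brighter pixels, the usual convention
    for infrared imagery. The range limits are illustrative.
    """
    pixels = []
    for t in temps_k:
        t = min(max(t, t_min), t_max)         # clamp to sensor range
        frac = (t_max - t) / (t_max - t_min)  # cold -> 1.0, warm -> 0.0
        pixels.append(round(frac * 255))
    return pixels

# A cold cloud top, a mid-level cloud and warm clear ground.
print(to_greyscale([200.0, 260.0, 320.0]))  # [255, 128, 0]
```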
Digital images from several radiometers can be combined to produce multi-spectral images, with different combinations of spectral bands used to glean information about different weather features. This might include detail on ocean currents or air masses that could collide and form clouds, as well as information on more specific features, such as forest fires or ice on the Earth’s surface.
More recently, satellites have been used to monitor volcanic ash. Following the eruption of Eyjafjallajökull in Iceland during April and May 2010, a Meteosat satellite, operated by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), was instrumental in monitoring volcanic ash plumes that could affect weather. The satellite’s imaging radiometers provided real-time data that were used to calculate the height of the plume, total column loading of ash and radius of ash particles.
This information was then fed into Met Office atmospheric models to validate forecasts. Indeed, meteorologists will often use the satellite imagery to verify that weather systems are actually developing in the way predicted by their computer models. Satellite imagery can be compared with graphical data from models and, if differences emerge, meteorologists adjust their forecasts accordingly.
Thanks to successive increases in computing power and a better understanding of atmosphere dynamics, sophisticated assimilation schemes have been developed to more accurately represent the current state of the atmosphere
With accurate observations gathered, the next step in weather forecasting is to combine and convert data into computer code that numerically represents atmospheric conditions. This process is known as atmospheric data assimilation. Thanks to successive increases in computing power and a better understanding of atmosphere dynamics, sophisticated assimilation schemes have been developed to more accurately represent the current state of the atmosphere.
One key example is three-dimensional variational data assimilation, or 3D-Var, first introduced in the late 1990s. This assimilates a wide range of observations and has been instrumental to incorporating radiation data from satellites. The technique statistically assigns a value to an observation depending on its importance, allowing a satellite observation to be weighted more heavily than a ground-based weather station observation, for example.
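The statistical weighting at the heart of variational assimilation can be illustrated with a single scalar value: the analysis blends the model background and the observation in proportion to how trustworthy each is. This is a minimal sketch; real 3D-Var solves the equivalent problem for millions of values simultaneously.

```python
def analysis(background, observation, sigma_b, sigma_o):
    """Optimal blend of a model background value and an observation.

    The gain weights whichever source has the smaller error variance
    more heavily -- the scalar core of variational assimilation.
    """
    gain = sigma_b**2 / (sigma_b**2 + sigma_o**2)
    return background + gain * (observation - background)

# An accurate observation (small error) pulls the analysis most of
# the way from the model background towards the observed value.
print(analysis(background=10.0, observation=12.0, sigma_b=2.0, sigma_o=1.0))  # 11.6
```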
But while 3D-Var has been broadly adopted in forecasting centres around the world, meteorologists are now reaching out to the ‘Holy Grail’ of data assimilation, 4D-Var. Whereas 3D-Var treats all observations within a time window as if they were made at a single analysis time, 4D-Var accounts for the exact time of each observation, fitting the evolving model state to the observations as they occur. In this way, the scheme can make use of more observations than 3D-Var and better estimate the current state of the atmosphere, leading to more accurate forecasting.
Modelling the weather
After data assimilation, calculations are made as to how the initial, coded, weather conditions will evolve using a numerical weather prediction model. These numerical forecasts involve billions of mathematical calculations and provide guidance to forecasters, forming the basis of more detailed forecasts.
To run a climate model, the Earth is rendered as a three-dimensional grid in order to apply basic equations and evaluate the results. Atmospheric models can calculate winds, heat transfer, radiation, relative humidity and surface hydrology within each grid box and evaluate interactions between neighbouring points
To run numerical weather prediction models, meteorologists divide the Earth’s surface and atmosphere into a 3D grid called a global coordinate system. All weather data are mapped onto this grid, with each grid box representing the atmospheric processes in that region. Numerical prediction equations are then applied to each box of the grid and to the interactions between neighbouring boxes.
Weather at any given location is influenced by weather patterns long distances away, so numerical weather prediction models run globally. Depending on the type of weather forecast needed, these models can be run at different grid scales, or resolutions. For example, a global configuration would be based on a large grid scale of 25km – every grid box is 25km by 25km – to give a medium-range weather forecast some two to fifteen days ahead. At this relatively low resolution, large-scale weather patterns, such as wide areas of frontal rain and regions of shower activity, will be accurately reproduced.
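The kind of grid-box calculation involved can be illustrated with a one-dimensional advection step on a 25 km grid: each box updates from its own value and its upwind neighbour’s. This is a deliberately minimal upwind finite-difference sketch, not the Met Office’s actual model equations, and the wind speed and timestep are illustrative.

```python
def advect(field, wind, dx, dt):
    """One upwind finite-difference step of 1-D advection.

    Each grid box is updated from its own value and its upwind
    neighbour's, on a periodic domain. The Courant number c must
    satisfy 0 <= c <= 1 for the scheme to be stable.
    """
    n = len(field)
    c = wind * dt / dx
    return [field[i] - c * (field[i] - field[(i - 1) % n]) for i in range(n)]

# A band of rain in the middle box moves one 25 km box downwind
# when the wind speed and timestep give a Courant number of 1.
rain = [0.0, 0.0, 1.0, 0.0, 0.0]
print(advect(rain, wind=10.0, dx=25_000.0, dt=2_500.0))  # [0.0, 0.0, 0.0, 1.0, 0.0]
```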
To capture more regional weather phenomena across Europe, a 4km grid scale is used, producing a more detailed forecast. And thanks to increases in computer processing power, the Met Office recently developed a high-resolution model that uses a 1.5km grid to capture small-scale atmospheric variations that will affect weather across the UK from hours ahead to two days into the future.
This high-resolution model represents local atmospheric processes very well, such as fog filling valleys and enhanced rain over mountains, and can even resolve convection currents within rain showers and snow storms. Indeed, the 1.5km weather model has been instrumental in honing the accuracy of rainfall warnings and snow forecasts.
Back in November 2010, the UK saw numerous heavy snow showers being carried inland from the sea by a northeasterly wind, causing significant disruption in the northeast of England. While coarser models indicated showers would stall over the coast, vastly underestimating inland snowfall levels, the high-resolution model represented the snow showers more realistically, predicting that they would be carried inland and producing a much better forecast.
Nowcasting involves very short-term weather prediction and is used for severe weather forecasts. Traditionally, nowcasting has been carried out using computer-based extrapolation techniques, but today’s high-resolution forecasting models enable short-term predictions based on numerical weather prediction.
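Extrapolation-based nowcasting can be sketched as shifting an observed rain field along its motion vector. This is a minimal illustration: operational schemes estimate motion per cell from successive radar images and blend the extrapolation with model output.

```python
def nowcast(rain_grid, di, dj, steps):
    """Extrapolate a rain field by a fixed motion vector.

    Each step shifts the whole field by (di, dj) grid cells; cells
    advected in from outside the domain are set to zero (the sketch
    assumes no new rain develops).
    """
    rows, cols = len(rain_grid), len(rain_grid[0])
    grid = [row[:] for row in rain_grid]
    for _ in range(steps):
        new = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                si, sj = i - di, j - dj
                if 0 <= si < rows and 0 <= sj < cols:
                    new[i][j] = grid[si][sj]
        grid = new
    return grid

# A shower in the top-left corner drifting one cell per step
# reaches the bottom-right corner after two steps.
shower = [[1.0, 0.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
print(nowcast(shower, di=1, dj=1, steps=2)[2][2])  # 1.0
```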
Following an upgrade in 2012, the Met Office supercomputer can now process 1,200 trillion calculations a second
While data assimilation and numerical models have improved enormously over recent years, a single numerical weather prediction forecast looking a few days ahead can still turn out wrong.
To cope with uncertainty, weather forecast centres around the world use so-called ensemble forecasting. Instead of running just a single numerical weather prediction forecast, the model is run a number of times from slightly different starting conditions, based on past and current observations.
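The principle can be illustrated with a toy chaotic ‘model’ run from slightly perturbed starting states. The logistic map stands in for a weather model that is sensitive to initial conditions; all numbers are illustrative.

```python
import statistics

def toy_forecast(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x), a stand-in
    for a weather model sensitive to its initial conditions."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def ensemble(x0, n_members, perturbation, steps):
    """Run the 'model' from slightly different starting states."""
    return [toy_forecast(x0 + k * perturbation, steps)
            for k in range(n_members)]

# Eleven members whose starting states differ by only one part in a
# million diverge noticeably after 40 steps: the ensemble spread
# measures how uncertain the forecast is.
members = ensemble(x0=0.4, n_members=11, perturbation=1e-6, steps=40)
print(statistics.pstdev(members))
```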
For medium-range forecasting, up to 15 days ahead, the Met Office uses the Ensemble Prediction System (EPS) run by the European Centre for Medium-Range Weather Forecasts (ECMWF) and its own Met Office Global and Regional Ensemble Prediction System. At its headquarters in Reading, UK, the ECMWF has one of the largest supercomputer complexes in Europe which is linked by high speed telecommunication lines to the computer systems of the national weather services of its supporting states.
Here, a total of 51 forecasts are run twice daily using the ECMWF Ensemble Prediction System with a resolution of around 80km, rapidly producing global forecasts over this time frame. The same system is used to run monthly and seasonal forecasts, but at lower resolutions, around 140km. Because of the lower resolution, the time taken to run the long-range models – out to months ahead – is roughly the same as that taken for shorter-term and higher-resolution forecasts.
If severe weather forecasts from ensemble members are similar, the probability of that weather taking place is high, so forecasters have the confidence to issue early warnings to the public. If the forecasts are different, the probability of the severe weather is lower, but forecasters will still alert emergency planners via a special advisory service.
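Turning ensemble agreement into a probability is, at its simplest, a fraction. The numbers and the 60 mph gust threshold below are illustrative assumptions, not official warning criteria.

```python
def severe_probability(member_gusts_mph, threshold_mph=60.0):
    """Probability of severe winds as the fraction of ensemble
    members forecasting gusts at or above a threshold."""
    hits = sum(1 for g in member_gusts_mph if g >= threshold_mph)
    return hits / len(member_gusts_mph)

# Nine of ten members forecast damaging gusts: agreement is high
# enough to give forecasters confidence in an early public warning.
gusts = [82, 75, 90, 66, 71, 58, 80, 85, 77, 69]
print(severe_probability(gusts))  # 0.9
```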
Meteorologists at the Met Office Exeter HQ collect and interrogate data to build up alternative models of forthcoming weather patterns © Met Office
Computers and human interaction
Progress in data assimilation, numerical weather prediction and ensemble forecasting would never have taken place without increases in computer processing power. Following an upgrade in 2012, the supercomputer can now process 1,200 trillion calculations a second with a peak system performance approaching one petaflop. The recent move to the 1.5km resolution grid to model weather in the UK could only have taken place with this greater processing power.
The human touch is still important. Experienced forecasters use their knowledge to compare supercomputer predictions against actual observations. They can correct for known biases or weaknesses in computer model predictions, as well as deciding on the correct emphasis in weather forecasts.
The ECMWF Medium-Range Forecast Model, for example, performs particularly well through the winter months when the lowest layers of the atmosphere become colder in calm weather. It does, however, have a slight tendency to overforecast mid- to upper-tropospheric heights – the resultant calculations are usually too high, giving a ‘warm’ bias. The United Kingdom Met Office (UKMET) Medium-Range Forecast Model, on the other hand, has more difficulty modelling shallow cold air but has other strengths. While more accurate data collection and increased processing capability have led to improvements in these computer models, there will always be a small margin of error and model bias.
Forecasters also tailor forecast information to the interests of different consumer groups which can include the general public, emergency responders, airlines, and energy and insurance companies. With predictions in place, the final part of the forecasting process is to present the forecast to the public and special interest groups.
Nowcasting is a relatively new method of doing this, as is the National Severe Weather Warning Service, which aims to warn the public and emergency services of severe or hazardous weather that has the potential to cause damage or disruption.
These warnings are given a colour depending on a combination of both the likelihood of the event happening and the impact the conditions may have, with yellow (be aware) being the lowest level of warning and red (take action) being the highest. These warnings help communicate the potential impacts expected from heavy rain, snow, strong winds, fog and ice.
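The likelihood-and-impact principle behind the colours can be sketched as a small lookup. The scoring and thresholds here are hypothetical; the actual warning matrix is more finely graded and involves forecaster judgement.

```python
def warning_colour(likelihood: int, impact: int) -> str:
    """Map likelihood and impact (each scored 1 low .. 4 high) to a
    warning colour. Illustrative scoring, not the official matrix."""
    score = likelihood * impact
    if score >= 12:
        return "red"     # take action
    if score >= 6:
        return "amber"   # be prepared
    if score >= 3:
        return "yellow"  # be aware
    return "none"

# A fairly unlikely but high-impact event still earns an amber warning.
print(warning_colour(likelihood=2, impact=4))  # amber
```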
As well as day-to-day forecasting at the Met Office, researchers are also looking at how climate is changing over long periods of time. Decadal forecasts are produced using climate models based on observations of the four fundamental elements of the climate system that produce the weather: atmosphere, ocean, land and cryosphere (land ice and sea ice). These elements are combined along with a comprehensive representation of key fluid dynamics, thermodynamics and radiative transfer, and repeatedly run through numerical models to produce an ensemble forecast that provides a range of plausible outcomes.
While decadal forecasting is a challenging area of research – long-term comprehensive observations of the ocean do not yet exist – the forecasts can guide meteorologists on how short-term trends in global warming are likely to evolve in the next few years. The most recent 10-year model, released in December 2012, shows that the Earth is expected to maintain the record warmth observed over the last decade with new record global temperatures being reached in the next five years.
Given that forecasters have already predicted the frequency of Atlantic hurricanes over coming years, decadal forecasting holds promise, but it has its limits. Decadal forecasts provide essential information about ocean weather and how it will evolve over the next few years, in the context of global warming. However, the forecasts cannot provide any information on climate sensitivity, for example, how much the planet will warm in relation to greenhouse gas increases over the long term.
Met Office forecasts provide important support to different sections of the UK national infrastructure: government, agriculture, defence, transport, education and energy. Timely, accurate and reliable forecasts help these sectors to make informed decisions about any weather that may affect their operations and take action to ensure business continuity. They also assist in global planning for the future, working with international colleagues on projects such as decadal forecasting – see Long-term forecasting.
A growing number of businesses are waking up to the benefits of using weather information in a much more sophisticated way to improve their profitability. Current areas of interest lie in how the weather affects people’s behaviour and even their health, but as the weather touches nearly every aspect of our lives, there is huge potential to tap into other specialist areas too. This will require forecasts over different timescales from days to decades ahead. To achieve this, more research and increased computing power will be needed to achieve Met Office targets for ever-greater accuracy.
Paul Davies became the Met Office’s Chief Meteorologist in 2013. He joined the Met Office in 1996 as a forecaster, becoming Chief Forecaster in 2003. In 2007, he was appointed Chief Hydrometeorologist, helping to set up the Flood Forecasting Centre. He then headed up the Met Office’s Hazards Centre from 2010.
The author would like to thank Dr Rebecca Pool, science and technology writer, and Jonathan Stanford at the Met Office for their help writing this article.