32 Climatic Models
Lalu Das
1. Learning outcomes
2. Introduction
3. Roles of climate models
4. Different types of climate models
5. Global Circulation Models (GCMs)
5.1. The basic equations
5.2. Inputs for climate models
6. Model initialization and spin-up
6.1. Spin-Up
6.2. Cold start
6.3. Warm start
7. Model parameterization
8. AOGCMs and coupled GCMs
9. Comparison between NWP models and GCMs
9.1. Differences
9.2. Commonalities
10. Regional Climate Models (RCMs)
11. Model resolution
12. What does the model output represent?
13. Limitations of climate models
14. Summary
- Learning outcomes
After reading the module you shall be able to:
- Define climate model
- Discuss different types of climate model
- Distinguish between GCMs and RCMs
- Describe how GCMs are developed, their fundamental governing equations, their basic inputs and the meaning of their final outputs
- Describe the various limitations of GCMs and their reliability
- Introduction
A climate model is a computer-based representation of the climate system, used to understand and predict its behavior. Climate models incorporate the physics and chemistry of the atmosphere and the oceans and aim to answer questions such as when the next El Niño might occur, and what might happen if greenhouse gas concentrations double. The challenge for climate models is to run forward in time much faster than the real atmosphere and oceans. To do this, they must make a large number of simplifying assumptions and perform huge numbers of calculations. Various types of models are used to analyze different aspects of the climate. They can be relatively simple one-, two- or three-dimensional models applied to a single physical feature of climatic relevance, or they may contain fully interactive, three-dimensional processes in all three domains: atmosphere, ocean and land surface. Sometimes, where complicated processes vary according to a wide variety of factors, it is best to begin by exploring the processes in one dimension only. For example, when looking at chemical reactions that vary with the physical conditions through the depth of the atmosphere, one approach is to look at the reactions at each level from the ground to the top of the atmosphere. One-dimensional models were initially used for energy-related studies of the climate system. As greater confidence is obtained in the way a simple model handles a particular process, the ideas are then incorporated into more complex two-, three- and four (time)-dimensional representations, which incorporate the dynamics of the climate system.
- Roles of climate models
Many climate models have been developed to perform climate projections, i.e. to simulate and understand climate changes in response to the emission of greenhouse gases and aerosols. In addition, models can be formidable tools for improving our knowledge of the most important characteristics of the climate system and of the causes of climate variations. Obviously, climatologists cannot perform experiments on the real climate system to identify the role of a particular process clearly or to test a hypothesis. However, this can be done in the virtual world of climate models. For highly non-linear systems, the design of such tests, often called sensitivity experiments, has to be very carefully planned. Nevertheless, in simple experiments, neglecting a process or an element of the modeled system (for instance the influence of the increase in CO2 concentration on the radiative properties of the atmosphere) can often provide a first estimate of the role of that process or element in the system.
- Different types of climate models
Modelers first have to decide which variables or processes to take into account and which to treat as constants. This provides a method of classifying models as a function of the components that are represented interactively. In the majority of climate studies, at least the physical behavior of the atmosphere, ocean and sea ice must be represented. In addition, the terrestrial and marine carbon cycles, dynamic vegetation and ice sheet components are more and more regularly included, leading to what are called Earth-system models.
A second way of differentiating between models is related to the complexity of the processes that are included. At one end of the spectrum, General Circulation Models (GCMs) try to account for all the important properties of the system at the highest affordable resolution. The term GCM was introduced because one of the first goals of these models is to simulate the three dimensional structure of winds and currents of atmosphere and ocean realistically. They have classically been divided into Atmospheric General Circulation Models (AGCMs) and Ocean General Circulation Models (OGCMs). For climate studies using interactive atmospheric and oceanic components, the acronyms AOGCM (Atmosphere Ocean General Circulation Model) and the broader CGCM (Coupled General Circulation Model) are generally chosen.
At the other end of the spectrum, simple climate models (such as Energy Balance Models, or EBMs) propose a highly simplified version of the dynamics of the climate system. The variables are averaged over large regions, sometimes over the whole Earth, and many processes are either not represented or accounted for by parameterizations. EBMs thus include a relatively small number of degrees of freedom.
EMICs (Earth-system Models of Intermediate Complexity) are located between those two extremes. They are based on a more complex representation of the system than EBMs but include simplifications and parameterizations for some processes that are explicitly accounted for in GCMs. The EMICs actually form the broadest category of models: some of them are relatively close to simple models, while others could be considered as slightly degraded GCMs.
- Global Circulation Models (GCMs)
Global Climate Models or General Circulation Models (GCMs) comprise fundamental concepts (laws) and parameterisations of the physical, biological, and chemical components of the climate system. These concepts and parameterisations are expressed as mathematical equations, averaged over time and grid volumes. The equations describe the evolution of many variables (e.g. temperature, wind speed, humidity and pressure) and together define the state of the atmosphere. These equations are then converted to a programming language, defining among other things their possible interactions with other formulations, so that they can be solved on a computer and integrated forward in discrete time steps.
5.1. The basic equations
The basic equations for atmospheric flows derive from the principles of conservation of mass, momentum and energy. The conservation laws are obtained by considering the rate of change of mass, momentum and energy per unit volume. The basic equations that govern the atmosphere can be formulated as a set of seven equations with seven unknowns: the three components of the wind velocity (u, v and w), the pressure p, the temperature T, the specific humidity q and the density ρ. The seven equations, written for the atmosphere, are:
Newton’s second law of motion (the three components of the momentum equation, i.e., force equals mass times acceleration), the continuity equation (conservation of mass), the first law of thermodynamics (conservation of energy), the conservation equation for water vapour, and the ideal gas law (equation of state).
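One common compact form of this set of equations is shown below; the notation varies between textbooks, and the friction term F and heating term Q lump together all sub-grid sources and sinks:

```latex
% Momentum (three equations, for u, v and w):
\frac{d\vec{v}}{dt} = -\frac{1}{\rho}\nabla p - 2\,\vec{\Omega}\times\vec{v} + \vec{g} + \vec{F}
% Continuity (conservation of mass):
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\vec{v}) = 0
% Thermodynamic energy equation (first law of thermodynamics):
c_p \frac{dT}{dt} - \frac{1}{\rho}\frac{dp}{dt} = Q
% Conservation of water vapour (evaporation minus condensation):
\frac{dq}{dt} = E - C
% Ideal gas law (equation of state):
p = \rho R T
```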
Before these equations are used in models, some standard approximations have to be made. For instance, assuming hydrostatic equilibrium, which is a good approximation at the scale of GCMs, provides a considerable simplification of the equation of motion along the vertical. Secondly, the quasi-Boussinesq approximation states that the time variation of the density can be neglected compared to the other terms of the continuity equation, thereby filtering out sound waves. In addition, supplementary equations for the liquid water content of atmospheric parcels, or for other variables related to clouds, are often added to this set of equations.
Representation of GCMs through grid boxes:
- In a GCM, grid boxes cover the entire planet (ocean and atmosphere)
- Typical size is 100-200 km in the horizontal
- 40-60 layers in the vertical, in both atmosphere and ocean
- A typical time step is 30 min
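Using the typical numbers above, a back-of-the-envelope count of grid cells can be sketched as follows (the 100 km spacing and 50-layer count are illustrative values from the list, not those of any particular model):

```python
# Rough count of grid cells in a GCM with typical present-day numbers.

EARTH_SURFACE_AREA_KM2 = 5.1e8   # ~5.1 x 10^8 km^2

def gcm_cell_count(horizontal_km=100, vertical_layers=50):
    """Approximate number of grid columns and 3-D cells."""
    columns = EARTH_SURFACE_AREA_KM2 / horizontal_km**2
    return columns, columns * vertical_layers

columns, cells = gcm_cell_count()
print(f"~{columns:,.0f} columns, ~{cells:,.0f} 3-D cells")
```

With 100 km spacing this gives roughly 51,000 columns and 2.55 million 3-D cells, each of which must be updated at every time step.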
5.2. Inputs for climate models
Any climate model requires some input data to run it. These are given below:
- Earth’s properties (Earth’s radius and period of rotation, land topography, coastline and bathymetry of the ocean, and properties of land/soils such as shape of the land, background albedo, surface roughness, and dynamic soil properties like organic matter, soil structure, bulk density, and water and nutrient holding capacity)
- Boundary conditions for all sub-systems not explicitly included in the model (distribution of vegetation, complex topography of ice sheets, coastlines and land cover inhomogeneity)
- Solar forcing (monthly or annual solar irradiance)
- Emissions (monthly or annual gridded CO2 emissions) and concentrations of GHGs and aerosols (global mean time series of black carbon (BC), organic carbon (OC), CH4 and sulphur)
- Volcanic forcing (dust, H2O and SO2 emissions)
- Ozone (time-evolving 3-D concentrations for forcing in models that do not include interactive chemistry)
- Land use [emissions (Gt C) from land cover changes]
In general, the information needed to run a GCM is: i) the initial state of all variables in all boxes; ii) a description of the land surface (topography and land use); iii) solar radiation; iv) the gas and aerosol composition of the atmosphere. The resources needed to run a GCM (atmosphere and ocean) are a supercomputer (many processors and ~100 TB of disk) and time (about 2 weeks for a 100-year simulation).
- Model initialization and spin-up
In order to start a model run, the values of the variables the model will actually predict, such as temperature, salinity, density, sea level and velocity, need to be specified over the model domain, grid and bathymetry.
NWP (Numerical Weather Prediction) integrations started from very similar initial conditions (the input provided to the model at the beginning of the integration) may result in quite different forecasts. Simulations with climate models can never be directly compared to observations: it is not advisable to compare a single month or year from a model simulation to the corresponding month or year of observations; rather, comparing statistics over a longer period (decades or more) is justified.
6.1. Spin-Up
Spin-up is the time taken for an ocean model to reach a state of statistical equilibrium under the applied forcing. It is usually difficult for basin-scale and global general circulation models to reach this state, as it can take hundreds of years. The ocean model is initialized with the present ocean state and is integrated forward until the circulation is consistent with the prescribed water mass structure; in other words, it adjusts geostrophically to its initial state. This initial state is imperfect due to the sparseness of data at depth. The deep ocean can take hundreds of years to adjust, while the upper ocean requires only about 50 years or so.
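The different adjustment timescales of the upper and deep ocean can be illustrated with a toy exponential-relaxation sketch (the 10-year and 500-year timescales below are illustrative assumptions, not values from any specific model):

```python
import math

# Toy spin-up illustration: each ocean layer relaxes exponentially
# toward equilibrium with its own characteristic timescale.

def remaining_imbalance(years, timescale_years):
    """Fraction of the initial disequilibrium left after `years` of
    integration, for simple exponential adjustment exp(-t/tau)."""
    return math.exp(-years / timescale_years)

# Upper ocean (assumed ~10-year timescale) after 50 years:
print(remaining_imbalance(50, 10))    # ~0.007 -> effectively spun up
# Deep ocean (assumed ~500-year timescale) after 50 years:
print(remaining_imbalance(50, 500))   # ~0.90 -> still far from equilibrium
```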
6.2. Cold start
A cold start usually occurs when a model is first initialized and needs to be spun up. For example, if a regional model is configured in a new domain, it would need to be started from a cold start. A cold start could be from climatology (commonly used to initialize global models), from an analysis of data (such as MODAS), from a forecast of a different model, or from a combination of the above. The model is then run until statistical equilibrium is achieved.
6.3. Warm start
A warm start is a restart of a model, which is used to eliminate or reduce the model spin up time. The saved fields from a recent forecast of the same model can be used to initialize a new simulation, or continue the previous simulation. The saved fields may be used as a first guess for an analysis including new data, and then that field is used to initialize the new forecast.
If there is a continuous cycle of forecast – data analysis – forecast, this could also be thought of as data assimilation.
- Model parameterization
There are certain physical processes that act at a scale much smaller than the characteristic grid interval of climate models, or that are too complex to understand clearly and hence to represent physically in the model. Examples include turbulence, the phase changes of water, the descent rate of raindrops, convective clouds, simplifications of atmospheric radiative transfer, and cloud microphysics. If the complete physics of these processes, for example clouds, were to be computed explicitly at each time step and at every grid point, the huge amount of data produced would swamp the computer. However, these processes cannot be eliminated, so simplified equations are developed to represent the gross effect of the many small-scale processes within a grid cell as accurately as possible. This approach is called parameterization, i.e. the method of replacing a very small-scale or complex physical process in the model by a simplified one. A lot of research is going on to devise better and more efficient ways of incorporating these small-scale processes into climate models.
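As a concrete illustration of parameterization, the bulk aerodynamic formula for surface sensible heat flux replaces unresolved turbulent eddies with a single empirical exchange coefficient; the coefficient value below is a typical order of magnitude, not one tuned for any particular model:

```python
# Bulk aerodynamic parameterization of surface sensible heat flux:
# the aggregate effect of sub-grid turbulence is reduced to one
# empirical exchange coefficient C_H applied to resolved variables.

RHO_AIR = 1.2      # air density near the surface, kg m^-3
CP_AIR = 1004.0    # specific heat of air, J kg^-1 K^-1

def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.2e-3):
    """Grid-box mean sensible heat flux (W m^-2) from resolved
    variables: wind speed (m/s) and surface/air temperatures (K)."""
    return RHO_AIR * CP_AIR * c_h * wind_speed * (t_surface - t_air)

# e.g. a 5 m/s wind over a surface 2 K warmer than the air:
print(sensible_heat_flux(5.0, 290.0, 288.0))  # ~14.5 W m^-2
```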
The major steps in parameterizing a process in a numerical model are: i) ignoring some processes; ii) simplifying some processes on the basis of assumptions; iii) using statistical/empirical relationships.
- AOGCMs and coupled GCMs
The oceans exchange with the atmosphere massive amounts of latent and thermal heat. Representing their heat storage, and the absorption of greenhouse gases by the oceans, in long-term simulations of climate requires a full three-dimensional ocean model, which simulates even the deep currents. Changes in the intensity and location of deep-water currents can ultimately have profound effects on the atmosphere. In the past, changes in the circulation of the ocean have produced major atmospheric responses.
The models must also be able to handle shorter-term fluctuations such as those associated with ENSO. Recent developments in climate models, which take into account not only surface processes at the ocean-atmosphere interface but also those acting at depth, have produced considerable improvement to the quality of climate model results. An oceanic GCM typically requires very high spatial resolution to capture eddy processes associated with the major currents, bottom topography and basin geometry. High-resolution ocean models are therefore at least as costly in computer time as are atmospheric GCMs. Further coupling of other climate system component models, especially the cryosphere and the biosphere, are also necessary to obtain more realistic simulations of climate on decadal and longer timescales.
- Comparison between NWP models and GCMs
9.1. Differences
A general circulation model (also known as a global climate model; both labels are abbreviated as GCM) uses the same equations of motion as an NWP model, but its purpose is to numerically simulate changes in climate as a result of slow changes in some boundary conditions (such as the solar constant) or physical parameters (such as the greenhouse gas concentration). NWP models are used to predict the weather in the short (1-3 days) and medium (4-10 days) range. GCMs are run much longer, for years on end: long enough to learn about the climate in a statistical sense (i.e. the means and variability). A good NWP model accurately predicts the movement and evolution of disturbances such as frontal systems and tropical cyclones. A GCM should do this as well, but all models diverge so much from the real atmosphere after some time (e.g. 2 weeks) that they become useless from a weather-forecasting perspective.
An error in the sea surface temperature of a few degrees Celsius, or a small but systematic bias in cloudiness throughout the model, matters little to an NWP model, but for a GCM these factors are important because they matter over the long term. GCMs ignore fluctuating conditions when considering long-term changes, whereas NWP models take no notice of very slow processes.
State-of-the-art GCMs are coupled atmosphere-ocean models, i.e. a model simulating surface and deep ocean circulations is ‘coupled’ to an atmospheric GCM. The interface is the sea surface: that is where the transfers of water (evaporation/precipitation) and momentum occur. GCMs can further be coupled to dynamic models of sea ice and conditions on land. Short to medium range NWP models are usually not coupled to a dynamic ocean model.
9.2. Commonalities
GCMs, as well as NWP models, numerically simulate the ‘state’ of the atmosphere using a finite (discretized) expression of the equations of motion.
The time-step of any numerical simulation of the atmosphere or ocean is constrained by the Courant-Friedrichs-Lewy (CFL) criterion. According to this criterion, the time-step must be small enough that the fastest-travelling disturbance cannot cross more than one grid spacing per time-step. Sound travels fastest of course, as waves of expanding and contracting air, so models commonly assume that the air is incompressible (the density varies only with height), which filters out sound waves. The fastest meteorologically significant disturbances are then large gravity waves, or the air in jet streaks, and their speed rarely exceeds 100-200 m/s. For a model resolution of 100 km, the maximum time-step is therefore 100,000 m divided by 200 m/s = 500 s, or approximately 8 minutes. Higher-resolution models require shorter time-steps, so more calculations are needed to simulate climate over the same period.
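The worked example can be checked with a few lines of code (assuming, as above, a fastest wave speed of 200 m/s):

```python
# CFL criterion: the maximum stable time-step is the grid spacing
# divided by the speed of the fastest-travelling disturbance.

def max_timestep(grid_spacing_m, fastest_wave_ms):
    """Largest time-step (s) allowed by the CFL criterion."""
    return grid_spacing_m / fastest_wave_ms

print(max_timestep(100_000, 200))  # 500 s (~8 minutes)
print(max_timestep(50_000, 200))   # 250 s: halving the grid halves the step
```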
Sub-grid-scale processes are those that have dimensions smaller than the model resolution. Cloud microphysical processes are certainly in this category, so they need to be parameterized, i.e. the aggregate effect of the clouds on the resolved scale (in terms of changes in the radiation fluxes or moisture and mass transport, etc.) is calculated. Parameterizations are empirical approximations based on large-scale (resolved) variables. Global models do not resolve cumulus clouds (not even thunderstorms), so their presence and effects are parameterized: for instance, when the atmosphere is conditionally unstable and (grid-scale) moisture convergence occurs, thunderstorms are assumed to occur, stabilizing the atmosphere and depositing rain. Parameterizations may have a theoretical justification but always need to be tested experimentally. For instance, one can assume that the surface albedo depends solely on surface temperature (i.e. the likelihood of ice), or that the planetary albedo is simply related to cloud amount. All state-of-the-art models somehow parameterize atmospheric radiation, sub-grid-scale motion, chemistry and cloud physics. Clearly, some parameterizations are specific to GCMs, such as those for very slow land surface changes or slow chemical processes.
Global NWP models and GCMs are generally spectral in design, for two reasons. One reason is that spectral models assume that the model domain continuously repeats itself, and this applies to the Earth: after travelling 360 degrees around the globe we return to the initial point. In other words, a sphere lends itself to a spectral approach. More importantly, spectral models are up to 10 times faster than grid-point models, especially if complex derivatives are involved (e.g. when a Laplacian operator must be solved in the diagnosis of pressure perturbations).
- Regional Climate Models (RCMs)
Simulating climate change at the regional and national levels is essential for policymaking. Only by assessing what the real impact will be on different countries will it be possible to justify difficult social and economic policies to avert a dangerous deterioration in the global climate. Furthermore, understanding processes on the regional scale is a crucial part of global research. Processes acting on local or regional scales, such as mountain ranges blocking air flow or dust clouds interacting with radiation, will ultimately have impacts at the global level.
One technique used to overcome the coarse spatial resolution of coupled GCMs is nested modeling. This involves the linking of models of different scales within a global model to provide increasingly detailed analysis of local conditions, while the general analysis of the global output serves as a driving force for the higher-resolution model. Results for a particular region from a coupled GCM are used as initial and boundary conditions for the RCM, which operates at much higher resolution, often with more detailed topography and physical parameterizations. This enables the RCM to enhance the regional climatology in detail, and this downscaling can be extended to even finer detail in local models. The procedure is particularly attractive for mountain regions and coastal zones, as their complexity is unresolved by the coarse structure of a coupled GCM grid.
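One small step in such nesting, interpolating coarse GCM boundary values onto the finer RCM grid, can be sketched as follows (plain linear interpolation with invented values; real RCMs use more sophisticated interpolation and blending):

```python
# Sketch of one nesting step: a 1-D row of coarse GCM values along an
# RCM boundary is linearly interpolated onto a grid `factor` times
# finer before being used as a boundary condition.

def refine_boundary(coarse_values, factor):
    """Linearly interpolate coarse grid values onto a finer grid."""
    fine = []
    for a, b in zip(coarse_values, coarse_values[1:]):
        for i in range(factor):
            fine.append(a + (b - a) * i / factor)
    fine.append(coarse_values[-1])  # keep the last boundary point
    return fine

print(refine_boundary([10.0, 14.0, 12.0], 2))
# [10.0, 12.0, 14.0, 13.0, 12.0]
```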
- Model resolution
Resolution is an important concept in many types of modeling, including climate modeling. Spatial resolution specifies how large (in degrees of latitude and longitude, or in km or miles) the grid cells in a model are. A typical climate model might have grid cells about 100 km (62 miles) on a side. Higher-resolution models provide much more detailed information but take much more computing time. As a general rule, increasing the resolution of a model by a factor of two means about ten times as much computing power will be needed (or that the model will take ten times as long to run on the same computer).
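This rule of thumb can be sketched as follows: halving the grid spacing doubles the points in each horizontal direction and, via the CFL criterion, roughly halves the time-step:

```python
# Cost scaling for horizontal resolution: doubling resolution gives
# 2x points in x, 2x points in y, and ~2x more time-steps (CFL),
# i.e. about 8x the work -- "about ten times" once overheads such as
# extra vertical levels and I/O are included.

def relative_cost(resolution_factor):
    """Approximate cost multiplier when horizontal resolution
    improves by `resolution_factor`."""
    horizontal = resolution_factor ** 2   # points in x and y
    timestep = resolution_factor          # shorter CFL time-step
    return horizontal * timestep

print(relative_cost(2))  # 8
```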
The typical spatial resolution used in state-of-the-art climate models has improved greatly from the IPCC 1st Assessment Report (FAR, 1990) to the assessment report published in 2013. In 1990, many climate models used a grid with cells of about 500 km (311 miles) on a side, while some recent models have resolutions of about 60 km. In 1996, resolution had improved by a factor of two, producing grid cells 250 km (155 miles) on a side. By 2001, grid cells had, for the most part, been reduced to about 180 km (112 miles), while models run in 2007 typically used a 110 km (68 mile) wide grid cell. Vertical resolution has also varied: FAR models had a single-layer “slab ocean” and 10 atmospheric layers, while AR4 models often include 30 layers in the ocean and another 30 in the atmosphere.
Temporal resolution refers to the size of the time steps used in models, i.e. the interval of simulated or “model” time between successive calculations of the various properties of the model.
Climate models are typically run with time steps of about 30 minutes. A climate model run for a century might therefore involve 1,753,152 time steps (the number of half-hours in a century). All model variables (temperature, wind speed, humidity, etc.) would be calculated at each of the thousands to millions of grid points in the model at each of those time steps. This amounts to an enormous number of calculations and requires a supercomputer with huge data storage.
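The half-hour count quoted above can be verified directly (a Gregorian century contains 36,524 days, since a typical 100-year span has 24 leap years):

```python
# Number of 30-minute time-steps in a century-long simulation.

DAYS_PER_GREGORIAN_CENTURY = 365 * 100 + 24  # 24 leap years per typical century
STEPS_PER_DAY = 48                           # 30-minute time-steps

steps = DAYS_PER_GREGORIAN_CENTURY * STEPS_PER_DAY
print(steps)  # 1753152
```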
- What does the model output represent?
- GCMs are among the most advanced tools available; they use transient climate simulations to simulate climatic conditions on Earth hundreds of years into the future.
- A climate model generates a large number of variables, e.g. temperature, precipitation, humidity and wind, over the whole globe and/or at specified grid locations.
- We can get a wide range of variables for future projections from GCM outputs, depending on the forcing scenario or representative concentration pathway (RCP).
- The variable calculated at each grid point does not represent the real-world value at that location on the Earth’s surface; rather, it is interpreted as a mean value over the associated grid box.
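The last point can be illustrated with a small sketch (the station names and rainfall values are invented for illustration):

```python
# A GCM grid-box value is a mean over the whole box, which can hide
# large variation between real locations inside it.

stations_in_grid_box = {   # observed rainfall, mm/day (hypothetical)
    "valley": 2.0,
    "foothill": 6.0,
    "mountain": 16.0,
}

grid_box_value = sum(stations_in_grid_box.values()) / len(stations_in_grid_box)
print(grid_box_value)  # 8.0 -- matches none of the individual locations
```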
- Limitations of climate models
- Models perform optimally under circumstances represented by their parameter data sets (specific parameterizations and their semi-empirical settings)
- These settings might not be optimal for the future
- There is no single best model; hence the multi-model, multi-ensemble approach
- That models agree lends some confidence to the results, but it is no proof that the real world will develop as models project, even if emissions were exactly as prescribed
- Complexity, i.e. the included subsystems, processes and feedbacks (e.g. time-evolving ice sheets and mountain glaciers are not included in most models)
- Increased complexity alone does not make models better
- Most climate models have resolutions of the order of 100 km or more, i.e. the output presents only a mean value of variables/parameters for the associated grid boxes, whereas there could be huge intra-grid variations in actual observations
- An imperfect ability to transform our knowledge into accurate mathematical equations
- The limited power of computers to resolve sub-grid-scale phenomena
- Models’ inability to reproduce some important atmospheric phenomena
- Inaccurate representation of complex natural interconnections
- Summary
- A climate model is a computer-based model of the climate system used to understand and predict its behavior.
- Climate models (GCMs and RCMs) are the fundamental tools that can be used to simulate the past, present and future climate of the Earth.
- Their temporal and spatial resolution have improved significantly from the 1st assessment report (IPCC-FAR, 1990) to the latest assessment report (IPCC, 2013).
- Still, to date, GCMs cannot simulate the present climate at the sub-regional or local scale satisfactorily, indicating the need for regional climate models (RCMs).
- A climate model does not represent the real-world value at a given location on the Earth’s surface; its output is interpreted at a grid point as a mean value over the associated grid box.
- Although a climate model has several limitations, it can be used to represent the mean gross features of the Earth and its future climate change projections. There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.