The tortuous process of drafting, reviewing and revising the third assessment report of the Intergovernmental Panel on Climate Change (IPCC) is well under way, following the meeting of chapter authors in Arusha, Tanzania, on 1–3 September. Even those of us only peripherally involved have been burning the midnight oil to complete analyses ‘in time for IPCC’, and this is nothing compared with the heroic efforts of the lead authors.

Only the most hardened cynic would deny the value of this five-yearly stimulus to the climate research community, and the usefulness of the reports it generates. One of the IPCC's greatest achievements has been to bring national governments ‘on board’. Nevertheless, any such intergovernmental process sits uneasily in the era of stakeholders and focus groups. Large numbers of scientists gather together periodically and attempt to forge a consensus about the nature and scale of the problem of global warming. This is followed by a gathering of an even larger number of policy-makers, primarily politicians and civil servants, to decide what to do about it. A bemused public must then be persuaded, for its own good, to go along with the solution.

Reaching the public

As long as politicians only propose targets that they think they can sell to a generally indifferent electorate, it seems unlikely that we will achieve the order-of-magnitude reductions in greenhouse-gas emissions required to make a significant impact on the problem. And as long as climate research remains confined to large, centralized institutions, this indifference is likely to remain.

Most of our understanding of climate change is still based on physical principles and the simulations of climate models. The emergence of a detectable signal is a welcome new piece of data for model validation, but it does not fundamentally change our understanding of the problem. Moreover, the signal means little to the public, being based on changes in temperature that seem insignificant and which occur on scales too large to matter. Without an unambiguous indicator of climate change with which to capture the public imagination, as the Antarctic ozone hole did in the 1980s, the gap between specialist and public opinion will only widen.

Effective communication of uncertainty is notoriously difficult, and the analysis of uncertainty in climate forecasts remains rudimentary. Modellers do, of course, examine how sensitive their predictions are to changes in model parameters, but only one or two parameters can be varied at a time. It is the subtle interactions between errors that we must analyse. Another approach is to compare results from different climate models, but the few models available share assumptions, algorithms and, sometimes, even source code. They provide, at best, an incomplete estimate of uncertainty.
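
To make the point about interacting errors concrete, here is a minimal sketch (with an entirely made-up two-parameter response standing in for a climate model) of how one-at-a-time sensitivity tests can miss the combined effect of two errors; the function and parameter values are hypothetical.

```python
# Illustrative only: a toy 'model response' with an interaction term, standing
# in for a climate model's dependence on two uncertain parameters.

def response(a, b):
    """Toy warming response: two individual effects plus an interaction."""
    return 1.0 * a + 1.0 * b + 2.0 * a * b

baseline = response(0.0, 0.0)
da, db = 0.5, 0.5                                  # perturbations within assumed error bars

# One-at-a-time sensitivity tests, as commonly performed:
effect_a = response(da, 0.0) - baseline            # 0.5
effect_b = response(0.0, db) - baseline            # 0.5
linear_estimate = baseline + effect_a + effect_b   # 1.0

# What happens when both errors are present together:
joint = response(da, db)                           # 1.5

print(f"one-at-a-time estimate: {linear_estimate:.2f} K, joint perturbation: {joint:.2f} K")
# The 0.5 K discrepancy is the interaction between errors that varying one
# parameter at a time cannot reveal.
```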

Perturbing parameters

In an ensemble weather-forecasting system, the ‘best-guess’ initial state is deliberately perturbed in various directions to establish the range of tomorrow's weather that is consistent, at a given level of confidence, with today's observations. In an ensemble climate forecast, perturbations would also have to be applied to all uncertain parameters in the climate model. Unlike weather forecasters, who have several methods of selecting ‘optimal’ perturbations, climate researchers have no option but to perturb parameters, within their ranges of uncertainty, and integrate the model to see what happens.
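
As a sketch of what 'perturb the parameters within their ranges of uncertainty and integrate the model to see what happens' might look like, consider the toy perturbed-parameter ensemble below. The parameter names, ranges and the crude energy-balance stand-in for the model are all hypothetical; a real experiment would perturb the parametrization constants of a full coupled model.

```python
import random

# Hypothetical uncertain parameters and their assumed uncertainty ranges.
PARAM_RANGES = {
    "climate_sensitivity": (1.5, 4.5),   # K per CO2 doubling (illustrative)
    "ocean_heat_uptake":   (0.5, 2.0),   # relative mixing efficiency (made up)
    "aerosol_scaling":     (0.5, 1.5),   # scaling on aerosol cooling (made up)
}

def draw_parameters(rng):
    """Perturb every uncertain parameter within its range of uncertainty."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def integrate(params, years=50):
    """Stand-in for a 50-year model integration: a crude energy-balance curve,
    NOT a real climate model. Returns warming at the end of the period (K)."""
    warming = 0.0
    for year in range(years):
        forcing = 0.04 * (year + 1)                              # toy forcing ramp
        forcing -= 0.01 * params["aerosol_scaling"] * (year + 1)  # toy aerosol offset
        response = params["climate_sensitivity"] * forcing / 3.7
        warming += (response - warming) / (10.0 * params["ocean_heat_uptake"])
    return warming

rng = random.Random(0)
ensemble = []
for member in range(1000):            # each member could run on a different PC
    params = draw_parameters(rng)
    ensemble.append((params, integrate(params)))

warmings = sorted(w for _, w in ensemble)
print(f"5-95% range of simulated 50-year warming: "
      f"{warmings[50]:.2f} K to {warmings[950]:.2f} K")
```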

Almost all such perturbed models would soon drift into an unrealistic climate. Thus, for a 50-year forecast, we would first need to perform large numbers of simulations of the past 50 years, both with and without external influences such as increasing greenhouse gases. We could then discard all those perturbed models that were either inconsistent with the observed record or inconsistent with our (much less certain) estimate of what the twentieth century would have been like in the absence of external influences. Provided the perturbations are large enough for us to be able to discard most models, there are objective grounds for believing that the survivors would span the current range of uncertainty. These could then be integrated onwards to provide a probabilistic climate forecast.
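
Continuing in the same toy setting, the sketch below illustrates the filtering step: run each perturbed model over the past 50 years, discard those inconsistent with the observed record, and integrate the survivors onwards to give a probabilistic forecast. The 'observations', tolerance and response functions are placeholders, not real data.

```python
import random

# Hypothetical uncertain parameters and ranges, as in the sketch above.
PARAM_RANGES = {
    "climate_sensitivity": (1.5, 4.5),   # K per CO2 doubling (illustrative)
    "ocean_heat_uptake":   (0.5, 2.0),   # relative mixing efficiency (made up)
}

def hindcast_warming(params):
    """Toy forced hindcast: warming over the past 50 years (K). A fuller test
    would also compare an unforced run against the (much less certain)
    estimate of an unperturbed twentieth-century climate."""
    return 0.6 * params["climate_sensitivity"] / (1.0 + 0.3 * params["ocean_heat_uptake"])

def forecast_warming(params):
    """Toy onward integration: further warming over the next 50 years (K)."""
    return 1.1 * params["climate_sensitivity"] / (1.0 + 0.3 * params["ocean_heat_uptake"])

OBSERVED_WARMING = 0.6      # placeholder 'observed record' (K)
TOLERANCE = 0.15            # placeholder observational uncertainty (K)

rng = random.Random(1)
survivors = []
for member in range(100_000):
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
    # Discard perturbed models inconsistent with the observed record...
    if abs(hindcast_warming(params) - OBSERVED_WARMING) > TOLERANCE:
        continue
    # ...and integrate the survivors onwards.
    survivors.append(forecast_warming(params))

survivors.sort()
n = len(survivors)
print(f"{n} of 100,000 perturbed models survive the hindcast test")
print(f"probabilistic 50-year forecast (5-95% range): "
      f"{survivors[int(0.05 * n)]:.2f} K to {survivors[int(0.95 * n)]:.2f} K")
```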

Hundreds of thousands of 50-year integrations would be required to explore all the relevant parameter combinations in a full-scale climate model. This is well beyond the capacity of current and planned modelling facilities. Or is it? The ‘HadCM3’ model from the UK Meteorological Office is one of the latest generation of coupled models used in the IPCC Third Assessment Report. A single 50-year integration would take about six months on a reasonably up-to-date home personal computer, while the memory and disk-storage requirements are no larger than those of an advanced computer game.

More than a million volunteers are currently scanning radio-telescope data on their home PCs for signs of extraterrestrial intelligence (see Nature 400, 804; 1999). Could a similar number be recruited for the more practical, albeit more demanding, task of forecasting the climate of the twenty-first century? After all, it is possible that none of the participants in the SETI@home project will find signals of alien life. But, provided we keep the records, someone could definitely tell their grandchildren that it was they who, on a US$1,650 PC, made the most accurate forecast of the global mean temperature of 2050.
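
For what it is worth, the back-of-the-envelope arithmetic runs roughly as follows, assuming (purely for illustration) 200,000 required integrations, six months of PC time per 50-year run, and SETI@home-scale participation.

```python
import math

# Back-of-the-envelope arithmetic for the figures quoted above. The run count
# and pool size are illustrative assumptions, not a project plan.
integrations_needed = 200_000     # 'hundreds of thousands' of 50-year runs
months_per_run = 6                # roughly six months per run on a home PC
volunteer_pcs = 1_000_000         # SETI@home-scale participation

serial_pc_years = integrations_needed * months_per_run / 12
print(f"serial computing required: about {serial_pc_years:,.0f} PC-years")
# -> about 100,000 PC-years: far beyond any single modelling centre.

runs_per_pc = math.ceil(integrations_needed / volunteer_pcs)
wall_clock_months = runs_per_pc * months_per_run
print(f"with {volunteer_pcs:,} volunteer PCs, each needs at most {runs_per_pc} "
      f"run(s), so the ensemble completes in roughly {wall_clock_months} months")
# -> about the wall-clock time of a single integration.
```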

Although the range of predictions would be large, the chances are that only a small fraction of these perturbed models would reproduce the recent warming and show no significant change over the coming 50 years (unless, of course, there is something crucial that we haven't thought of yet). This in itself would be helpful for counteracting the powerful lobby still promoting the notion that there is nothing to worry about. Perturbing parameters is only the beginning: the next step would be to feed in new parametrizations, or even, as PCs improve, higher-resolution models.

We would still, of course, need to make small numbers of integrations of very high-resolution models at supercomputing centres. But the ability to perform million-member ensemble simulations, even at a relatively coarse resolution, would profoundly affect climate modelling. Models would no longer be fixed representations of the climate, revised with great effort only every few years. Instead, they would be probabilistic, fuzzy entities, reflecting our fuzzy understanding of the climate system, in which tunable parameters are represented by uncertainty ranges rather than by single ‘best-guess’ values.
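
To make the contrast concrete, a 'fuzzy' model configuration might look something like the sketch below: each tunable parameter carries an uncertainty range rather than a single value, and 'the model' is in effect the whole family of versions drawn from that specification. The parameter names and ranges here are illustrative only.

```python
import random

# A conventional configuration: one 'best-guess' value per tunable parameter.
BEST_GUESS = {
    "entrainment_rate": 3.0,
    "ice_fall_speed":   1.0,
}

# A 'fuzzy' configuration: each tunable parameter is an uncertainty range.
# Names and ranges are illustrative, not taken from any real model.
FUZZY = {
    "entrainment_rate": (1.0, 9.0),
    "ice_fall_speed":   (0.5, 2.0),
}

def draw_model_version(rng):
    """One concrete model version, drawn from the fuzzy specification."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in FUZZY.items()}

rng = random.Random(42)
for _ in range(5):
    print(draw_model_version(rng))
# The forecast of interest is then a distribution over the output of all such
# versions (weighted by how well each reproduces the observed record), not the
# output of any single best-guess configuration.
```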

Get the children involved

Anyone respectable enough to sit on a peer-review committee will probably find the idea of getting schoolchildren to run full-scale climate models on their parents' PCs completely daft. But there will be others for whom the idea is as natural as Amazon.com. If one of those happens to be chief executive officer of a major PC manufacturer or software house with an urge to save the planet, perhaps they could get in touch.

If you have access to a PC and would like to participate in a fuzzy-climate modelling initiative, please register at http://www.climate-dynamics.rl.ac.uk