To the Editor —

Integrated assessment models (IAMs) have provided the bulk of the evidence relied on by prominent documents — such as the Stern Review1 and the contributions of Working Group III to the IPCC Assessment Reports2,3 — as well as numerous research articles on the economics of climate change mitigation and related issues. I am concerned, however, that many published IAM-based research articles fail to adequately explain the basis for their findings, and do not justify them with sound scientific and logical argumentation, analysis, and data presented in the article itself (or in published appendices). Often the details of how the IAMs were used to derive the basic results are not described, so reviewers cannot credibly assess the reliability of those results.

One major flaw of most, if not all, peer reviews of IAM-based research articles is that the models relied upon have not themselves been reviewed. Yet such articles cannot be adequately reviewed without a careful critique of the underlying models. All too often the original models, and their subsequent versions, have never been formally and publicly peer reviewed. Because of these shortcomings, even the recent 'model intercomparison projects'4 are, I would argue, of limited value.

Because economics claims to be a science, and because economists have developed many different IAMs, peer reviewers of IAM-based research articles should, in my view, assess: (1) the theory behind each model in light of the model's intended purpose; (2) the structure of the model, to determine whether the theory was properly implemented; (3) the way in which various structural parameters were estimated from historical data; and (4) the way in which the values of various input parameters were estimated or derived, especially those for the future. The last point is a particular problem because many IAM-based studies involve very long-term, multi-decadal projections. In addition, I believe that peer reviewers must especially assess how the model is being used in relation to the particular research questions being addressed, and what sensitivity analyses have been performed that might illuminate the answers to these questions. If any of these steps are skipped, confidence in the reported findings is reduced. Of course, if some of these steps have been undertaken for previously published articles using the same IAM, and if the model has not changed significantly since those reviews were completed, then some of the steps could be deemed complete prior to the current review. It would help in this regard if past reviews of the particular IAM were made available in some format, but this is almost never done.

In 2013, the IAM Consortium — which was established at the request of the IPCC after the Fourth Assessment Report, and of which I am a member — set up scientific working groups intended to establish community-wide standards for IAM documentation and for the inclusion of key input assumptions in research publications. There has been little or no progress since. It is my contention that this situation should be rectified, so as to usher in a new era for peer reviews in this field.