Elsevier

Health Policy

Volume 63, Issue 2, February 2003, Pages 141-154

Setting priorities for the evaluation of health interventions: when theory does not meet practice

https://doi.org/10.1016/S0168-8510(02)00061-1

Abstract

Priority setting is a key component of the process of evaluating health interventions. It has traditionally been an informal process driven by power and influence, but a number of explicit criteria and systematic models have been developed since the late 1980s. This paper reviews and appraises these conceptual models and examines how they have influenced the practice of priority setting in the United States and Europe. The main conclusion is that a significant gap exists between theory and practice. Most models have been developed with the aim of maximising health gains through an efficient allocation of resources. However, they present at least three important limitations that must be addressed if formal models are to play a more substantial role in decision making: they tend to prioritise interventions for evaluation, rather than the evaluations themselves; they fail to address priority setting from a research-portfolio perspective; and they fail to adopt an incremental perspective. Existing prioritisation models are also unsuitable for supporting cost-containment or distributional objectives.

Introduction

The effectiveness and the efficiency of a system for evaluating health interventions depend critically on the setting of evaluation priorities, as well as on the conduct of appropriate evaluations and on the translation of evidence into policy. During the second half of the past century, investments in evaluative research have grown steadily in many industrialised nations, along with calls for the adoption of transparent and systematic approaches to the allocation of resources for evaluations. However, the nature of the political processes through which a large part of such resources is allocated is often at odds with the aspiration to a transparent and systematic setting of evaluation priorities. The multitude of values, interests, and possible criteria for priority setting at play often inhibits the adoption of fully systematic approaches and continues to generate tensions between and within groups of stakeholders.

Advocates of priority setting for health technology assessment research argue that it may improve the efficient use of scarce research resources and the consistency of research with the aims of health care policy makers. Additionally, priority setting may improve transparency and accountability in the health system. On the other hand, there is no evidence that the application of formal priority-setting methods leads to more desirable outcomes than those achieved through largely informal processes of allocating research resources.

One of the earliest attempts to investigate opportunities to make research priority setting a systematic activity was made by the U.S. Congressional Office of Technology Assessment (OTA, now no longer in existence). Its conclusion was that ‘the factors that need to be taken into account in research planning, budgeting, resource allocation, and evaluation are too complex and subjective; the payoffs too diverse and incommensurable; and the institutional barriers too formidable to allow quantitative models to take the place of mature, informed judgement’ [1]. An important question is whether the OTA's conclusion still stands in a world that has substantially changed.

The purpose of this paper is to review and appraise the theory and the practice of priority setting for the evaluation of health interventions after the publication of the OTA report. First, the conceptual models developed to make the prioritisation process transparent and systematic are analysed and assessed. The ability of these models to serve different policy aims is discussed. Second, an overview of priority setting used by public and private organisations in Europe and in the U.S. is provided. Finally, policy implications are drawn.

The analysis contained in this paper is based on a comprehensive review of the existing literature and on interviews with health policy makers and with research and health care organisations in Europe and in the U.S.

Section snippets

Methods for setting priorities for the evaluation of health interventions

In setting priorities for evaluation, decision-makers may aim at identifying interventions likely to produce one or more of the following effects:

  • a significant increase in health care expenditure (for health care payers), a significant budgetary burden (for providers), or a poor return on investment (for industry), or a drain on resources away from other effective interventions;

  • a negligible or modest improvement in health and health-related outcomes, or none;

  • significant adverse

Existing prioritisation models: strengths and weaknesses

In the rest of this paper the term ‘prioritisation model’ will be used to indicate a formal process through which evaluation priorities may be set, involving selection criteria, consultation procedures and possibly (but not necessarily) quantitative models to support decisions.

This section is not intended simply as a review of the literature on setting priorities for the evaluation of health interventions. Other such reviews have been published [3], [4] and readers may refer to those, or to the

The practice of priority setting

Despite great efforts to develop adequate priority-setting models over the last 15 years, there have been few attempts to implement them in practice. Where formal prioritisation models have been adopted, experimentally or routinely, no evidence has been gathered about their impact within or outside the health care system.

The next part of this paper will examine the practice of priority setting for the evaluation of health interventions in the U.S. and in Europe focusing on implicit, as well as

Conclusions

In the first part of this paper, a comprehensive review of existing models for setting priorities for evaluations of health interventions has been conducted. Each model has been assessed on the basis of four dimensions: comprehensive scanning and advance warning; explicit rating; stakeholder participation; time and resource demands. The models developed to assess ‘payback’ from research, which partly build upon previous priority setting models, appear to provide a useful framework for setting

References (29)

  • Office of Technology Assessment. Research funding as an investment: can we measure the returns? A technical memorandum....
  • P. Carlsson et al. Scanning the horizon for emerging health technologies. Conclusions from a European workshop. International Journal of Technology Assessment in Health Care (1998)
  • C. Henshall et al. Priority setting for health technology assessment. Theoretical considerations and practical approaches. International Journal of Technology Assessment in Health Care (1997)
  • G. Harper et al. The preliminary economic evaluation of health technologies for the prioritisation of health technology assessments. International Journal of Technology Assessment in Health Care (1998)
  • D.M. Eddy. Selecting technologies for assessment. International Journal of Technology Assessment in Health Care (1989)
  • C.E. Phelps et al. Priority setting in medical technology and medical practice assessment. Medical Care (1990)
  • C.E. Phelps et al. Correction and update on ‘Priority setting in medical technology assessment’. Medical Care (1992)
  • D. Torgerson et al. Using economics to prioritize research: a case study of randomized trials for the prevention of hip fractures due to osteoporosis. Journal of Health Services and Research Policy (1996)
  • B.A. Weisbrod. Costs and benefits of medical research: a case study of poliomyelitis. Journal of Political Economy (1971)
  • M.S. Thompson. Decision-analytic determination of study size: the case of electronic fetal monitoring. Medical Decision Making (1981)
  • M. Weinstein. Cost-effectiveness of cancer prevention. Science (1983)
  • A.S. Detsky. Using economic analysis to determine the resource consequences of choices made in planning clinical trials. Journal of Chronic Diseases (1985)
  • M.F. Drummond et al. Assessing the costs and benefits of medical research: the diabetic retinopathy study. Social Science and Medicine (1992)
  • R.S. Woodward et al. Optimum investments in project evaluations: when are cost-effectiveness analyses cost-effective? Journal of Medical Systems (1996)