Where Are We Now?

The integration of evidence-based medicine, while clinically important for diagnosis and treatment, generally faces two problems: First, collecting reliable, high-level evidence remains difficult. This is especially true for patients with acute trauma, since trauma settings rarely allow for relaxed, informative conversations. Related to this, patients in these stressful settings often simply prefer to follow the physician’s treatment recommendations. They normally have limited interest in our unsolved problems, and even less interest in being randomized to therapies with uncertain benefits.

Even if the studies can be performed, or if the data can be acquired in other ways (for example, from national registries or large administrative databases), the second problem remains: How do we interpret the results? Large sample sizes allow for the statistical detection of even small treatment effects. While such effects may be “statistically significant,” what we really wish to know is whether they are “clinically relevant.” Since the differences in question usually come at some cost, whether in money, risk, or the uncertainties associated with novel treatments, we should make sure they are worth paying for. Many “statistically significant” differences are clinically trivial, and not worth that price.

These questions motivated the concept and development of the Minimum Clinically Important Difference (MCID). The MCID indicates the smallest change in a clinical score that a patient would recognize as beneficial. Changes smaller than the MCID are, by definition, not clinically relevant, and therefore can be neglected for most purposes.

Knowledge of clinical relevance is an important aspect of planning and evaluating clinical studies. When formulating a hypothesis for a prospective or randomized study, the first step is estimating the change in the outcome score that the different therapies in the study might produce; only effects at least as large as the MCID should matter to clinicians. Sample sizes should be chosen so that effects of that size can be detected as statistically significant, and study findings should be presented in light of the MCID; smaller differences, even if statistically significant, should be identified as likely unimportant (or perhaps imperceptible) to patients. Only differences larger than the MCID should drive decisions to change clinical practice.
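To make this concrete, a trial can be powered to detect the MCID rather than an arbitrarily small effect. The sketch below is a minimal Python example assuming a two-arm comparison of PRWE change scores, the 11.5-point MCID reported here, and an assumed standard deviation of 20 points; the SD is a placeholder for illustration, not a figure from any study.

```python
# Sketch: powering a two-arm trial to detect the MCID, not just any effect.
# The SD of PRWE change (20 points) is a hypothetical placeholder.
from statsmodels.stats.power import TTestIndPower

mcid = 11.5          # smallest change patients perceive as beneficial
sd_change = 20.0     # assumed SD of score change (hypothetical value)
effect_size = mcid / sd_change  # Cohen's d corresponding to the MCID

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,  # target the MCID, not a smaller difference
    alpha=0.05,               # two-sided significance level
    power=0.80,               # conventional 80% power
)
print(f"Approximately {n_per_group:.0f} patients per group needed")
```

With these assumed inputs, roughly 50 patients per group would suffice; a trial powered instead for a sub-MCID difference would demand a far larger, and arguably wasted, sample.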

Where Do We Need To Go?

The paper by Walenkamp and colleagues determined the MCID for the Patient-Rated Wrist Evaluation (PRWE) score in patients with distal radius fractures. Previously published studies [13] had determined the MCID for the PRWE in nonacute conditions such as nerve compression and arthritis. It appears that the MCID varies based on the condition being evaluated; the MCIDs for different diseases or surgical procedures may differ even when they involve the same anatomic location and the same outcomes tool. In this study, the MCID for distal radius fractures was 11.5 points, whereas patients with more chronic problems required a change of 14 points to recognize a relevant improvement.

While this finding is interesting and important, unanswered questions remain, including: (1) Since the MCID varies based on the condition being evaluated, which treatments are common or important enough to warrant the trouble and expense of calculating an MCID for them? (2) Given that research budgets already are stretched thin, where will the resources (specifically, the funding) come from to calculate the MCIDs for all the important conditions we treat?

To my knowledge, all attempts to simplify this process by defining a fixed percentage change in a given score as the MCID have failed. We do need to calculate MCIDs for all of the common and important conditions that we treat. For this reason, we should limit the number of outcomes tools we use, since each tool calls for its own MCID.

How Do We Get There?

While most of us would like to integrate evidence-based medicine into our practices, we sometimes find that study findings identified as “significant” are at odds with our own experiences. I believe this is because a number of those findings are small, and possibly below the MCID. Perhaps a deeper understanding of the MCID concept can resolve these discrepancies and bridge the gap between statisticians and clinicians.

The value of MCIDs lies in their ability to indicate which changes in a clinical score are likely to be perceived by the patients being treated. This tool is especially beneficial for diseases and treatments in which objective findings correlate weakly with patients’ symptoms. We do not need independent studies solely for MCID calculation; data from existing studies can be reanalyzed to derive MCIDs, reducing the cost of producing them (a simple sketch of this idea follows below). Additionally, in order to help researchers set goals and harmonize priorities, specialty societies might create lists of conditions and outcomes tools, ranked by (1) the frequency of the conditions being treated, (2) the morbidity of those conditions, and (3) the costs of their treatments, so that researchers can determine which outcomes tools are in most pressing need of MCID calculations.
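As one illustration of reusing existing data, the sketch below computes two common distribution-based MCID estimates, half the standard deviation and the standard error of measurement, from simulated change scores in Python. The scores and the test-retest reliability are placeholders, and distribution-based methods are only one family of approaches; anchor-based methods, which tie the estimate to patients’ own ratings of improvement, require an anchor question in the original dataset.

```python
# Sketch: distribution-based MCID estimates from data an existing study
# would already hold. All values below are simulated placeholders, not
# data from any published trial.
import numpy as np

rng = np.random.default_rng(0)
# Simulated PRWE change scores for 200 patients (hypothetical parameters)
score_change = rng.normal(loc=12.0, scale=20.0, size=200)
sd = score_change.std(ddof=1)  # sample standard deviation

# Method 1: half the standard deviation of the change scores.
mcid_half_sd = 0.5 * sd

# Method 2: standard error of measurement, using an assumed test-retest
# reliability for the instrument (hypothetical value).
reliability = 0.90
mcid_sem = sd * np.sqrt(1.0 - reliability)

print(f"0.5*SD estimate: {mcid_half_sd:.1f} points")
print(f"SEM estimate:    {mcid_sem:.1f} points")
```

Because such calculations need only the score distributions (and, for the SEM, a published reliability coefficient), they can be run retrospectively on completed trials or registries rather than requiring new, dedicated MCID studies.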