Extending analytic methods for economic evaluation in implementation science

Abstract

Background

Economic evaluations of the implementation of health-related evidence-based interventions (EBIs) are conducted infrequently and, when performed, often use a limited set of quantitative methods to estimate the cost and effectiveness of EBIs. These studies often underestimate the resources required to implement and sustain EBIs in diverse populations and settings, in part due to inadequate scoping of EBI boundaries and underutilization of methods designed to understand the local context. We call for increased use of diverse methods, especially the integration of quantitative and qualitative approaches, for conducting economic evaluations and making better use of the resulting insights across all phases of implementation.

Main body

We describe methodological opportunities by implementation phase to develop more comprehensive and context-specific estimates of implementation costs and downstream impacts of EBI implementation, using the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. We focus specifically on the implementation of complex interventions, which are often multi-level, resource-intensive, multicomponent, heterogeneous across sites and populations, involve many stakeholders and implementation agents, and change over time with respect to costs and outcomes. Using colorectal cancer (CRC) screening EBIs as examples, we outline several approaches to specifying the “boundaries” of EBI implementation and analyzing implementation costs by phase of implementation. We describe how systems mapping and stakeholder engagement methods can be used to clarify EBI implementation costs and guide data collection—particularly important when EBIs are complex. In addition, we discuss the use of simulation modeling with sensitivity/uncertainty analyses within implementation studies for projecting the health and economic impacts of investment in EBIs. Finally, we describe how these results, enhanced by careful data visualization, can inform selection, adoption, adaptation, and sustainment of EBIs.

Conclusion

Health economists and implementation scientists alike should draw from a larger menu of methods for estimating the costs and outcomes associated with complex EBI implementation and employ these methods across the EPIS phases. Our prior experiences using qualitative and systems approaches in addition to traditional quantitative methods provided rich data for informing decision-making about the value of investing in CRC screening EBIs and long-term planning for these health programs. Future work should consider additional opportunities for mixed-method approaches to economic evaluations.


Background

There has been a growing call over the past decade for economic evaluations of evidence-based intervention (EBI) implementation within public health [1,2,3,4,5]. Applying economic evaluations to implementation research involves comparing the costs (e.g., labor, materials) required to support EBI adoption, implementation, and sustainability [1]. Relevant outcomes for implementation research may include intervention reach (i.e., proportion of target population receiving the EBI), fidelity (i.e., adherence to how the EBI was intended to be implemented), and effectiveness (e.g., EBI’s ability to positively affect health outcomes), all of which can be incorporated into economic evaluations (e.g., cost per person reached by an intervention) [1, 5]. Economic evaluations are essential to providing diverse decision-makers with meaningful data about the economic and programmatic feasibility of investing in EBIs across contexts, which implementation strategies work well where and under what circumstances, resources required upfront and over time to execute these strategies, and expected downstream gains (e.g., cost-savings, improved health outcomes) [3, 5]. These data are critical to securing buy-in to implement EBIs [6] and appropriately planning for implementation with respect to costs and resources to support the EBI’s adoption and sustainment [3, 4].

The quantity and quality of economic evaluations in implementation research have improved over time [2]; however, these analyses remain scarce [2, 5, 7,8,9]. Existing studies commonly lack sufficient detail about the costs associated with implementing new interventions, lack justification for the analytic methods used, and rely on data collected retrospectively after implementation has occurred [2, 8]. While existing implementation frameworks have acknowledged the importance of economic factors to the field, they typically provide little guidance on how to collect and analyze cost-related data [10].

Another concern is that economic evaluations have often used strictly quantitative approaches to estimate the value of EBIs. Adding qualitative approaches to these studies has the potential to address quantitative data limitations [1, 5, 11, 12]. Dopp and colleagues identified mixed-method opportunities to understand how stakeholders across settings and perspectives interpret findings about implementation costs and cost-effectiveness results (e.g., whether the costs collected represent their perspective or seem reasonable), and how implementation resources vary depending on existing infrastructure (e.g., something may or may not have been a cost to them because it was or was not already in place) [1]. Because these approaches have largely been absent from prior economic evaluations, questions remain about how to effectively use mixed methods across implementation phases to understand variation in costs by context and to inform the projection of downstream costs and outcomes [1]. Robust discussion of the value of these methods in informing EBI implementation, adaptation, and/or sustainability is also missing.

In this paper, we argue that mixed-method approaches should be used to conduct economic evaluations in implementation research, with attention to each phase of the implementation process. We recommend a broader range of analytic methods to develop comprehensive and context-specific estimates of the costs and long-term impacts of EBI implementation. Because their evaluation is particularly challenging and context-dependent, we focus on complex EBIs [13,14,15], which are commonly multicomponent and multi-level, use multiple implementation strategies, engage diverse stakeholders and implementation agents at all levels (e.g., quality improvement teams, health management executives, population health managers, clinicians, clinic administrative staff), and require coordination across systems. There can be substantial heterogeneity in the costs and resources required across sites, across populations, and by perspective, and the associated costs and benefits may accrue at different time periods during implementation. The context drives important differences in the types of resources needed and the frequency and intensity with which those resources and health outcomes should be estimated. We provide examples of how we have applied the described methods to analyses of multicomponent colorectal cancer (CRC) screening interventions in two studies.

Main text

Framework

Guided by the Exploration, Preparation, Implementation, Sustainment (EPIS) framework, we demonstrate how diverse analytic methods can be integrated and applied to economic analyses. The EPIS framework describes implementation research as occurring across four phases: (1) Exploration, which involves EBI selection to address the problem and fit the context; (2) Preparation, which involves designing implementation strategies to integrate EBIs into practice; (3) Implementation, during which EBIs and implementation strategies are initiated and evaluated; and (4) Sustainment, during which EBIs are institutionalized [16, 17]. While we selected EPIS because of its wide use [16] and comprehensive set of economic-related constructs [10], other implementation frameworks [18, 19] propose similar phases of intervention implementation and could also be used to guide economic evaluations. Importantly, EBI implementation costs and benefits should be identified, estimated, and used to inform decisions across all implementation phases, whereas studies have often focused only on Implementation phase activities [16]. Table 1 shows how EPIS can be used to consider the costs and benefits of conducting economic evaluations of EBI implementation by phase; true to the framework of cost-effectiveness analysis, we present examples of the resources expended to conduct these analyses (i.e., the costs) and examples of what is gained by performing economic evaluations, such as information to improve EBI implementation and outcomes (i.e., the benefits).

Table 1 Expected resource requirements and potential benefits of conducting economic evaluations of intervention implementation by implementation phase

Case studies

To illustrate how mixed-method approaches can be used to inform implementation economics [20], we describe our experiences using these approaches in two research studies assessing the implementation costs and outcomes of complex CRC screening interventions. The first study is Scaling Colorectal Cancer Screening Through Outreach, Referral, and Engagement (SCORE), a pragmatic randomized trial comparing the effectiveness of mailed fecal immunochemical testing (FIT) and patient navigation to diagnostic colonoscopy, versus usual care, in improving CRC screening among North Carolina community health center (CHC) patients [21]. The implementation strategies to support the FIT intervention include, but are not limited to, developing and managing a centralized clinical CRC screening registry, creating a mailed FIT outreach center, and conducting cycles of intervention testing and adaptation. Strategies are employed by staff in a centralized outreach center in collaboration with CHC clinicians and administrative staff. SCORE is being conducted as part of the National Cancer Institute (NCI)-funded consortium The Accelerating Colorectal Cancer Screening and Follow-up through Implementation Science (ACCSIS) Program. The overall aim of ACCSIS is to conduct multi-site, coordinated, transdisciplinary research to evaluate and improve CRC screening processes using implementation science strategies.

The second case study is Cancer Control Population Simulation for Healthcare Decisions (Cancer Control PopSim), a series of Centers for Disease Control and Prevention (CDC)-funded studies [22,23,24,25,26] using microsimulation [27] to estimate the projected population health impact and cost-effectiveness of evidence-based CRC screening interventions and health policy changes. In this case, the implementation strategy being employed is modeling and simulating change to motivate adoption of a range of EBIs [28]. This work is intended to support future implementation efforts by quality improvement staff, federal agency partners, providers, clinic administrative staff, and population health leadership. Our simulation modeling has been used to estimate EBI implementation costs and impact on the percent of the target population up-to-date with CRC screening, CRC cases and deaths averted, life-years gained, and long-term cost savings. For each case study, we share how the described methods are used to support implementation and sustainment planning across phases. Since these insights are context-dependent, we start by defining the context in which the work happens.

Identifying the target population and context

Our case studies, like other implemented EBIs, are situated within specific populations and contexts. Economic evaluations require a clear understanding of the EBI’s target population and the context in which it will be reached. This includes the target population’s size, geographic location, level of risk, and sociodemographics, and the characteristics of the context that determine EBI reach and adoption. Researchers should be mindful of existing inequities, how implementation resources may vary to adequately address these disparities, and the extent to which implementation outcomes may improve or worsen these inequities. In low-resource settings and when serving marginalized populations, resource allocation for EBI implementation requires especially thoughtful assessment [5]. Implementing the SCORE intervention, for example, focuses on CRC screening among CHC patients, who screen at relatively low rates [29, 30] and face unique barriers [31, 32]. Implementation strategies, such as adding staff (e.g., a patient navigator) to deliver centralized services, were used to address patients’ resource needs, including financial and transportation barriers to undergoing follow-up colonoscopy, and to limit the burden placed on CHC staff. Without the added resources planned upfront to develop and support these strategies, the expected gains in CRC screening associated with investment in a multicomponent EBI may not be realized.

Mixed methods

We selected a set of methods to describe how qualitative and quantitative approaches can be integrated to estimate EBI implementation costs and impact. Although not an exhaustive list of methods to support economic analyses, these are the approaches used in our two case studies, and we believe they are well suited to understanding complex systems. In addition to being used to quantify the resources needed for EBI implementation (as we do in this paper), these methods can also be considered implementation strategies on their own; for example, process flow diagramming can be used both as an implementation strategy to assess organizational readiness and as a tool to support and assess resources needed for other implementation strategies [28, 33]. We build on Powell and colleagues’ work [28] to provide a novel way of thinking about implementation strategies as systems science methods that can optimize implementation success. We consider our use of the included methods to be a mixed-method approach because we were intentional about using each method to inform and build on other methods. These methods can be bidirectional and interactive in diverse ways based on the underlying research question. Table 2 identifies the methods and describes how they can be used to specify the boundaries of EBI implementation (i.e., within the scope of implementation) and estimate implementation costs and benefits by EPIS phase. Below, we categorize the methods into three primary groups: (1) methods for eliciting stakeholder, patient, and caregiver input; (2) systems mapping and time-and-motion analysis; and (3) simulation and sensitivity/uncertainty analysis. We assume a decision has already been made to implement a complex EBI.

Table 2 Methods of economic-focused data collection and analysis by phase of implementation

Eliciting stakeholder, patient, and caregiver input

Stakeholder engagement methods

Economic evaluations should identify the appropriate analytic perspective (i.e., the point of view taken during analysis) and use it to determine which costs and benefits are measured [10, 34, 35]. Input from stakeholders (e.g., potential implementation agents, partner organizations, funders, etc.) is needed to consider the relevant costs and benefits associated with different courses of action, and whether and how EBI implementation and selected strategies will fit within their priorities and constraints. Stakeholder biases and preferences may affect the perceived usefulness of EBIs or implementation strategies [36]. For example, in a study about shared decision-making within cancer care, stakeholder interviews revealed widespread concerns about the likelihood of losing revenue as a substantial implementation barrier [6]. Thus, to successfully implement an EBI, especially a complex and resource-intensive intervention, stakeholder perspectives about the utility, feasibility, costs, and benefits of the intervention and implementation strategies must be incorporated across implementation phases.

The Exploration phase provides an opportunity to review stakeholder perceptions of intervention components and implementation strategies documented in prior literature and to engage context-specific stakeholders in discussions about EBI development. During the Preparation phase, interviews and surveys with diverse stakeholders can elicit their expectations and capacity for EBI implementation. These methods may also provide insight into what usual care or other implementation strategies entail in their local contexts to provide a comparator(s) for how the EBI is implemented. In the Implementation phase, these methods along with periodic reflections [37] can be used to solicit input on EBI implementation successes and challenges and to clarify resource use and unexpected or unintended expenses. Periodic reflections, in which agents are asked about their experiences with EBI implementation at multiple time points, can identify potential challenges and adaptation opportunities [37]. As examples, routine discussions with implementation agents may reveal time-consuming or otherwise resource-heavy steps threatening EBI sustainment, or provide information about a policy or contextual change directly impacting EBI implementation. Capturing this information allows for more accurately measuring resources expended and developing solutions, which can help to obtain stakeholder buy-in for EBI sustainment.

In the case of SCORE, during the Exploration phase, we met with state-level stakeholders (e.g., Colorectal Cancer Roundtable) to solicit input on the feasibility of candidate EBIs and fit with current workflows using local consensus discussions [28]. During Preparation, we engaged CHC and endoscopy providers in workgroups to identify resource barriers and facilitators to implementation [28]. The workgroups created process flow diagrams for each of SCORE’s central components (described later). During Implementation, we employed survey and interview methods to estimate resource use and to evaluate implementation agents’ perceptions of intervention implementation and its impact on usual care. These methods included (1) questionnaires about CHC screening processes in the absence of SCORE (e.g., who performs each activity, how frequently, and time spent per patient), (2) brief, electronic surveys assessing intervention acceptability mid-implementation (e.g., are intervention objectives clear?), and (3) semi-structured interviews about how intervention implementation has affected clinic work processes (e.g., how, if at all, has your work changed because of SCORE?). Insights from periodic reflections [37] are being used to proactively determine how to address any possible threats to sustainment and improve outcomes.

Patient and caregiver interviews, surveys, and focus groups

Patients are an important group whose time and costs incurred should be included in economic evaluations [35, 38]; however, patient costs are often excluded or incompletely assessed [38]. The overall cost-effectiveness of an implemented EBI can vary substantially when accounting for patient-level costs. In an economic evaluation of screening colonoscopy versus no screening, the cost per life-year saved with colonoscopy increased by 68% when patients’ time spent prepping for, undergoing, and recovering from the colonoscopy was costed [39]. Like patient costs, caregiver costs should also be tracked if relevant to the analytic perspective (for example, if a societal perspective is assumed). In the case of colonoscopy, a caregiver typically accompanies the patient, requiring additional time and other potential costs (e.g., time off work, childcare, etc.). Patient and caregiver interview or survey guides can be developed in the Preparation phase to map and understand patient-level resources and time required. Interviews and surveys can be conducted during the Implementation phase to prospectively track these resources, assess patient and caregiver burden, and adapt implementation strategies as needed. The costs estimated using these methods can then be included in analyses during the Sustainment phase.
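
To make the arithmetic behind this point concrete, the sketch below shows how valuing patient time changes a cost-per-life-year estimate. All numbers (program cost, patient hours, wage rate, life-years saved) are hypothetical illustrations, not values from the cited colonoscopy study:

```python
# Illustrative sketch: including patient time costs shifts a
# cost-per-life-year-saved estimate. All inputs are hypothetical.

def cost_per_life_year(program_cost, patient_time_hours, wage_rate,
                       life_years_saved, include_patient_time):
    """Cost per life-year saved, optionally valuing patient time at a wage rate."""
    total_cost = program_cost
    if include_patient_time:
        total_cost += patient_time_hours * wage_rate
    return total_cost / life_years_saved

program_cost = 500_000      # payer costs of the screening program ($), hypothetical
patient_time_hours = 8_000  # total hours patients spend prepping/undergoing/recovering
wage_rate = 25.0            # $/hour used to value patient time, hypothetical
life_years_saved = 100

base = cost_per_life_year(program_cost, patient_time_hours, wage_rate,
                          life_years_saved, include_patient_time=False)
societal = cost_per_life_year(program_cost, patient_time_hours, wage_rate,
                              life_years_saved, include_patient_time=True)
print(base)      # payer-perspective estimate
print(societal)  # estimate once patient time is costed
```

With these assumed inputs, valuing patient time raises the cost per life-year from $5,000 to $7,000, a 40% increase; the direction, not the magnitude, is the point.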

The expected gains of implementing the SCORE intervention depend on patients being receptive to mailed FIT outreach and, if their results are abnormal, completing a follow-up colonoscopy. The implementation strategy of adding centralized staff to navigate FIT-positive patients to their diagnostic colonoscopy will only be successful if patients are willing to respond to the navigator and utilize the services offered. Therefore, we designed an interview guide for FIT-positive patients about their SCORE experiences that inquired about navigation and colonoscopy completion steps. We included a quantitative checklist for patients to report how long each step took (e.g., time spent driving to the pharmacy) and any out-of-pocket costs (e.g., bowel prep kit cost). We then qualitatively assessed which activities are most burdensome to patients and caregivers and how navigation may have alleviated these burdens. This mixed-method approach to estimating patient and caregiver resources will provide detailed cost data specific to the lower-resource CHC population targeted by SCORE. Additionally, it may help to identify ways to adapt, sustain, and/or scale existing implementation strategies to best meet patients’ needs and minimize their burdens.

Systems mapping and time-and-motion

Process flow diagramming and time-and-motion

Process flow diagramming (i.e., process mapping) is a method for visualizing the required steps in a process and areas for potential variation in pathways depending on the outcomes of certain steps [40, 41]. Process maps can aid in setting EBI boundaries during the Exploration and Preparation phases and using those boundaries to inform data collection in subsequent phases. Process mapping during the Preparation phase helps to document all steps involved in EBI implementation, identify resources required for those steps, and create mechanisms for tracking expended resources. During the Implementation phase, process maps of the intervention itself help to collect precise and comprehensive estimates of what it costs to implement the EBI, thus informing cost-effectiveness analyses. In the Sustainment phase, process maps allow teams to identify areas for improved efficiencies and develop strategies to institutionalize an EBI, such as translating resource requirements into staffing plans, job descriptions, and orientation plans for onboarding new staff.

Time-and-motion (TAM) analysis involves estimating labor-related inputs associated with EBI implementation [42]. Key processes involved in EBI implementation are assessed (for example by using process maps), toolkits are designed to track those activities, and identified processes are observed and recorded using the toolkits. This method allows for estimating the time required per activity, which can be used to estimate per-person labor costs. Conducting these observations at multiple time points allows for estimating differences in time (and thus costs) associated with specific activities across implementation agents and evaluating efficiencies over time. TAM data are an integral component of microcosting (i.e., bottom-up cost analysis) and can inform how to assign common resources that do not fit neatly into a single activity or category [43, 44]. Prior studies have demonstrated how related time-driven costing methods allowed for more accurate cost estimation of health interventions, including variation in delivery and associated costs across sites and personnel [45, 46]. Analysis of TAM data captures the total investment of personnel time and resources in EBI implementation and provides insight into how to optimize processes to support sustainment and scale-up.
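
A minimal sketch of how TAM observations feed a bottom-up (micro)cost estimate is shown below. The activity names, observed times, wage rate, and batch size are hypothetical assumptions, not SCORE data:

```python
# Bottom-up costing sketch from repeated time-and-motion observations.
# All activities, times, wages, and batch sizes are hypothetical.

from statistics import mean

# Observed minutes per activity across repeated TAM observations of one batch
observations = {
    "identify_eligible_patients": [12, 15, 10],
    "mail_merge_letters":         [30, 28, 35],
    "label_envelopes":            [20, 22, 18],
    "quality_control":            [10, 12, 11],
}
hourly_wage = 22.0       # loaded wage rate for outreach staff ($/hour), assumed
patients_per_batch = 50  # letters mailed per observed batch, assumed

def labor_cost_per_patient(obs, wage, batch_size):
    """Average minutes per activity -> labor cost per batch -> cost per patient."""
    total_minutes = sum(mean(times) for times in obs.values())
    batch_cost = (total_minutes / 60.0) * wage
    return batch_cost / batch_size

print(round(labor_cost_per_patient(observations, hourly_wage, patients_per_batch), 2))
```

Repeating the observations at multiple time points (replacing the lists above with later rounds of data) supports the comparisons of efficiency across agents and over time described above.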

In the SCORE study, we used process mapping during the Preparation phase to develop the multicomponent intervention, plan for its implementation, and design our TAM analysis. Through consensus discussions with stakeholders [28], we developed “swimlane” process maps, which use lanes (i.e., rows) to delineate which agents perform specific steps and in which settings (e.g., CHCs, laboratories, mailed FIT outreach center). We used these diagrams to identify groups of activities requiring personnel time that could be observed in batches, such as mailing introductory letters. As with our other methods, we considered which steps are research-specific and which would need to be performed outside of the research context for programmatic success, including only the latter in our economic evaluation. For each activity, we developed a TAM costing tool to document the labor steps involved. For example, mailing introductory letters entailed identifying eligible patients, conducting mail-merges, labeling envelopes, and performing quality control. We piloted these tools during the Preparation phase and scheduled periodic observations during the Implementation phase. We used our swimlane diagrams during the Preparation phase to identify other non-labor costs associated with each step and to develop fidelity measures for tracking potential variations or adaptations in EBI implementation, which may have cost implications [12, 47]. For each process step, we documented how the associated costs and fidelity steps were to be measured, the frequency of data collection, and where to report the collected data (see Fig. 1a). In Fig. 1b, we demonstrate how we used these maps in combination with other methods to further develop our measurement tools. Figure 1c provides a hypothetical example of how these maps might be used to integrate quantitative and qualitative results to document gaps in fidelity along with implementation agents’ perception of burden incurred at each process step.

Fig. 1

a Use of swimlane diagrams to identify economic and fidelity measures for the SCORE intervention during the Preparation phase. This is a simplified version of a process flow diagram for patient navigation to follow-up colonoscopy provided as part of the SCORE intervention. Examples are provided of how specific process steps are used to develop cost and fidelity measures and appropriate tools for measuring these constructs. CHC community health center, FIT fecal immunochemical test, GI gastrointestinal, SCORE Scaling Colorectal Cancer Screening Through Outreach, Referral, and Engagement. b Use of swimlane diagrams to inform mixed methods approach to estimating costs of implementing the SCORE intervention. This is a simplified version of a process flow diagram for patient navigation to follow-up colonoscopy provided as part of the SCORE intervention. For individual steps involved in implementing the patient navigation intervention, examples are provided for how diverse types of methods can be used to collect and estimate the required resources to implement that step. CHC community health center, FIT fecal immunochemical test, GI gastrointestinal, SCORE Scaling Colorectal Cancer Screening Through Outreach, Referral, and Engagement. c Example integration and presentation of mixed methods results. This is an example using hypothetical data of how we might integrate the quantitative results of our analysis (in this case, the proportion of patients who received each process step) with qualitative data from implementation agents. The color-coding is used to identify process steps from the process flow diagram included in (a and b) with low (< 70% of patients), moderate (between 70 and 84% of patients), and high (85% of patients or higher) fidelity. This structure can also be used to integrate cost estimates per step with qualitative findings

System support mapping

System support mapping (SSM) is a structured systems thinking method used to elicit participants’ individual responsibilities in EBI implementation, the primary needs and available resources for fulfilling each responsibility, and quick reflections on resource adequacy [48]. Figure 2 provides an SSM example. SSM can be completed with individuals, or in a group—similar to a focus group where the facilitator guides participants through a structured assessment of their individual roles and resource adequacy to perform their roles. Previously used to learn how to support state and local maternal and child health professionals leading complex change initiatives [48], SSM aids in evaluating implementation of complex EBIs by understanding the experiences and perceptions of diverse agents. After identifying all individuals with an implementation role during the Preparation phase, SSM sessions can be conducted during the Implementation phase to collect structured agent feedback on specific activities undertaken by each to implement the EBI, critical needs to achieve each responsibility, specific resources used (with feedback on how useful they were), and suggestions for how to improve support for their implementation activities (perhaps including specific implementation strategies).

Fig. 2

System support mapping (SSM) example. This is a stylized version of a system support mapping (SSM) diagram. In SSM sessions, each individual with a role in evidence-based intervention implementation reflects on each of the topics (e.g., role, responsibilities, etc.) listed in the rings. The squares represent individual notes or ideas per topic area and are connected across the rings to tell complete stories about each specific responsibility or task they undertake related to intervention implementation (each on its own orange square). To accomplish each responsibility or task, they are asked to name critical needs (green notes), resources they rely on to support those needs (blue notes), and, reflecting on how well those resources work, identify specific wishes for how they could be better supported in accomplishing that responsibility or task (yellow notes). Lines interconnect notes within a story about each named responsibility or task. The numbers of rings and notes per ring will vary across implementation agents and implementation studies. Maps can be made in person, with sticky notes, or virtually. In any case, each individual should verbally describe their map since this will enrich the documented map

For the SCORE intervention, we invited CHC clinicians and administrative staff and quality improvement monitors involved in implementation process steps from the swimlane diagrams to participate in SSM sessions. The information gathered will help identify which agents are employing which implementation strategies and, thus, ensure all responsibilities and resources used are appropriately costed in the economic evaluation. Whereas our process maps detailed the specific steps being carried out by implementation agents, SSM allows for identifying possible redundancies, inefficiencies, or misunderstandings about EBI-related responsibilities individuals undertake, unexpectedly resource-heavy or under-supported activities, and further delineation of roles (e.g., which individual conducts each mailed FIT process step among the larger mailed FIT team). Agents’ recommendations for improvement can also be estimated in terms of their expected costs and benefits during the Implementation or Sustainment phases to inform decision-making about EBI adaptation and/or sustainment. Similarly, SSM may identify opportunities to streamline responsibilities or better support staff needs, which can be evaluated in the economic evaluation.

Modeling and sensitivity/uncertainty analysis

Simulation modeling

Simulation modeling can be used as an implementation strategy [28] to project the health and economic impacts of investment in EBIs. Simulation can help to adapt analyses to particular contexts to understand potential implementation gains and losses. For example, Medicaid enrollment is associated with high turnover and coverage gaps [49]. Using simulation, we can build in realistic assumptions about enrollee turnover, and thus more accurately analyze costs and benefits from the perspective of Medicaid decision-makers considering EBI implementation. In the Preparation and Implementation phases, steps for conducting simulation studies include identifying short-term and long-term outcomes of interest and estimating implementation costs using the aforementioned methods (TAM, stakeholder interviews, etc.). Findings from completed simulations can aid in making decisions about appropriate inputs and outputs. Models can also be used to monitor and provide feedback on implementation progress across sites to better reach implementation targets. During the Implementation and Sustainment phases, models can be developed to project the intervention’s population-level impact and cost-effectiveness in the local context and extrapolate these findings into the future or to other settings. Equity issues can be assessed by projecting outcomes for specific subgroups, such as those at increased risk of poor outcomes or for whom the implementation strategies may be inappropriate or infeasible. Distributional cost-effectiveness analysis [50, 51] and related methods for quantifying equity-efficiency tradeoffs [52] can help to understand and address health inequities. Results of equity-focused analyses can inform EBI sustainment in the local context, and the selection and adoption of EBIs and implementation strategies for other settings (i.e., motivating EBI implementation by other organizations during the Exploration phase).
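
As a minimal illustration of building context-specific assumptions (here, Medicaid enrollee turnover) into a simulation, the sketch below uses hypothetical probabilities and costs; it is far simpler than the microsimulation models used in our actual analyses:

```python
# Microsimulation sketch (hypothetical parameters): project how enrollee
# turnover dampens the screening gains attributable to an EBI.

import random

def simulate(n, years, p_screen, p_disenroll, seed=0):
    """Count person-years up-to-date with screening among n enrollees.
    Each year, an enrolled person screens with probability p_screen and
    disenrolls (losing further program exposure) with probability p_disenroll."""
    rng = random.Random(seed)
    up_to_date_years = 0
    for _ in range(n):
        for _ in range(years):
            if rng.random() < p_screen:
                up_to_date_years += 1
            if rng.random() < p_disenroll:
                break  # enrollee leaves the program
    return up_to_date_years

usual_care = simulate(10_000, years=5, p_screen=0.40, p_disenroll=0.20)
with_ebi = simulate(10_000, years=5, p_screen=0.55, p_disenroll=0.20)
gain = with_ebi - usual_care                 # incremental person-years up-to-date
implementation_cost = 150_000                # hypothetical EBI implementation cost ($)
print(round(implementation_cost / gain, 2))  # $ per additional person-year up-to-date
```

Setting `p_disenroll` to zero in both arms shows how ignoring turnover would overstate the gain, which is the kind of context-specific adjustment described above.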

Our Cancer Control PopSim work shows how simulation can be used to project the downstream impact of EBI implementation in higher-risk populations. Using our model, we estimated the cost-effectiveness of multiple EBIs for improving CRC screening and long-term CRC outcomes (e.g., cancers averted) in priority populations, such as African Americans [25], the uninsured [23, 24], and Medicaid enrollees [22,23,24]. Costs of EBI implementation, CRC screening and diagnostic procedures, and CRC treatment were included. Among Oregon Medicaid enrollees, for example, we found three of five simulated EBIs to be cost-effective compared to usual care if Medicaid decision-makers are willing to spend up to $230 per additional year up-to-date on CRC screening [22]. In North Carolina, we identified mailed reminders for Medicaid enrollees and mass media campaigns for African Americans as cost-effective EBIs, costing approximately $15 and $30, respectively, per additional life-year up-to-date on CRC screening [24]. We also showed that expansion of North Carolina’s Medicaid program would more substantially reduce CRC diagnoses among African Americans, compared to non-Hispanic Whites, and result in greater cost savings over the long term due to averted treatment costs [25]. These analyses allow for targeted EBI implementation planning by detailing the funds needed to efficiently address health inequities.
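
The dollar-per-outcome figures above follow from the standard incremental cost-effectiveness calculation; a minimal sketch with purely hypothetical inputs:

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical inputs: the EBI costs $50,000 more than usual care and yields
# 2,000 additional person-years up-to-date on screening.
print(icer(150_000, 100_000, 5_000, 3_000))  # → 25.0 dollars per additional year
```

An EBI is then judged cost-effective if this ratio falls below the decision-maker's willingness-to-pay per additional year up-to-date.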

For SCORE specifically, we plan to develop a cost-effectiveness model focused on our target CHC population, with more detail on how the different intervention components and implementation strategies affect success at micro-level process steps. The model can then help to project the downstream impacts associated with our outcomes (fidelity, reach, etc.) at each process step.

Sensitivity/uncertainty analysis

Estimation and simulation of EBI implementation costs and benefits are critical to evaluating whether investment in EBI implementation should continue. Yet, these decisions remain challenging due to uncertainty about the future and conflicting priorities. While the base-case economic evaluation (i.e., analysis using core model assumptions and most likely input values) may indicate that EBI implementation is cost-effective for improving outcomes, questions remain about the circumstances under which this is true. The overall cost-effectiveness of EBI implementation may vary across agents, populations, settings, time horizons, analytic perspectives, model assumptions, and implementation outcomes (e.g., fidelity, reach). Sensitivity/uncertainty analysis can serve multiple purposes, such as estimating how strongly a particular variable influences overall cost-effectiveness and understanding how results may change under differing parameter values and structural assumptions across contexts [53]. These analyses can help stakeholders broaden their thinking about whether to implement or sustain an EBI from a simple yes/no decision based on base-case estimates to consideration of the full range of plausible estimates and assumptions that may affect decision-making.

Many types of sensitivity/uncertainty analyses can be conducted using the methods for estimating costs and benefits previously described. Examples include scenario analysis in which variation in conclusions is assessed using specific values for uncertain parameters; threshold analysis to identify the particular value(s) at which EBI implementation becomes or is no longer cost-effective; and probabilistic sensitivity analysis where multiple uncertain parameters are varied simultaneously using distributions of possible estimates [54]. Regardless of which analyses are conducted, areas of uncertainty related to further EBI implementation in the current setting or in other settings should be proactively identified (potentially through systems mapping and stakeholder engagement) as EBIs are planned and implemented. For example, capturing variations in TAM estimates by agent type and over time can provide ranges of estimates for conducting a best-case/worst-case analysis. Questions of most importance to decision-makers about sustaining cost-effective EBIs and implementation strategies should be prioritized; for example, depending on context and perspective, the outcomes of focus in an economic evaluation may vary and may not include patient-level utility estimates. Analytic prioritization could also include comparing different scenarios of how personnel, start-up funds, and other resources are allocated across EPIS phases and their relative impact on cost-effectiveness over time. Varying the analytic time horizon could also reveal important insights, such as how long EBIs may need to be implemented to achieve objectives. The impact of uncertainty on outcomes of interest and sensitivity of the results to changes in EBI implementation should be evaluated for the local context in the Implementation phase and to support long-term planning during the Sustainment phase.
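
A probabilistic sensitivity analysis of the kind described can be sketched as follows: uncertain inputs are drawn repeatedly from assumed distributions, the incremental cost-effectiveness ratio is recomputed for each draw, and the share of draws falling under a willingness-to-pay threshold is reported. The distributions and the $230 threshold below are illustrative (the threshold echoes the Oregon Medicaid example); none are actual study parameters.

```python
import random

random.seed(7)

WTP = 230.0    # hypothetical willingness-to-pay, $ per additional year up-to-date
N_DRAWS = 5_000

def draw_icer():
    """One Monte Carlo draw of the ICER under assumed input distributions."""
    delta_cost = random.gauss(50_000, 10_000)   # incremental cost ($)
    delta_effect = random.gauss(300, 60)        # incremental years up-to-date
    return delta_cost / delta_effect

draws = [draw_icer() for _ in range(N_DRAWS)]
p_cost_effective = sum(1 for d in draws if 0 < d <= WTP) / N_DRAWS
print(f"P(cost-effective at WTP ${WTP:.0f}): {p_cost_effective:.2f}")
```

Repeating the calculation at several willingness-to-pay values yields the familiar cost-effectiveness acceptability curve, which communicates decision uncertainty more fully than a single base-case ratio.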

Our Cancer Control PopSim model outputs demonstrate how sensitivity/uncertainty analysis can be used to consider the impact of tradeoffs related to EBI implementation. We used this model to evaluate the effectiveness of multicomponent CRC screening interventions in reaching national screening targets (Hicklin et al: "Assessing the impact of multicomponent interventions on colorectal cancer screening through simulation: what would it take to reach national screening targets?", in progress). We varied the percentage of the target population reached by each intervention, initially considering the differences in impact on selected outcomes assuming 25%, 50%, 75%, and 100% intervention reach levels. We then conducted a threshold analysis to determine the specific level of reach needed to achieve screening targets under different circumstances. Our analysis demonstrated that the expected downstream effects of implementing EBIs are driven by multiple factors, including intervention effectiveness, intervention reach, implementation costs, and equity considerations (i.e., which subpopulations are targeted by which interventions). With our SCORE model, we will use data on implementation costs and outcomes associated with individual process steps, together with input from stakeholders, to explore how intervention operations and implementation strategies might be adapted to support sustainment. Using mixed methods to collect comprehensive and context-specific costs across implementation outcomes will greatly aid in producing meaningful estimates for weighing tradeoffs over the short and long term.
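
A threshold analysis of this type can be sketched as a simple scan over reach levels; the linear response function and parameter values below are illustrative assumptions, not our model's estimates.

```python
# Illustrative response function and parameters (assumptions, not model output).
BASELINE_SCREENED = 0.60  # fraction of the population up-to-date without the EBI
TARGET = 0.80             # screening target to reach
UPTAKE = 0.60             # fraction of reached, unscreened people who get screened

def screened(reach):
    """Population screening fraction if a share `reach` of the population is reached."""
    return BASELINE_SCREENED + reach * UPTAKE * (1 - BASELINE_SCREENED)

# Scan reach levels (0%, 1%, ..., 100%) for the first one meeting the target.
threshold = next(r / 100 for r in range(101) if screened(r / 100) >= TARGET)
print(f"Minimum reach to hit the {TARGET:.0%} target: {threshold:.0%}")
```

In practice the response function would come from the simulation model itself, and the scan would be repeated across intervention scenarios and subpopulations to map which circumstances make the target attainable.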

Integration of methods

The methods described above can be used individually to support economic evaluations of implementation research or, preferably, in combination with each other. Figure 3 provides a detailed schematic of our mixed-method approach to the SCORE economic evaluation. While it is not necessary for economic evaluations to include this level of complexity, our goal was to show how diverse methods can inform each other across implementation phases when implementing complex EBIs. Below, we provide guidance on how to account for possible adaptations and how to visualize data collected through economic evaluations—both of which are important to supporting decision-making about EBI implementation.

Fig. 3

Example schematic for clarifying cost-related activities for economic evaluation of SCORE intervention across EPIS phases. This figure depicts how we integrated quantitative, qualitative, and systems approaches to estimate the costs and impact of implementing the SCORE intervention across implementation phases. Economic evaluations of other EBIs may vary considerably in the number and types of methods used, as well as how these methods are integrated, for multiple reasons (e.g., available resources, local context, intervention complexity, etc.). We included a highly detailed version to help inform planning for other economic evaluations. Bidirectional arrows indicate that the methods inform each other in a more cyclical process, and brackets indicate that multiple methods are being used simultaneously

Adaptation

Economic evaluations of implementation studies should seek to measure and optimize adaptation of EBI implementation [55]. As complex interventions are implemented, the interventions and implementation strategies themselves also change, potentially due to staff turnover, lessons learned from earlier phases, environmental or policy changes, new evidence-based guidance, and shifts in available resources. These adaptations may affect EBI implementation costs and/or benefits, or may be made deliberately to improve them. Thus, monitoring of adaptations should be a continuous process across EPIS phases. Capturing stakeholder and implementation agent feedback, mapping process steps, using periodic reflections, and other mixed-method approaches are all useful for identifying and costing these changes.

Visualizing economic outputs

Visualizing data derived through economic evaluations is important for informing and reflecting on decision-making about EBI implementation. Figure 4 provides examples of how we displayed our Cancer Control PopSim data to guide mobilization of limited resources to achieve the greatest gains in CRC screening in North Carolina. We created maps displaying the expected change in CRC screening by zip code associated with 5-year implementation of multicomponent interventions (Fig. 4a). These maps indicate which regions are expected to most benefit from EBI implementation and which approaches (e.g., increasing reach versus changing intervention components) are likely to be most impactful. Then, we created a value frontier to report the cost per additional person up-to-date on CRC screening of our intervention scenarios (Fig. 4b), building on cost and effectiveness estimates from prior studies [20, 56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76]. Value frontiers help to report the cost per health outcome gained in cases where there is not an established willingness-to-pay threshold for determining cost-effectiveness. Decision-makers can identify which interventions are most cost-effective based on their established budget, or weigh the potential advantages or disadvantages of changing their budget. Visualization tools can be used and updated across EPIS phases as new data become available to guide which intervention(s) are adopted, how they are implemented, whether to adapt implementation strategies, and how best to extend implementation into new settings or over time.
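
Constructing a value frontier amounts to discarding dominated and extendedly dominated scenarios and reporting the incremental cost-effectiveness ratios along the remaining lower envelope. A sketch with hypothetical scenarios (not the estimates underlying Fig. 4b):

```python
# Hypothetical scenarios: name -> (incremental cost $, additional people UTD).
scenarios = {
    "mailed reminders": (100_000, 8_000),
    "mailed FIT": (400_000, 20_000),
    "mass media": (450_000, 15_000),   # costs more, does less than mailed FIT
    "navigation": (900_000, 22_000),
}

pts = sorted((c, e, n) for n, (c, e) in scenarios.items())  # sort by cost

# 1) Strong dominance: as cost rises along the frontier, effect must rise too.
undominated, best_effect = [], -1
for c, e, n in pts:
    if e > best_effect:
        undominated.append((c, e, n))
        best_effect = e

# 2) Extended dominance: keep the lower convex hull, so ICERs strictly increase.
frontier = [(0.0, 0.0, "usual care")]
for c, e, n in undominated:
    while len(frontier) > 1:
        (c1, e1, _), (c2, e2, _) = frontier[-2], frontier[-1]
        if (c2 - c1) / (e2 - e1) >= (c - c2) / (e - e2):
            frontier.pop()  # middle point is extendedly dominated
        else:
            break
    frontier.append((c, e, n))

for (c1, e1, _), (c2, e2, n) in zip(frontier, frontier[1:]):
    print(f"{n}: ${(c2 - c1) / (e2 - e1):,.2f} per additional person up-to-date")
```

Decision-makers can then read off the frontier the most effective scenario affordable within their budget, without requiring a pre-specified willingness-to-pay threshold.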

Fig. 4

a Percent of eligible North Carolina residents up-to-date on CRC screening by zip code assuming different types of interventions, levels of intervention reach, and health insurance policy after 5 years of intervention. A: Status quo scenario (i.e., absence of intervention or health policy change). B: Implementation of mailed FIT-based multicomponent interventions, assuming 25% reach of eligible population and no Medicaid expansion. C: Implementation of multicomponent interventions prioritizing patient navigation to screening colonoscopy, assuming 25% reach of eligible population and no Medicaid expansion. D: Implementation of mailed FIT-based multicomponent interventions, assuming 75% reach of eligible population and no Medicaid expansion. E: Implementation of multicomponent interventions prioritizing patient navigation to screening colonoscopy, assuming 75% reach of eligible population and no Medicaid expansion. F: Implementation of mailed FIT-based multicomponent interventions, assuming 25% reach of eligible population and Medicaid expansion. G: Implementation of multicomponent interventions prioritizing patient navigation to screening colonoscopy, assuming 25% reach of eligible population and Medicaid expansion. H: Implementation of mailed FIT-based multicomponent interventions, assuming 75% reach of eligible population and Medicaid expansion. I: Implementation of multicomponent interventions prioritizing patient navigation to screening colonoscopy, assuming 75% reach of eligible population and Medicaid expansion. Maps can help to guide decision-making about where and how to best invest limited resources to improve health outcomes. These maps can help to assess the potential impact of various combinations of approaches for increasing CRC screening at the population level by region, all of which have important cost and resource implications. b. Value frontier based on multicomponent CRC screening intervention implementation costs over 5 years. 
This figure, which is shown for illustrative purposes, compares the incremental number of age-eligible North Carolina residents up-to-date (UTD) on CRC screening (x-axis) and the incremental implementation costs (y-axis) for multicomponent intervention scenarios after 5 years. The incremental cost-effectiveness ratios (ICERs) are reported for each scenario above the data point. Cost and effectiveness estimates are based on prior CRC screening intervention studies [20, 56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76]. Costs of screening tests and required follow-up are excluded. We assumed the level of reach that would be feasible for each intervention scenario. The target population for the scenarios includes all age-eligible state residents, except for one scenario which only reaches Medicaid enrollees

Conclusions

The use of multiple, diverse methods across implementation phases when conducting economic evaluations of complex interventions is important for setting boundaries, collecting rich, context-specific estimates of EBI implementation costs and downstream impacts, informing decisions about EBI investment, and understanding adaptations along the way. The visual display of data collected through these methods can further aid in weighing tradeoffs in how and where to invest limited resources.

We focused on estimating the cost and impact of complex EBIs and implementation strategies, and using this information to inform implementation and sustainment, across EPIS phases. The proposed methods can be supplemented with more traditional costing methods (e.g., microcosting), following best practice guidelines [77]. Economic evaluations of some simpler EBIs would also benefit from the described methods. For example, process maps depicting simpler EBIs may clarify the process steps and implementation agents. However, the effort involved in process flow diagramming may not be prudent if there are few swimlanes and/or process steps.

In our experience with SCORE and Cancer Control PopSim, combining the described methods provided more comprehensive data than we would have obtained using the methods in isolation. Additional research is needed to assess potential patterns in which methods work well together, in what order the methods should optimally be conducted, and which methods are most feasible given resource constraints. There may also be additional methods not described here that could contribute to the planning and execution of economic evaluations of implemented EBIs. Our scope was limited to understanding the value of integrating different types of methods within economic evaluations and providing an initial menu of methods and their functions to select from per implementation phase.

In some contexts, resources may not be available to implement comprehensive sets of economic methods. We encourage teams in these situations to, at a minimum, have staff dedicated to collecting and tracking implementation costs and benefits. Use of methods that can be integrated into existing work processes, such as periodic reflections, might be prioritized. We also emphasize that it is essential for funders to invest in the resources needed to conduct economic evaluations in lower-resource settings—otherwise, there is a greater risk of continuing to implement and sustain sub-optimal interventions or failing to learn about more cost-effective approaches.

The integration of quantitative and qualitative methods when estimating EBI implementation costs and benefits allows for more nuanced data collection and thoughtful considerations of how to efficiently and equitably support public health initiatives. By extending our analytic options for economic evaluations, we have an opportunity to improve the study of EBI implementation and subsequently, patient and societal outcomes.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ACCSIS:

Accelerating CRC Screening and Follow-up through Implementation Science

PopSim:

Population Simulation

CDC:

Centers for Disease Control and Prevention

CHC:

Community health center

CRC:

Colorectal cancer

EBI:

Evidence-based intervention

FIT:

Fecal immunochemical testing

NCI:

National Cancer Institute

SCORE:

Scaling Colorectal Cancer Screening Through Outreach, Referral, and Engagement

SSM:

System support mapping

TAM:

Time-and-motion

References

  1. Dopp AR, Mundey P, Beasley LO, Silovsky JF, Eisenberg D. Mixed-method approaches to strengthen economic evaluations in implementation research. Implement Sci. 2019;14(1):2.

  2. Roberts SLE, Healey A, Sevdalis N. Use of health economic evaluation in the implementation and improvement science fields-a systematic literature review. Implement Sci. 2019;14(1):72.

  3. Barnett ML, Dopp AR, Klein C, Ettner SL, Powell BJ, Saldana L. Collaborating with health economists to advance implementation science: a qualitative study. Implement Sci Commun. 2020;1:82.

  4. Saldana L, Chamberlain P, Bradford WD, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion. Child Youth Serv Rev. 2014;39:177–82.

  5. Eisman AB, Kilbourne AM, Dopp AR, Saldana L, Eisenberg D. Economic evaluation in implementation science: Making the business case for implementation strategies. Psychiatry Res. 2020;283:112433.

  6. Scholl I, Kobrin S, Elwyn G. “All about the money?” A qualitative interview study examining organizational- and system-level characteristics that promote or hinder shared decision-making in cancer care in the United States. Implement Sci. 2020;15(1):81.

  7. Reeves P, Edmunds K, Searles A, Wiggers J. Economic evaluations of public health implementation-interventions: a systematic review and guideline for practice. Public Health. 2019;169:101–13.

  8. Hoomans T, Evers SM, Ament AJ, Hübben MW, van der Weijden T, Grimshaw JM, et al. The methodological quality of economic evaluations of guideline implementation into clinical practice: a systematic review of empiric studies. Value Health. 2007;10(4):305–16.

  9. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess (Winch Eng). 2004;8(6):iii-iv, 1-72.

  10. Vicki B, Huong T, Miranda B, Rachel L, Marj M. A narrative review of economic constructs in commonly used implementation and scale-up theories, frameworks and models. Health Res Policy Syst. 2020;18(1):115.

  11. Wagner TH. Rethinking how we measure costs in implementation research. J Gen Intern Med. 2020;35(Suppl 2):870–4.

  12. Wagner TH, Yoon J, Jacobs JC, So A, Kilbourne AM, Yu W, et al. Estimating costs of an implementation intervention. Med Decis Making. 2020;40(8):959–67.

  13. Perez Jolles M, Lengnick-Hall R, Mittman BS. Core functions and forms of complex health interventions: a patient-centered medical home illustration. J Gen Intern Med. 2019;34(6):1032–8.

  14. Tanner-Smith EE, Grant S. Meta-analysis of complex interventions. Annu Rev Public Health. 2018;39:135–51.

  15. Trompette J, Kivits J, Minary L, Alla F. Dimensions of the Complexity of Health Interventions: What Are We Talking About? A Review. Int J Environ Res Public Health. 2020;17(9):3069. Published 2020 Apr 28. https://doi.org/10.3390/ijerph17093069.

  16. Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the exploration, preparation, implementation, sustainment (EPIS) framework. Implement Sci. 2019;14(1):1.

  17. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

  18. Sohn H, Tucker A, Ferguson O, Gomes I, Dowdy D. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework. Implement Sci. 2020;15(1):86.

  19. Goodrich DE, Miake-Lye I, Braganza MZ, Wawrin N, Kilbourne AM. The QUERI Roadmap for Implementation and Quality Improvement. Washington (DC): Department of Veterans Affairs (US); 2020.

  20. Tangka FKL, Subramanian S. Importance of implementation economics for program planning-evaluation of CDC's colorectal cancer control program. Eval Program Plann. 2017;62:64–6.

  21. Malo TL, Correa SY, Moore AA, Ferrari RM, Leeman J, Brenner AT, et al. Centralized colorectal cancer screening outreach and patient navigation for vulnerable populations in North Carolina: study protocol for the SCORE randomized controlled trial. Implement Sci Commun. 2021;2(1):113.

  22. Davis MM, Nambiar S, Mayorga ME, Sullivan E, Hicklin K, O'Leary MC, Dillon K, Hassmiller Lich K, Gu Y, Lind BK, Wheeler SB. Mailed FIT (fecal immunochemical test), navigation or patient reminders? Using microsimulation to inform selection of interventions to increase colorectal cancer screening in Medicaid enrollees. Prev Med. 2019;129S:105836. https://doi.org/10.1016/j.ypmed.2019.105836. Epub 2019 Oct 18.

  23. Hassmiller Lich K, O'Leary MC, Nambiar S, Townsley RM, Mayorga ME, Hicklin K, Frerichs L, Shafer PR, Davis MM, Wheeler SB. Estimating the impact of insurance expansion on colorectal cancer and related costs in North Carolina: A population-level simulation analysis. Prev Med. 2019;129S:105847. https://doi.org/10.1016/j.ypmed.2019.105847. Epub 2019 Oct 27.

  24. Hassmiller Lich K, Cornejo DA, Mayorga ME, Pignone M, Tangka FK, Richardson LC, et al. Cost-effectiveness analysis of four simulated colorectal cancer screening interventions, North Carolina. Prev Chronic Dis. 2017;14:E18.

  25. Powell W, Frerichs L, Townsley R, Mayorga M, Richmond J, Corbie-Smith G, et al. The potential impact of the Affordable Care Act and Medicaid expansion on reducing colorectal cancer screening disparities in African American males. PLoS One. 2020;15(1):e0226942.

  26. Wheeler SB, Leeman J, Hassmiller Lich K, Tangka FKL, Davis MM, Richardson LC. Data-powered participatory decision making: leveraging systems thinking and simulation to guide selection and implementation of evidence-based colorectal cancer screening interventions. Cancer J (Sudbury, Mass). 2018;24(3):136–43.

  27. Çağlayan Ç, Terawaki H, Chen Q, Rai A, Ayer T, Flowers CR. Microsimulation modeling in oncology. JCO Clin Cancer Inf. 2018;2:1–11.

  28. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.

  29. National Colorectal Cancer Roundtable. Colorectal cancer screening rates reach 44.1% in FQHCs in 2018 2020 [Available from: https://nccrt.org/colorectal-cancer-screening-rates-reach-44-1-in-fqhcs-in-2018/]. Accessed 7 Feb 2022.

  30. Siegel RL, Miller KD, Goding Sauer A, Fedewa SA, Butterly LF, Anderson JC, et al. Colorectal cancer statistics, 2020. CA Cancer J Clin. 2020;70(3):145–64.

  31. Muthukrishnan M, Arnold LD, James AS. Patients’ self-reported barriers to colon cancer screening in federally qualified health center settings. Prev Med Rep. 2019;15:100896.

  32. Schneider JL, Rivelli JS, Gruss I, Petrik AF, Nielson CM, Green BB, et al. Barriers and facilitators to timely colonoscopy completion for safety net clinic patients. Am J Health Behav. 2020;44(4):460–72.

  33. Leeman J, Rohweder C, Lee M, Brenner A, Dwyer A, Ko LK, et al. Aligning implementation science with improvement practice: a call to action. Implement Sci Commun. 2021;2(1):99.

  34. Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL. Methods for the economic evaluation of health care programmes. 3rd ed. Oxford: Oxford University Press; 2005.

  35. Sanders GD, Neumann PJ, Basu A, Brock DW, Feeny D, Krahn M, et al. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: second panel on cost-effectiveness in health and medicine. Jama. 2016;316(10):1093–103.

  36. Quanbeck A. Using stakeholder values to promote implementation of an evidence-based mobile health intervention for addiction treatment in primary care settings. JMIR Mhealth Uhealth. 2019;7(6):e13301.

  37. Finley EP, Huynh AK, Farmer MM, Bean-Mayberry B, Moin T, Oishi SM, et al. Periodic reflections: a method of guided discussions for documenting implementation phenomena. BMC Med Res Methodol. 2018;18(1):153.

  38. Tai BB, Bae YH, Le QA. A Systematic Review of Health Economic Evaluation Studies Using the Patient's Perspective. Value Health. 2016;19(6):903–8.

  39. Jonas DE, Russell LB, Sandler RS, Chou J, Pignone M. Value of patient time invested in the colonoscopy screening process: time requirements for colonoscopy study. Med Decis Making. 2008;28(1):56–65.

  40. Agency for Healthcare Research and Quality. Developing a high-level process map and swim-lane diagram: project RED (Re-Engineered Discharge) training program Rockville, MD2011 [updated August 2011. Available from: https://archive.ahrq.gov/professionals/systems/hospital/red/swimlane.html]. Accessed 7 Feb 2022.

  41. SaferPak. Basic Tools for Process Improvement 2006 [Available from: http://saferpak.com/flowchart_articles/howto_flowchart.pdf]. Accessed 7 Feb 2022.

  42. Leafloor CW, Lochnan HA, Code C, Keely EJ, Rothwell DM, Forster AJ, et al. Time-motion studies of internal medicine residents’ duty hours: a systematic review and meta-analysis. Adv Med Educ Pract. 2015;6:621–9.

  43. Chapel JM, Wang G. Understanding cost data collection tools to improve economic evaluations of health interventions. Stroke Vasc Neurol. 2019;4(4):214–22.

  44. Frick KD. Microcosting quantity data collection methods. Med Care. 2009;47(7 Suppl 1):S76–81.

  45. Tibor LC, Schultz SR, Menaker R, Weber BD, Ness J, Smith P, et al. Improving efficiency using time-driven activity-based costing methodology. J Am Coll Radiol. 2017;14(3):353–8.

  46. Cidav Z, Mandell D, Pyne J, Beidas R, Curran G, Marcus S. A pragmatic method for costing implementation strategies using time-driven activity-based costing. Implement Sci. 2020;15(1):28.

  47. Feely M, Seay KD, Lanier P, et al. Measuring Fidelity in Research Studies: A Field Guide to Developing a Comprehensive Fidelity Measurement System. Child Adolesc Soc Work J. 2018;35:139–52. https://doi.org/10.1007/s10560-017-0512-6.

  48. Calancie L, Margolis L, Chall SA, Mullenix A, Chaudhry A, Hassmiller LK. System support mapping: a novel systems thinking tool applied to assess the needs of maternal and child health title V professionals and their partners. J Public Health Manag Pract. 2020;26(4):E42–e53.

  49. Swartz K, Short PF, Graefe DR, Uberoi N. Reducing Medicaid churning: extending eligibility for twelve months or to end of calendar year is most effective. Health Aff. 2015;34(7):1180–7.

  50. Cookson R, Mirelman AJ, Griffin S, Asaria M, Dawkins B, Norheim OF, et al. Using cost-effectiveness analysis to address health equity concerns. Value Health. 2017;20(2):206–12.

  51. Asaria M, Griffin S, Cookson R. Distributional cost-effectiveness analysis: a tutorial. Med Decis Making. 2016;36(1):8–19.

  52. Baeten SA, Baltussen RM, Uyl-de Groot CA, Bridges J, Niessen LW. Incorporating equity-efficiency interactions in cost-effectiveness analysis-three approaches applied to breast cancer control. Value Health. 2010;13(5):573–9.

  53. Briggs A, Sculpher M, Buxton M. Uncertainty in the economic evaluation of health care technologies: the role of sensitivity analysis. Health Econ. 1994;3(2):95–104.

  54. Sculpher MJ, Basu A, Kuntz KM, Meltzer DO. Reflecting uncertainty in cost-effectiveness analysis. Cost-Effectiveness in Health and Medicine; 2016.

  55. Escoffery C, Lebow-Skelley E, Haardoerfer R, Boing E, Udelson H, Wood R, et al. A systematic review of adaptations of evidence-based public health interventions globally. Implement Sci. 2018;13(1):125.

  56. Levy BT, Xu YH, Daly JM, Ely JW. A randomized controlled trial to improve colon cancer screening in rural family medicine: an iowa research network (IRENE) Study. J Am Board Fam Med. 2013;26(5):486–97.

  57. Jean-Jacques M, Kaleba EO, Gatta JL, Gracia G, Ryan ER, Choucair BN. Program to improve colorectal cancer screening in a low-income, racially diverse population: a randomized controlled trial. Ann Fam Med. 2012;10(5):412–7.

  58. Myers RE, Bittner-Fagan H, Daskalakis C, Sifri R, Vernon SW, Cocroft J, et al. A Randomized Controlled Trial of a Tailored Navigation and a Standard Intervention in Colorectal Cancer Screening. Cancer Epidemiol Biomarkers Prev. 2013;22(1):109–17.

  59. Percac-Lima S, Grant RW, Green AR, Ashburner JM, Gamba G, Oo S, et al. A culturally tailored navigator program for colorectal cancer screening in a community health center: a randomized, controlled trial. J Gen Intern Med. 2009;24(2):211–7.

  60. Honeycutt S, Green R, Ballard D, Hermstad A, Brueder A, Haardorfer R, et al. Evaluation of a patient navigation program to promote colorectal cancer screening in rural Georgia, USA. Cancer. 2013;119(16):3059–66.

  61. Dougherty MK, Brenner AT, Crockett SD, Gupta S, Wheeler SB, Coker-Schwimmer M, et al. Evaluation of interventions intended to increase colorectal cancer screening rates in the United States: a systematic review and meta-analysis. JAMA Intern Med. 2018;178(12):1645–58.

  62. Gupta S, Halm EA, Rockey DC, Hammons M, Koch M, Carter E, et al. Comparative effectiveness of fecal immunochemical test outreach, colonoscopy outreach, and usual care for boosting colorectal cancer screening among the underserved: a randomized clinical trial. JAMA Intern Med. 2013;173(18):1725–32.

  63. Goldman SN, Liss DT, Brown T, Lee JY, Buchanan DR, Balsley K, et al. Comparative effectiveness of multifaceted outreach to initiate colorectal cancer screening in community health centers: a randomized controlled trial. J Gen Intern Med. 2015;30(8):1178–84.

  64. Phillips L, Hendren S, Humiston S, Winters P, Fiscella K. Improving breast and colon cancer screening rates: a comparison of letters, automated phone calls, or both. J Am Board Fam Med. 2015;28(1):46–54.

  65. Liss DT, French DD, Buchanan DR, Brown T, Magner BG, Kollar S, et al. Outreach for annual colorectal cancer screening: a budget impact analysis for community health centers. Am J Prev Med. 2016;50(2):e54–61.

  66. Schlichting JA, Mengeling MA, Makki NM, Malhotra A, Halfdanarson TR, Klutts JS, et al. Increasing colorectal cancer screening in an overdue population: participation and cost impacts of adding telephone calls to a FIT mailing program. J Community Health. 2014;39(2):239–47.

  67. Lairson DR, Dicarlo M, Myers RE, Wolf T, Cocroft J, Sifri R, et al. Cost-effectiveness of targeted and tailored interventions on colorectal cancer screening use. Cancer. 2008;112(4):779–88.

  68. Lairson DR, DiCarlo M, Deshmuk AA, Fagan HB, Sifri R, Katurakes N, et al. Cost-effectiveness of a standard intervention versus a navigated intervention on colorectal cancer screening use in primary care. Cancer. 2014;120(7):1042–9.

  69. Meenan RT, Anderson ML, Chubak J, Vernon SW, Fuller S, Wang CY, et al. An economic evaluation of colorectal cancer screening in primary care practice. Am J Prev Med. 2015;48(6):714–21.

  70. Jandorf L, Stossel LM, Cooperman JL, Zivin JG, Ladabaum U, Hall D, et al. Cost analysis of a patient navigation system to increase screening colonoscopy adherence among urban minorities. Cancer. 2013;119(3):612–20.

  71. Lee JK, Groessl EJ, Ganiats TG, Ho SB. Cost-effectiveness of a mailed educational reminder to increase colorectal cancer screening. BMC Gastroenterol. 2011;11:93. https://doi.org/10.1186/1471-230X-11-93.

  72. Wheeler SB, O'Leary MC, Rhode J, Yang JY, Drechsel R, Plescia M, et al. Comparative cost-effectiveness of mailed fecal immunochemical testing (FIT)-based interventions for increasing colorectal cancer screening in the Medicaid population. Cancer. 2020;126(18):4197–208.

  73. Dacus HLM, Wagner VL, Collins EA, Matson JM, Gates M, Hoover S, et al. Evaluation of patient-focused interventions to promote colorectal cancer screening among New York State Medicaid managed care patients. Cancer. 2018;124(21):4145–53.

  74. Kim KE, Randal F, Johnson M, Quinn M, Maene C, Hoover S, et al. Economic assessment of patient navigation to colonoscopy-based colorectal cancer screening in the real-world setting at the University of Chicago Medical Center. Cancer. 2018;124(21):4137–44.

  75. Larkey LK, Herman PM, Roe DJ, Garcia F, Lopez AM, Gonzalez J, et al. A cancer screening intervention for underserved Latina women by lay educators. J Womens Health (Larchmt). 2012;21(5):557–66.

  76. Misra S, Lairson DR, Chan W, Chang YC, Bartholomew LK, Greisinger A, et al. Cost effectiveness of interventions to promote screening for colorectal cancer: a randomized trial. J Prev Med Public Health. 2011;44(3):101–10.

  77. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Value Health. 2013;16(2):e1–5.

Acknowledgements

We would like to acknowledge the Economics and Implementation Science Workgroup for their formative input and internal review process. The authors would like to specifically thank Dr. Ties Hoomans for his review of and insightful feedback regarding this manuscript; Drs. Heather Gold, Gila Neta, and Todd Wagner for their leadership of the Economics and Implementation Science Workgroup and thoughtful review of this manuscript; Dr. Karen Hicklin for her assistance with the Cancer Control PopSim data analysis and visualization; and Karl Johnson and Alexis Moore for their coordination of SCORE cost-related activities.

Funding

This work was supported through the National Cancer Institute Moonshot Accelerating CRC Screening and Follow-up through Implementation Science (ACCSIS) Program Initiative, Grant Nos. 1UG3CA233251 and 4UH3CA233251 (PI: Daniel S. Reuland, MD MPH, University of North Carolina at Chapel Hill). This study was also supported by Cooperative Agreement Numbers 3U48-DP005017 (PI: Alice Ammerman, DrPH, University of North Carolina at Chapel Hill), U48-DP006400 (PI: Alice Ammerman, DrPH, University of North Carolina at Chapel Hill), and U48-DP005006 (PI: Tom Becker, MD, PhD, Oregon Health & Science University) from the Centers for Disease Control and Prevention (CDC) Prevention Research Centers (PRC) Program and the National Cancer Institute (NCI), as part of the Cancer Prevention and Control Research Network (CPCRN). MCO was supported by the Cancer Care Quality Training Program, University of North Carolina at Chapel Hill (Grant No. T32-CA-116339, PIs: Ethan Basch, Stephanie Wheeler). The content provided is solely the responsibility of the authors and does not necessarily represent the official views of the funders.

Author information

Authors and Affiliations

Authors

Contributions

MCO conceptualized and drafted the manuscript. KHL and SBW conceptualized and substantively contributed to and revised the manuscript. LF contributed to data visualization and substantively revised the manuscript. JL and DSR substantively revised the manuscript. All authors read and approved the final manuscript.

Authors’ information

Not applicable.

Corresponding author

Correspondence to Meghan C. O’Leary.

Ethics declarations

Ethics approval and consent to participate

Not applicable. This manuscript does not include any human subjects research.

Consent for publication

Not applicable. This manuscript does not contain data from any individual person.

Competing interests

SBW receives funding paid to her institution from Pfizer Foundation and AstraZeneca for unrelated work. The other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

O’Leary, M.C., Hassmiller Lich, K., Frerichs, L. et al. Extending analytic methods for economic evaluation in implementation science. Implementation Sci 17, 27 (2022). https://doi.org/10.1186/s13012-022-01192-w


Keywords