Published Online: https://doi.org/10.1176/appi.ps.202100453

Abstract

Objective:

The authors examined whether stakeholders in behavioral health care differ in their preferences for strategies that support the implementation of evidence-based practices (EBPs).

Methods:

Using data collected in March and April 2019 in a survey of stakeholders in Philadelphia Medicaid’s behavioral health care system, the authors compared empirical Bayes preference weights for implementation strategies across clinicians, supervisors, agency executives, and payers.

Results:

Preferences for implementation strategies overlapped among the stakeholders (N=357 survey respondents). Financial incentives were consistently ranked as the most useful strategy for implementing EBPs and performance feedback as the least useful. However, areas of divergence were identified. For example, payers preferred compensation for EBP delivery, whereas clinicians considered compensation for time spent preparing for EBPs equally useful.

Conclusions:

The observed variation in stakeholder preferences for strategies to implement EBPs may shed light on why the ongoing shift from volume to value in behavioral health care has had mixed results.

HIGHLIGHTS

  • Clinicians, supervisors, agency executives, and payers associated with Philadelphia Medicaid’s behavioral health care system agreed that financial incentives were the most useful strategy to help clinicians deliver evidence-based practices (EBPs).

  • The four stakeholder groups also agreed that performance feedback was the least useful strategy.

  • Clinicians ranked financial incentives for EBP delivery and preparation time equally, whereas payers preferred payment for delivery over preparation time.

The implementation of evidence-based practices (EBPs) is a central component of the ongoing shift from volume to value in behavioral health care, which entails moving from a focus solely on the quantity of care provided to a focus on the quality of care provided (1). Achieving successful uptake of EBPs requires the participation of various stakeholders, including clinicians, supervisors, agency executives, and payers, for whom the costs and benefits of delivering EBPs likely differ (2). These differences may influence stakeholders’ preferences for implementation strategies, which can be defined as the active approaches used to increase the adoption and sustainment of interventions (3).

Previous studies have noted divergent preferences for implementation strategies across different stakeholders in behavioral health care, and these differences could represent barriers to effectively delivering EBPs (2, 4). Therefore, a better understanding of stakeholders’ inclinations toward implementation strategies, including financial incentives and performance feedback, could unveil new opportunities to enhance the quality of behavioral health care (5, 6).

In this study, we leveraged a choice experiment survey on different strategies for implementing EBPs; the survey was administered to multiple stakeholder groups in a large publicly funded behavioral health care system. The survey elicited preferences for 14 implementation strategies that aim to support the delivery of EBPs, derived from an innovation tournament (7) and the Expert Recommendations for Implementing Change (ERIC) taxonomy (8), enabling us to measure preferences within and across stakeholder groups.

Methods

We used a best-worst scaling choice experiment in a survey conducted in March and April 2019 to measure preferences for implementation strategies for EBP delivery among different stakeholders associated with Community Behavioral Health, the sole behavioral health managed-care organization serving Medicaid beneficiaries in Philadelphia (9). Initially, we sent survey invitations via e-mail to leaders (N=210) and clinicians (N=527) of behavioral health organizations. We also e-mailed the invitation to four local electronic mailing lists and asked organization leaders to forward the e-mail. The survey link was opened 654 times; 357 respondents (N=240 clinicians, N=74 direct supervisors, N=29 agency executives, and N=14 payers) completed the survey, and their responses were included in this study.

The survey included 14 implementation strategies for delivering EBPs; the strategies were developed through a crowdsourced process that engaged local clinicians in an innovation tournament, followed by refinement and operationalization of the strategies in partnership with an expert panel of implementation and behavioral scientists (10). The 14 implementation strategies were the following:

  • an EBP performance leaderboard recognizing clinicians in the agency who met a prespecified benchmark, posted where only agency staff can view it;

  • an EBP performance benchmark e-mail available to a clinician and their supervisor;

  • peer-led consultation comprising monthly telephone calls led by a local clinician with experience implementing EBPs;

  • expert-led consultation including an expert EBP trainer on monthly telephone calls;

  • on-call consultation consisting of a network of expert EBP trainers available for same-day, 15-minute consultations via telephone or Web chat;

  • community mentorship, a one-on-one program in which clinicians are matched with another local clinician treating a similar population;

  • a confidential online forum available only to registered clinicians who use an EBP;

  • a Web-based resource center that includes video examples of how to implement an EBP, session checklists, and worksheets;

  • electronic screening instruments that could be added to an electronic health record and completed by clients in a waiting room;

  • a mobile app and texting service providing clients with reminders to attend sessions and complete homework assignments;

  • a more relaxing waiting room that better prepares clients to enter the session;

  • a one-time bonus for verified completion of a certification process consisting of four training sessions, a multiple-choice examination, and the submission of a tape demonstrating EBP use;

  • additional compensation for preparing to use an EBP in a session (e.g., reviewing the protocol); and

  • additional compensation for EBP delivery.

Respondents viewed 11 randomly generated quartets of the 14 implementation strategies; within each quartet, they selected the strategy they considered to be “most useful” and “least useful” for helping clinicians to deliver EBPs. Randomization of implementation strategies was balanced with regard to the number of times shown, pairing with other implementation strategies, and ordering within the presentation sequence.
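To make the design concrete: the study's randomization code is not published, but a frequency-balanced set of quartets can be sketched in a few lines of Python. The greedy heuristic below is our assumption, not the authors' procedure, and it balances only how often each strategy appears; a full design of the kind described here would also balance pairings and presentation order, which dedicated choice-experiment software handles.

```python
import random

def balanced_quartets(n_items=14, n_sets=11, set_size=4, seed=None):
    """Build choice sets by always drawing from the least-shown items,
    so exposure stays balanced: 11 sets x 4 slots = 44 appearances
    across 14 items, i.e., each item is shown 3 or 4 times."""
    rng = random.Random(seed)
    counts = [0] * n_items
    quartets = []
    for _ in range(n_sets):
        # Rank items by how often they have appeared; break ties randomly.
        order = sorted(range(n_items), key=lambda i: (counts[i], rng.random()))
        chosen = order[:set_size]
        for i in chosen:
            counts[i] += 1
        rng.shuffle(chosen)  # randomize on-screen order within the quartet
        quartets.append(chosen)
    return quartets

for n, q in enumerate(balanced_quartets(seed=1), start=1):
    print(f"Question {n}: strategies {q}")
```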

To assess preferences for different implementation strategies among survey respondents, we calculated preference weights for each implementation strategy by using empirical Bayes estimation (11). Higher preference weights indicated that respondents considered the implementation strategy more useful. Although the range of empirical Bayes weights varies with the spread of favorable and unfavorable ratings, weights generated from the same survey are directly comparable on a ratio scale. For example, a preference weight of 10 is twice as favorable as a preference weight of 5 and five times as favorable as a preference weight of 2 (in our survey, the highest favorability corresponded to “most useful”).
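Estimating the empirical Bayes (hierarchical Bayes) utilities themselves (11) requires dedicated software, but the final rescaling step that yields ratio-scale weights can be illustrated. The sketch below assumes a respondent's logit utilities are already estimated and applies the common "probability rescaling" to weights that sum to 100; both the example utilities and the sum-to-100 convention are assumptions for illustration, not the study's actual outputs.

```python
import numpy as np

def ratio_scale_weights(utilities, set_size=4):
    """Rescale one respondent's MaxDiff logit utilities to ratio-scale
    preference weights summing to 100.  exp(u) / (exp(u) + set_size - 1)
    approximates the probability that the item is chosen as best from a
    set of `set_size` items of average appeal."""
    u = np.asarray(utilities, dtype=float)
    u = u - u.mean()  # zero-center utilities, as is conventional
    p = np.exp(u) / (np.exp(u) + set_size - 1)
    return 100.0 * p / p.sum()

# Hypothetical utilities for 14 strategies (illustrative only):
w = ratio_scale_weights(np.linspace(-2.0, 2.0, 14))
print(w.round(1))  # on this scale, a weight of 10 is twice a weight of 5
print(w.sum())     # 100.0
```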

To ease interpretation, we grouped the 14 implementation strategies into six categories developed by the ERIC project: performance feedback, client supports, clinical social supports, clinical consultation, clinical support tools, and financial incentives (which ERIC refers to as “pay for performance”) (8). We averaged preference weights for individual implementation strategies within categories and then calculated 95% confidence intervals (CIs) to allow for pairwise comparisons among the four stakeholder groups. Differences were deemed statistically significant if the 95% CIs did not overlap. In a table in the online supplement, we tested for differences in preference weights for individual implementation strategies across the four stakeholder groups (clinicians, supervisors, agency executives, and payers) by using both CIs and analysis of variance.
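As an illustration of this aggregation step, the sketch below computes a category's mean preference weight and t-based 95% CI from one averaged value per respondent and applies the nonoverlap criterion; the per-respondent data here are simulated, not the study's.

```python
import numpy as np
from scipy import stats

def category_mean_ci(respondent_weights, confidence=0.95):
    """Mean and t-based CI for a category, where `respondent_weights`
    holds each respondent's average weight across the category's strategies."""
    w = np.asarray(respondent_weights, dtype=float)
    half = stats.sem(w) * stats.t.ppf((1 + confidence) / 2, df=w.size - 1)
    return w.mean(), (w.mean() - half, w.mean() + half)

def cis_overlap(ci_a, ci_b):
    """Nonoverlapping 95% CIs were the criterion for statistical significance."""
    return not (ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0])

# Simulated per-respondent category weights for two stakeholder groups:
rng = np.random.default_rng(0)
clinicians = rng.normal(10.6, 3.0, size=240)
payers = rng.normal(12.9, 2.0, size=14)
_, ci_c = category_mean_ci(clinicians)
_, ci_p = category_mean_ci(payers)
print("significant" if not cis_overlap(ci_c, ci_p) else "not significant")
```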

Additional details on the innovation tournament, best-worst scaling choice experiment, and empirical Bayes estimation are available elsewhere (2, 7, 12). This study was approved by the University of Pennsylvania Institutional Review Board.

Results

The four stakeholder groups agreed that the financial incentives category of implementation strategies was most useful for EBP implementation (Figure 1). The average preference weights for financial incentives were 10.6 for clinicians (95% CI=10.1–11.1), 11.5 for supervisors (95% CI=10.6–12.4), 13.0 for agency executives (95% CI=11.7–14.3), and 12.9 for payers (95% CI=11.8–14.1). The stakeholders also agreed that performance feedback strategies were the least useful; mean preference weights for the performance feedback strategies were 1.7 for clinicians (95% CI=1.4–1.9), 1.9 for supervisors (95% CI=1.5–2.3), 2.5 for agency executives (95% CI=1.4–3.5), and 3.3 for payers (95% CI=1.5–5.1).

FIGURE 1. Preferred strategies for implementing evidence-based practices (EBPs) in behavioral health care, by stakeholder group^a

^a Error bars denote 95% confidence intervals. Preference weights were derived from 11 questions from a best-worst scaling choice experiment and empirical Bayes estimation; higher scores indicate that respondents felt that the strategies were more useful. The survey included 240 clinicians, 74 supervisors, 29 agency executives, and 14 payers. Performance feedback strategies included an EBP performance leaderboard and a customized performance e-mail; clinical consultation strategies included peer supervision, expert supervision, and on-call consultation; clinical social support strategies included community mentorship and an online forum; clinical support tools included a Web-based resource center and electronic screening; client support strategies included a mobile app and an improved waiting room; and financial incentive strategies included an EBP certification bonus, compensation for EBP preparation time, and compensation for EBP delivery.

There were notable points of divergence in preferences for individual implementation strategies across stakeholder groups (see Table S1 in the online supplement). For example, we observed statistically significant differences regarding the preferred structure of financial incentives. Preference weights for compensation for EBP delivery were significantly higher for payers (preference weight=17.5, 95% CI=15.3–19.8) compared with clinicians (preference weight=11.6, 95% CI=10.9–12.3). In contrast, clinicians ranked the usefulness of compensation for EBP preparation time (preference weight=11.7, 95% CI=11.0–12.4) more highly than did payers (preference weight=10.3, 95% CI=6.8–13.9), although the difference between these two groups was not statistically significant.

Discussion

In this study, we used a choice experiment survey to elicit and compare preferences for EBP implementation strategies among clinicians, supervisors, agency executives, and payers in a large publicly funded behavioral health care system. Of note, the innovation tournament and best-worst scaling choice experiment used to elicit preferences are relatively low-cost approaches that also increase stakeholder engagement (12). Such approaches make it possible to rank-order and customize implementation strategies, and systematic reviews have concluded that participatory design based on crowdsourcing is an effective way to identify innovative solutions to complex issues (12, 13).

Our results indicate that the stakeholder groups agreed on the most useful category of implementation strategies (financial incentives) as well as on the least useful category (performance feedback). Given the growing popularity and acceptability of financial incentives in health care (5, 14), the divergence we observed in the preferred structure of financial incentive strategies is notable: clinicians rated compensation for EBP preparation time and compensation for EBP delivery as similarly useful, whereas payers significantly preferred compensation for EBP delivery.

To date, the shift from volume to value of care has yielded mixed results in behavioral health (5). Financial incentives traditionally reward the delivery of services, overlooking the time and resources clinicians need to prepare for EBP delivery in community settings, particularly in publicly funded behavioral health care (15). Our findings suggest that one way to support clinicians in EBP implementation is to restructure financial incentives to align with clinicians’ preference for reimbursement of EBP preparation time or to explore procedural codes that reimburse such time.

Determining whether compensation for EBP preparation time can meaningfully increase EBP delivery and identifying the best ways to operationalize this and other implementation strategies were outside the scope of this study. Qualitative interviews of stakeholder focus groups could be useful for further refinement of these ideas. Our results were also limited in generalizability because our survey targeted stakeholder groups associated with Philadelphia Medicaid. Finally, we note a lack of established cutoffs to determine whether levels or differences in preference weights among stakeholders are clinically meaningful.

Conclusions

The findings of this study provide insights that may be useful for implementation strategies commonly used in value-based behavioral health care. We found that all four stakeholder groups preferred financial incentives to other implementation strategies and that clinicians ranked compensation for time spent preparing for EBP implementation as equally useful as compensation for EBP delivery. We also demonstrated the utility of an approach that reveals how implementation strategies are perceived by stakeholder groups with differing responsibilities, priorities, and perspectives. Taken together, our findings suggest that variation in preferences across stakeholder groups should be considered when policy makers engage in implementation efforts and when researchers design implementation studies.

Department of Psychiatry, Center for Mental Health, Perelman School of Medicine, University of Pennsylvania, Philadelphia (Candon, Zentgraf, Beidas, Stewart); Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia (Candon, Buttenheim, Stewart); School of Social Work, Boise State University, Boise, Idaho (Williams); Center for Health Incentives and Behavioral Economics, Perelman School of Medicine, and Department of Family and Community Health, School of Nursing, University of Pennsylvania, Philadelphia (Buttenheim); Department of Medicine and Department of Epidemiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia (Bewtra); Penn Implementation Science Center (PISCE@LDI), Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia (Beidas).
Send correspondence to Dr. Candon.

This study was funded by the National Institute of Mental Health (P50 MH-113840).

Dr. Candon reports a services contract from Anthem, Inc. Dr. Bewtra reports research funding from Takeda and Iterative Scopes, providing consultant service for Bristol Myers Squibb and Pfizer, and receiving an honorarium for participation in a continuing medical education program sponsored by Integrity Continuing Education and Medical Education. Dr. Beidas reports royalties from Oxford University Press and providing consultation to United Behavioral Health. She is on the advisory boards for Optum Behavioral Health, AIM Youth Mental Health Foundation, and the Klingenstein Third Generation Foundation. The other authors report no financial relationships with commercial interests.

References

1. Mechanic D: More people than ever before are receiving behavioral health care in the United States, but gaps and challenges remain. Health Aff 2014; 33:1416–1424

2. Stewart RE, Beidas RS, Last BS, et al.: Applying NUDGE to inform design of EBP implementation strategies in community mental health settings. Adm Policy Ment Health 2020; 48:131–142

3. Proctor EK, Powell BJ, McMillen JC: Implementation strategies: recommendations for specifying and reporting. Implement Sci 2013; 8:139

4. Aarons GA, Wells RS, Zagursky K, et al.: Implementing evidence-based practice in community mental health agencies: a multiple stakeholder analysis. Am J Public Health 2009; 99:2087–2095

5. Carlo AD, Benson NM, Chu F, et al.: Association of alternative payment and delivery models with outcomes for mental health and substance use disorders: a systematic review. JAMA Netw Open 2020; 3:e207401

6. Bonham AC, Solomon MZ: Moving comparative effectiveness research into practice: implementation science and the role of academic medicine. Health Aff 2010; 29:1901–1905

7. Williams NJ, Candon M, Stewart RE, et al.: Community stakeholder preferences for evidence-based practice implementation strategies in behavioral health: a best-worst scaling choice experiment. BMC Psychiatry 2021; 21:74

8. Waltz TJ, Powell BJ, Chinman MJ, et al.: Expert Recommendations for Implementing Change (ERIC): protocol for a mixed methods study. Implement Sci 2014; 9:39

9. Flynn TN, Louviere JJ, Peters TJ, et al.: Best-worst scaling: what it can do for health care research and how to do it. J Health Econ 2007; 26:171–189

10. Ranard BL, Ha YP, Meisel ZF, et al.: Crowdsourcing—harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med 2014; 29:187–203

11. Orme B: Accuracy of HB Estimation in MaxDiff Experiments. Sequim, WA, Sawtooth Software, Inc, 2005. www.sawtoothsoftware.com/download/techpap/maxdacc.pdf

12. Stewart RE, Williams N, Byeon YV, et al.: The clinician crowdsourcing challenge: using participatory design to seed implementation strategies. Implement Sci 2019; 14:63

13. Baker R, Camosso-Stefinovic J, Gillies C, et al.: Tailored interventions to address determinants of practice. Cochrane Database Syst Rev 2015; 2015:CD005470

14. Hoskins K, Ulrich CM, Shinnick J, et al.: Acceptability of financial incentives for health-related behavior change: an updated systematic review. Prev Med 2019; 126:105762

15. Ringle VA, Read KL, Edmunds JM, et al.: Barriers to and facilitators in the implementation of cognitive-behavioral therapy for youth anxiety in the community. Psychiatr Serv 2015; 66:938–945