Several preventive interventions have been evaluated using rigorous scientific standards and demonstrated to prevent or reduce youth problem behaviors and improve behavioral health outcomes (Catalano et al., 2012; Sandler et al., 2014). Implementing these evidence-based interventions with fidelity (i.e., as intended) is essential to yielding positive outcomes, since poor implementation can undermine effectiveness (Chambers et al., 2013; Durlak & DuPre, 2008). Adhering to fidelity guidelines, however, is challenging when interventions are adapted to the realities of the environment, particularly during evolving, large-scale public health crises like a pandemic. This paper aims to inform the scientific community, practitioners, evaluators, funders, and policymakers about important considerations for implementing evidence-based interventions within the context of the Novel Coronavirus Disease 2019 (COVID-19), declared a pandemic by the World Health Organization in March 2020.

Blueprints for Healthy Youth Development conducted two surveys, the first in May 2020 and the second one year later. Blueprints is one of several online registries across the United States and around the world that collectively serve to “aggregate, standardize, review and rate the evidence base of interventions, acting as repositories that provide input into the decision-making process” (Neuhoff et al., 2015, p. 11). Registries are important stakeholders in the scientific process, operating as intermediaries for governmental agencies, funders, and practitioners seeking to make informed choices when adopting behavioral interventions (Buckley et al., 2020; Paulsell et al., 2017). Our goal was to establish a baseline of COVID-19-related modifications to evidence-based interventions listed on the Blueprints website so that we could document important lessons learned and better understand the timing and substance of changes.

The Impact of COVID-19 on Communities and Families

Various containment measures were put in place to slow the spread of COVID-19, including quarantine or stay-at-home orders, discontinuation or disruption of non-essential services, restrictions on public transport, social distancing, and use of masks (Wiersinga et al., 2020). COVID-19 also destabilized many families. Between April 1, 2020 and June 30, 2021, one in 500 U.S. children under the age of 18 years lost a parent or custodial grandparent, with youth of color disproportionately impacted (Hillis et al., 2021). As such, children and families have experienced massive disruption, and rates of mental health challenges stemming from the stress of COVID-19 have intensified, bringing social inequalities and ethnic and racial inequities to the forefront of public health (Bright, 2020; Siu, 2021; Wilson et al., 2020) and exacerbating disparities that existed prior to the pandemic (Amadasun, 2020; Singh et al., 2020).

Several groups focused on child and adolescent health care joined together to declare a national state of emergency in children’s mental health in the United States. These groups have advocated for increased funding dedicated to sustaining systems of care that connect families in need of supports with evidence-based behavioral health services in their home, community, health settings, and/or school (American Academy of Child & Adolescent Psychiatry, 2021; American Academy of Pediatrics, 2021; Children’s Hospital Association, 2021).

Caring for young people, their families, and communities experiencing elevated rates of depression, anxiety, trauma, loneliness, and loss requires effective strategies to meet these challenges. As the global community continues to experience repeated waves of COVID-19 from new variants and debates how to prepare for the next pandemic, it is important to identify practical approaches that are evidence based and implementable in the real world to optimize the use of resources and improve behavioral and health outcomes.

Evidence-Based Interventions and Registries

Advances in intervention design and development, together with advances in evaluation research, have provided a rich body of evidence demonstrating that some behavioral interventions are effective, both for preventing the onset of problem behaviors and for successfully intervening with those experiencing maladaptive behaviors (Greenwood, 2006; Institute of Medicine, 2008; Sherman et al., 2002). These evidence-based interventions also often have positive effects on other outcomes, such as mental health, academic achievement, parenting practices and family wellbeing, and employment (Bailey, 2009; Mihalic & Elliott, 2015).

With a need to focus limited resources on behavioral interventions showing evidence of effectiveness, the United States and many countries around the world have invested in the development of registries (also known as online clearinghouses) that seek to identify, evaluate, list, and disseminate information about evidence-based interventions. Online registries have used rigorous scientific methods to evaluate the empirical evidence for thousands of interventions, a select number of which have been shown to prevent behavioral health problems (Catalano et al., 2012; National Research Council & Institute of Medicine, 2009; Sandler et al., 2014). Given the increasing volume and scientific complexity of the literature evaluating the efficacy and effectiveness of interventions (Bastian et al., 2010; Gottfredson et al., 2015), these registries make the evaluation literature more accessible to practitioners and raise awareness about the existence of evidence-based interventions. The importance of registries acting as intermediaries between researchers and program users is evident in the large number that exist: up to 24 across the United States and Europe alone (Axford et al., 2022; Burkhardt et al., 2015; Means et al., 2015), ten of which are currently funded by the United States federal government (Mayo-Wilson et al., 2021).

Blueprints for Healthy Youth Development (https://www.blueprintsprograms.org/) provides an online registry of scientifically proven and scalable interventions that prevent or reduce the likelihood of antisocial behavior and promote a healthy course of youth development and adult maturity. Certified interventions are ranked as Promising, Model, or Model Plus based on the strength and extent of evaluation findings (Steeger et al., 2021) and readiness of the intervention to be disseminated with fidelity (Buckley et al., 2020). Model and Model Plus programs have demonstrated efficacy for changing outcomes over time and are recommended for large-scale implementation. Promising programs show promise of efficacy but require additional follow-up research before scaling across settings and in public systems.

Launched in 1996, Blueprints is the longest-standing registry and has been at the forefront of evidence-based programming (Elliott & Mihalic, 2004; Mihalic & Elliott, 2015). Blueprints is also recognized around the world. For example, the United Nations Office on Drugs and Crime and the World Health Organization recently recommended that national standards worldwide require the implementation of evidence-based strategies identified through registries, citing Blueprints by name (United Nations Office on Drugs and Crime and the World Health Organization, 2018, p. 42).

Balancing Fidelity of Implementation and Adaptation

Alongside the field of summative evaluation in behavioral health, a growing appreciation has developed for process (or formative) evaluation, given the reality that the ability of evidence-based interventions to achieve their potential outcomes is affected by how they are implemented in real-world situations. Adjustments are common when interventions are implemented in natural settings (Chambers & Norton, 2016). Fidelity and adaptation, however, can be at odds, since adhering to fidelity guidelines while simultaneously adapting to the realities of the environment and context is challenging (Durlak & DuPre, 2008). Striking a balance between flexibility and fidelity is a topic of much debate in implementation science (Stirman et al., 2019).

Thoughtful and deliberate adaptation of the delivery of an evidence-based intervention to improve its fit in each context can lead to better engagement, acceptability, and outcomes. However, modifications that remove key elements of an intervention may make it less effective (Stirman et al., 2019). Adaptations that occur reactively can also lead to program drift and diminished impact on outcomes, potentially rendering an evidence-based intervention ineffective (Braithwaite et al., 2020; Chambers & Norton, 2016; Norton & Chambers, 2020; Norton et al., 2019; Prusaczyk et al., 2020).

Evaluating Modifications to Evidence-Based Interventions

Registries such as Blueprints advocate for implementing interventions that are known to work (see Steeger et al., 2021) and that can be implemented at scale with fidelity (see Buckley et al., 2020). Historically, Blueprints’ “Gold Standard” has been widespread adoption of Model or Model Plus interventions, implemented with fidelity, and sustained by routine funding sources within the community (Elliott et al., 2020). Adaptations, especially untested ones, threaten to weaken the chosen intervention, undermine results, and thereby erode public confidence in scientific claims that evidence-based interventions work (Elliott & Mihalic, 2004).

When adaptations consistent with the theoretical rationale for a program are deemed necessary, it is critical to establish the adaptation’s effect on outcomes with a high-quality randomized control trial (see Steeger et al., 2021) or methodologically sound meta-analysis (see Pigott & Polanin, 2020), two research designs that allow for causal inferences (Shadish et al., 2002; Weed, 2000; West & Thoemmes, 2010). To our knowledge, only a handful of studies have evaluated the effects of modifications to Blueprints-certified interventions on behavioral outcomes. For example, Drake et al. (2015) described a randomized control trial of an online training that sought to improve implementation fidelity to an evidence-based HIV prevention curriculum, Reducing the Risk (Barbee et al., 2016), rated as Promising on the Blueprints registry. The online training group reported significantly higher overall implementation fidelity than the self-preparation control group, leading the authors to conclude that online training with video demonstrations is an effective way to enhance fidelity of implementation to the model.

Massey-Combs et al. (2021) examined whether teachers trained online achieve levels of fidelity (as measured by adherence, dosage, quality of delivery, and student responsiveness; Dane & Schneider, 1998) similar to those of teachers trained in person on the Botvin LifeSkills Training (LST) middle school program, a Blueprints Model Plus universal preventive intervention proven to reduce substance use and violence (Botvin et al., 1995, 2006; Spoth et al., 2002). Results showed that online training was associated with lower ratings of quality of delivery compared to in-person training, but no significant associations existed between online training and adherence to the curriculum, dosage, or student responsiveness.

Lastly, Becker et al. (2014) investigated the perceived feasibility and pattern of implementation following an online training of Promoting Alternative Thinking Strategies (PATHS), a school-based curriculum rated as Model on the Blueprints registry (Malti et al., 2011). Online training for the PATHS program was compared to in-person training, with all teachers receiving in-person classroom coaching after training. Coaching included support with preparing materials and classrooms, instructional modeling, observations, and technical assistance. The online training group showed levels of implementation fidelity similar to those of the in-person training group, leading the authors to conclude that the internet holds potential for delivering a training sequence designed to teach practitioners to deliver preventive evidence-based interventions.

Each of these studies (Massey-Combs et al., 2021; Becker et al., 2014; Drake et al., 2015) found that training modality was not part of the intervention’s core causal components. Since only one used a randomized control trial design (Drake et al., 2015), these findings offer, at best, promising evidence that modifications to the training methods of behavioral preventive interventions can still result in outcomes comparable to those of the original evidence-based intervention.

Online and Hybrid Behavioral Services

In recent years, there has been growing evidence for the acceptability of online training in the health and mental health fields (Nelson & Sharp, 2016). For example, two synthesis studies and a meta-analysis compared differences in knowledge, attitudes, behavior, and skills between healthcare professionals (e.g., doctors, nurses, dentists, mental health counselors) trained online, in-person, or through blended/hybrid trainings. Cook et al. (2008) conducted a meta-analysis with studies involving healthcare professionals in trainings on medical topics such as evidence-based medicine, communication, and biostatistics; Rohwer et al. (2017) conducted a literature review of studies with medical professionals in trainings on evidence-based healthcare interventions; and Calder et al. (2017) reviewed studies with clinicians who received training on a variety of evidence-based practices, such as cognitive behavioral therapy, motivational interviewing, and medication-assisted treatment. Findings suggest that health professionals who attended online formats had greater knowledge and skills, and more positive attitudes towards the topic, compared to those who received no training. Differences across studies when comparing in-person versus online trainings were often small, leading authors to conclude that online training and traditional in-person training methods showed similar effectiveness.

To document the impact of COVID-19 in K-12 school settings, the American Institutes for Research surveyed a national sample of 721 district leaders in 2020 and 565 district leaders in spring 2021, followed by interviews with 20 district leaders from across the country in summer 2021. The authors concluded that continued investment is needed in the further development, implementation, and evaluation of the promising practices identified in survey responses, which included (but were not limited to) virtual learning tools, individualized and personalized instructional approaches, and social-emotional supports for students, families, and staff (American Institutes for Research, 2021). Regarding acquisition of knowledge, online learning is a broadly encompassing term that can be defined as educational activities that occur via the internet (U.S. Department of Education, 2009). A meta-analysis comparing in-person, online, and blended forms of learning across a range of settings, including K-12, college, and graduate education as well as professional development of educators, concluded that there was insufficient evidence to judge the effectiveness of online learning for K-12 students. For adult learners, however, outcomes (e.g., researcher-developed assessments of knowledge, supervisors’ ratings of job performance) in online classes exceeded those in traditional face-to-face classes, and blended or hybrid models in turn exceeded purely in-person modalities (U.S. Department of Education, 2009).

Little is known about how well virtual services work for parenting programs typically delivered via home visits, and questions remain regarding feasibility and effectiveness. A report summarizing survey findings from evidence-based home visiting programs during COVID-19 in the United States suggested that multiple modalities were being used to replace in-home visits, including interactive video conferencing, telephone, and texting (O’Neill et al., 2020). The most frequently noted feasibility challenge was that roughly 50% of families lacked stable internet access. Other issues for families documented by the survey included parents’ limited emotional capacity to engage in programs under the circumstances and constraints related to confidentiality. For providers, challenges included reduced capacity to deliver curricula effectively given issues in their own home environments (O’Neill et al., 2020).

Purpose

Children and families have experienced enormous adversity and disruption since COVID-19 was declared a pandemic in March 2020. Since then, rates of mental health challenges have surged (Vahratian et al., 2021), particularly for parents compared to adults without children (American Psychological Association, 2020). Consequently, evidence-based yet practical approaches are needed to optimize the use of resources and improve outcomes for children, families, and communities. As Ghosh and Sharma (2021) pointed out, however, several roadblocks have hindered the dissemination of evidence-based interventions in the wake of COVID-19, some of which are systemic (e.g., diversion of resources to services that specifically address COVID-19), structural (e.g., stay-at-home orders, school and facility closures, travel restrictions), and attitudinal (e.g., fear of contracting COVID-19 from in-person gatherings). For these reasons, many developers or purveyors (i.e., organizations created by developers to provide training, coaching, and tools to monitor implementation) modified their evidence-based interventions to ensure the continuity of programming during COVID-19.

To establish a baseline of modifications stemming from COVID-19 mitigations and document lessons learned, we administered a survey twice: shortly after COVID-19-related restrictions were implemented (May 2020) and one year later (May 2021). The surveys covered four topics: (1) the extent to which dissemination and/or implementation was affected by COVID-19, (2) the status of dissemination and/or implementation, (3) modifications made to ensure continuity of programming, and (4) data collected to examine the relationship between modifications made due to COVID-19 and outcomes. In addition, the follow-up survey gathered information on lessons learned in implementing a Blueprints-certified intervention during the COVID-19 pandemic.

Methods

Sample

The sample consisted of interventions certified by Blueprints, with survey data about implementation collected from the developers and evaluators of these interventions. All primary contacts (i.e., intervention developers, purveyors, or evaluators) for Blueprints-certified interventions were emailed an invitation with a link to participate in a 10-minute confidential survey regarding implementation in the changing COVID-19 environment.

Evidence-based interventions were identified using the Blueprints database, which includes articles and reports that evaluate interventions for youth designed to (1) prevent or reduce negative behavioral health outcomes (e.g., mental health problems, substance use, delinquency/crime, and other health-related behaviors) or (2) promote positive development (e.g., academic achievement and prosocial behavioral outcomes). Interventions are typically delivered in family, school, and community-based settings and target ages 0–25, beginning with infant and early childhood programs and extending through post-secondary education and early employment experiences. Given that the focus of Blueprints is on prevention (including universal, selective, and indicated preventive interventions), the database excludes interventions with a sole focus on evaluating treatment programs for diagnosed or clinical-level mental health problems, including medical or pharmacological interventions. In addition, Blueprints considers randomized control trials (in which individuals are randomly assigned to condition), cluster randomized control trials (in which clusters of individuals, such as students within schools, are randomly assigned to condition), and quasi-experimental designs (in which participants are assigned to conditions by non-random means) but excludes studies that use a pre-post design without a control group.

Only interventions with the strongest research evidence that have been certified and rated as Promising, Model, or Model Plus by Blueprints were eligible for inclusion in this study. Certification is an ongoing process with several interventions added to the registry each year. In all, out of more than 1500 separate interventions reviewed in the Blueprints database, 99 (6%) had been certified when the study’s timeframe for data collection ended in June 2021. Eighty-eight of these preventive interventions originated in the U.S. and 11 originated in either Australia, Canada, or Europe. In addition, most of the certified interventions in the present study (n = 81) were designated as Promising.

Procedures

The baseline survey, conducted via the Qualtrics platform, was open from May 18 to June 30, 2020. One year later, a follow-up survey was conducted from May 25 to June 30, 2021, to see whether and how implementation had changed and to better understand the prevention community’s response to the pandemic. For each administration, at least two additional reminders were sent requesting survey completion. Only one survey per intervention was accepted, though the survey link could be shared with any staff person knowledgeable about the program’s implementation. Surveys were completed for 58 of 94 eligible preventive interventions (62% response rate) at baseline and for 57 of 99 preventive interventions (58% response rate) at follow-up.

The combined (baseline and follow-up) convenience sample of interventions covered a variety of delivery settings—school (44%), community (9%), school and community combined (15%), and other venues including home, social services, mental health treatment centers and programs delivered within multiple settings (32%). Additionally, the sample represented a continuum of universal, selective, and indicated preventive interventions, and the interventions targeted a broad range of outcomes, including problem behavior, mental health, physical health, positive relationships, and educational skills. Across the baseline and follow-up surveys, 66 preventive interventions from the Blueprints database were represented (with 49 completing both surveys). Table 1 shows sample information.

Table 1 Sample Descriptives of Blueprints-Certified Interventions Surveyed During the Pandemic

Measures

Several of this study’s co-authors (PB, AL, DE, and KH) developed the survey items, which covered four topics (i.e., impact of COVID-19; implementation status; modifications; data collected to evaluate modifications). Response options for the first topic, asked only in the follow-up survey, were multiple choice. The second through fourth topics allowed respondents to select options via check boxes. If a box was checked, the item was coded as “yes”; if it was unchecked, the item was coded as “no.” In addition, the second survey included an open-ended “lessons learned” item.
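To make the check-box coding concrete, the sketch below shows one way such responses could be recoded into binary yes/no indicators. This is a minimal illustration only: the pandas-based approach and the column names (e.g., q3_online_training) are hypothetical and do not describe the study’s actual data pipeline.

```python
import pandas as pd

# Hypothetical survey export: a checked box appears as a non-empty string,
# an unchecked box as a missing value. Column names are illustrative only.
raw = pd.DataFrame({
    "intervention_id": ["A", "B", "C"],
    "q2_new_requests": ["checked", None, "checked"],     # topic 2 item
    "q3_online_training": ["checked", "checked", None],  # topic 3 item
})

# Recode each check-box item: checked -> "yes", unchecked -> "no".
checkbox_items = ["q2_new_requests", "q3_online_training"]
coded = raw.copy()
for item in checkbox_items:
    coded[item] = raw[item].notna().map({True: "yes", False: "no"})

print(coded)
#   intervention_id q2_new_requests q3_online_training
# 0               A             yes                yes
# 1               B              no                yes
# 2               C             yes                 no
```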

Data Analysis

Descriptive statistics were used to examine the frequency of responses to the multiple-choice and binary check-box items, and these quantitative results were presented as percentages. To examine the “lessons learned” item, multiple rounds of coding were used to categorize the qualitative responses. The first step was open coding, in which survey responses were broken into discrete segments and then refined into short descriptive codes. Using axial coding (Simmons, 2017), two of the study co-authors (AL and CS) discussed the appropriateness of the codes and drew connections to combine codes into categories. Ultimately, three major themes and several categories or subthemes (i.e., combinations of two or more codes) were identified in the qualitative data and are supported with direct quotations.
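As a companion to the coding sketch above, the following shows how descriptive percentages of binary items could be computed by survey wave. Again, the data and item names are made up for illustration; this is a sketch under those assumptions, not the study’s analysis code.

```python
import pandas as pd

# Hypothetical long-format data: one row per intervention per wave,
# with items already coded "yes"/"no" as in the previous sketch.
responses = pd.DataFrame({
    "wave": ["baseline"] * 4 + ["follow-up"] * 4,
    "online_training": ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no"],
    "tele_sessions":   ["yes", "no", "no", "yes", "yes", "yes", "no", "yes"],
})

items = ["online_training", "tele_sessions"]

# Percentage of interventions endorsing each item, by wave.
pct = (
    responses.groupby("wave")[items]
    .agg(lambda s: round((s == "yes").mean() * 100, 1))
)
print(pct)
#            online_training  tele_sessions
# wave
# baseline              75.0           50.0
# follow-up             75.0           75.0
```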

Results

Quantitative Survey Results

Contacts from 58 (out of 94) evidence-based interventions responded at baseline and 57 of 99 evidence-based interventions responded at follow-up. Table 2 presents quantitative findings by time point. Results from interventions with complete (i.e., baseline and follow-up) data (n = 49) did not differ from what is reported in Table 2 and can be obtained by contacting the study’s lead author.

Table 2 Descriptive Results of Survey Responses

Dissemination and Implementation: Pandemic Impact

Most evidence-based interventions (56%) reported either no significant impact on the scale of dissemination or some difficulty maintaining dissemination, with overall stability from baseline to follow-up. Additionally, slightly more than one-quarter (28%) reported a positive impact providing new opportunities for development. Only a small proportion (12%) reported great impact, leading to discontinuation or serious difficulties sustaining dissemination.

Status of Dissemination and Implementation

At baseline, nearly half (48%) reported they had experienced new requests to implement their intervention; this number increased by 10 percentage points one year later. In addition, the majority (78%) received requests for changes to their delivery modality. By follow-up, most had made changes to the delivery modality (70%) and to training and/or support (83%).

Modifications Made to Intervention Delivery

At baseline, the majority reported they had rapidly adjusted their intervention to the changing environment. For example, they provided online resources (55%), turned to tele-sessions or video conferencing (60%), and/or offered online training or lessons to support implementation (72%). By follow-up, delivery of these modified services increased by an additional 7–16 percentage points. A small percentage (12%) started a blog to provide a forum for discussion.

Examining Modified Delivery on Outcomes

Consistent across surveys, roughly one quarter of the interventions were examining the relationship between modifications made due to COVID-19 and outcomes, and just over 10% were planning to do so. However, slightly more than a quarter (28%) reported at baseline that they did not have resources to collect these data; this number increased to more than one-third (35%) at follow-up. A small percentage (approximately 5%) said data collection was not necessary as their intervention had not been modified.

Qualitative Open-Ended Survey Results

The follow-up survey included an open-ended item centered on “lessons learned” about implementing evidence-based interventions in the context of the COVID-19 pandemic. Across responses, three primary themes and several subthemes emerged, which are described below. See supplementary Table 1 for a summary of themes, subthemes, and example participant quotations.

Theme 1: Benefits of Intervention Modifications During the Pandemic

All participants (n = 57) reported benefits from the necessary transitions to virtual training and/or program delivery during the pandemic. These positive experiences were captured across five subthemes: ease of training and delivery, continued implementation support, reduction of participation barriers, stronger relationships, and generalizability and potential for scale-up. Most intervention developers and purveyors expressed positive experiences in transitioning trainings to virtual formats. Though not always ideal, training intervention providers virtually was reported to yield greater affordability and efficiency. Similarly, online intervention delivery was often said to reduce barriers to participation, such as transportation and child care. In some instances, the availability of virtual programming and communication led respondents to report stronger relationships among providers, participants, and stakeholders. Further, respondents reported more opportunities for scale-up given the easier access for some families, schools, and other entities.

Theme 2: Challenges of Intervention Modifications During the Pandemic

Although there were several positive reports of modifications made during the pandemic, participants (n = 17) also described challenges across two subthemes: implementation and technology. Transitioning interventions designed for in-person delivery to virtual formats was difficult and, in some instances, programs were put on hold. Virtual delivery encountered issues with student engagement, attendance, and academic progress. Some participants mentioned that virtual sessions placed a greater burden on group leaders to generate discussion and that sessions had to be shortened. Additional challenges included disparate access to technology and the need to develop providers’ capacity with both hard and soft technologies and skills.

Theme 3: Lessons Learned and Recommendations for Future Program Implementation

Lastly, based on their experiences adapting interventions to virtual formats, participants (n = 23) reported lessons learned to apply to future program implementation across three subthemes: maintaining fidelity, flexibility and collaboration, and implementation modality. For example, amid necessary transitions between in-person and virtual formats, survey respondents were working hard to maintain fidelity to the essential intervention activities deemed necessary to produce desired outcomes (i.e., core components) by providing sufficient training and supervision to ensure that adaptations did not alter those components. Participants also expressed the need to stay flexible in both program training and delivery, which may differ across intervention populations, stakeholders, and contexts. For many interventions, training and delivery of certain services were likely to continue in some in-person/online hybrid format.

Discussion

This paper aims to inform the field about important considerations for implementing evidence-based preventive interventions during a pandemic by surveying developers and evaluators about implementation in the context of COVID-19 mitigation efforts. In a sample of approximately 60 interventions that meet Blueprints’ high standards for methodological quality (Steeger et al., 2021) and dissemination readiness (Buckley et al., 2020), the majority reported no significant impact of COVID-19 on implementation scale, and more than half received new requests for adoption of their intervention. However, most modified delivery of services by turning to online platforms, though fewer than half planned to evaluate the relationship between modifications and outcomes, and more than one-third said they lacked the resources to do so. Qualitative findings showed that transitions to virtual formats for provider training were reported as overwhelmingly positive, while reports of online delivery were mixed. The importance of flexibility, collaboration, and creativity in serving youth and families was consistently noted, with some form of virtual or hybrid (i.e., in-person and virtual) options offered going forward.

Significance and Implications

In its first 30 years, the field of prevention science moved from “nothing works” to having a broad range of tested, effective programs that show promise in improving social, psychological, and physical wellbeing (Hawkins et al., 2015). However, many barriers remain to taking these effective prevention programs to scale (Haggerty et al., 2017). If the goal of prevention is to assist every member of society, then we must expand the scope and reach of service delivery strategies (Biglan, 2018). Early pioneering work on the role of the internet and telehealth had already begun when the COVID-19 pandemic emerged (Calder et al., 2017; U.S. Department of Education, 2009). As seen in the experiences of Blueprints-certified interventions reported in the present article, many developers and purveyors have retooled their services in response to the pandemic. Thus, by necessity, the field has been pushed to explore new opportunities in evidence-based service delivery, which may have produced many benefits. For example, providers that previously trained and delivered services in person may have adopted a telehealth care and/or online training or meeting model to reduce costs and reach more communities, a shift that may continue post-COVID-19. In addition to cost savings, the capacity to implement training or service delivery online has the potential to expand the reach of an evidence-based intervention nationally and even globally (Calder et al., 2017). That is, a well-established online training and intervention model has the potential to reach every global community with internet access.

However, caution is also advised. As providers continue to respond to COVID-19, implementers of interventions currently meeting evidence requirements (e.g., rated as Promising, Model, or Model Plus on the Blueprints registry) may be modifying content in an effort to keep children, families, and staff safe while delivering services during a pandemic. While these modifications may be necessary and important, they change the way evidence-based interventions are disseminated and have the potential to change their effectiveness (Elliott & Mihalic, 2004). In transitioning to virtual delivery, it is important that developers and purveyors reconsider their messaging about fidelity, particularly if the intervention has strict requirements for the program curriculum, training materials, and/or process of delivery. If providers change the delivery process, developers and purveyors must ensure that the curriculum and materials are flexible enough to shift across delivery modes while still maintaining the active ingredients (or “core components”) that lead to positive outcomes (Self-Brown et al., 2020).

Navigating to online platforms or delivery models may also exacerbate inequities when children or families lack ready access to the technology or high-speed internet needed to participate fully in services (Gibson et al., 2020). Although some interventions may rely on evidence of the effectiveness of telehealth in medicine and behavioral health, many are being delivered remotely for the first time. Purveyors and developers should therefore consider what supports are needed to ensure effective virtual delivery. Offering webinars, connecting providers delivering programming in similar settings (e.g., settings where broadband issues may be greater), and treating best-practice guidelines as living documents to be revised as lessons are learned are just a few suggestions (Self-Brown et al., 2020). Several program purveyors (for example, in the home visiting field) have released guidance that allows evidence-based services to be delivered remotely during an emergency while still maintaining fidelity to the model. Initiatives in the United States such as the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) Program have accepted these changes to modality (Jordan & McKlindon, 2020). Additionally, changes to the core components of a program may impact fidelity and, in turn, expected child or family outcomes. However, even if intervention providers believe that their modifications attend to the core components of the intervention, whether these changes have compromised intervention efficacy or effectiveness is not known until tested (Elliott & Mihalic, 2004).

Beyond service delivery, purveyors and developers may find that certain core components cannot be delivered remotely, leading to adaptations that compromise the essential activities proven necessary to achieve intended outcomes. Furthermore, Cohen and Tisch (2021) noted that shifting from in-person to an online model is not a simple undertaking; adopting a virtual approach requires a high level of technical skill for interventions to still be effective.

Limitations

This paper provides insight into under-reported adaptations to Blueprints-certified interventions, which may facilitate further dialogue concerning delivering evidence-based interventions during a pandemic. There are, however, limitations that call for more studies.

First, respondents were limited to those interventions certified by the Blueprints registry; many other interventions are listed on other registries and/or are being broadly implemented (Axford et al., 2022; Burkhardt et al., 2015; Means et al., 2015; Mayo-Wilson et al., 2021). Second, among Blueprints-certified interventions, not all intervention developers or purveyors responded. Ninety-four developers or purveyors were surveyed in May 2020 (followed by 99 in May 2021), and our response rates were 62% (baseline, n = 58 interventions) and 58% (one-year follow-up, n = 57 interventions). These response rates made it difficult to tease out fine differences and somewhat limit generalizability. Thus, the conclusions from this study should be considered preliminary, and caution should be exercised in applying these findings beyond the current sample. Third, the survey was confidential but not anonymous and relied on self-reporting; as such, responses could have been subject to social desirability bias.

Future Research and Recommendations

As illustrated in the current paper, COVID-19 has accelerated the move of evidence-based interventions into internet-based training and service delivery. Given the great potential advantages of online delivery for scale-up, we now need to capitalize on several insights.

First, since many interventions have made modifications to ensure the continuity of programming in the wake of COVID-19, developers should define which elements are "essential" and which can be adapted without jeopardizing outcomes, and offer guidance to providers to ensure adaptations do not modify core components. In addition, if intervention developers and purveyors have already made modifications, then they need to invest in documenting these changes now through data collection and process evaluations. It is important to understand the timing and substance of programmatic changes. Subsequent outcome evaluations can then explore whether the intervention remains effective.

Second, there is a need for research on the processes and outcomes of adapting interventions originally designed for in-person interaction to an online mode (Cohen & Tisch, 2021). Intervention developers and other researchers should design outcome evaluations to examine whether these translational efforts undermined intervention integrity. Such evaluations of online-modified interventions should examine for whom and under what conditions they remain effective. Providers, in turn, should attend to factors that can attenuate the success of that effort, such as lack of access to high-speed internet services.

Third, as discussed previously, shifting from in-person to online dissemination is not a simple undertaking. Intervention providers need to consider expanding the technical skills and expertise of their teams to be able to effectively implement and manage delivery in this new environment of online training and/or dissemination of services (Cohen & Tisch, 2021).

Fourth, registries such as Blueprints need to consider the impact of a shift in training and delivery on certification status. Programs now operated remotely, when not in accordance with the manual and documentation originally reviewed and rated by the registry, may be considered formal adaptations and require a separate evaluation and review for continued certification. Finally, funders should prioritize funding for research examining whether the advantages of online modifications were gained without undermining intervention integrity and effectiveness.

Conclusion

Selecting interventions based on strong evidence of efficacy and effectiveness is essential (Gottfredson et al., 2015). The field now faces unique challenges due to the need for urgent modifications stemming from the COVID-19 pandemic, coupled with minimal evidence indicating how evidence-based interventions should be adapted. Moreover, certain modifications may be more effective in some areas than in others, owing to differences in need, interventions already in place, the status of the local health care and mental health care systems, or local and national policies (Wasserman et al., 2020). Where feasible, research designs that lend themselves to causal inference, such as randomized control trials or meta-analyses, are needed to confirm which modifications are effective, taking different cultural, economic, and health care contexts into account. Process evaluations are also important for studying the relationship between modifications and outcomes. In sum, a rigorous approach that includes both formative and summative evaluations is needed to inform adaptation of behavioral programs and services, even during evolving, large-scale public health crises like a pandemic, to ensure that preventive interventions are relevant, persuasive, and feasible while remaining evidence based.