Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/21712.
Giving Your Electronic Health Record a Checkup After COVID-19: A Practical Framework for Reviewing Clinical Decision Support in Light of the Telemedicine Expansion


Original Paper

1Medical Center Information Technology, NYU Langone Health, New York, NY, United States

2Department of Medicine, NYU Long Island School of Medicine, Mineola, NY, United States

3Department of Medicine, NYU Grossman School of Medicine, New York, NY, United States

4Department of Population Health, NYU Grossman School of Medicine, New York, NY, United States

5Department of Pediatrics, NYU Long Island School of Medicine, Mineola, NY, United States

6Department of Obstetrics and Gynecology, NYU Long Island School of Medicine, Mineola, NY, United States

7Department of Orthopedics, NYU Long Island School of Medicine, Mineola, NY, United States

Corresponding Author:

Jonah Feldman, MD

Medical Center Information Technology

NYU Langone Health

360 Park Ave South, 18th Floor

New York, NY, 10010

United States

Phone: 1 646 524 0300

Email: jonah.feldman@nyulangone.org


Background: The transformation of health care during COVID-19, with the rapid expansion of telemedicine visits, presents new challenges to chronic care and preventive health providers. Clinical decision support (CDS) is critically important to chronic care providers, and CDS malfunction is common during times of change. It is essential to regularly reassess an organization's ambulatory CDS program to maintain care quality. This is especially true after an immense change, like the COVID-19 telemedicine expansion.

Objective: Our objective is to reassess the ambulatory CDS program at a large academic medical center in light of telemedicine's expansion in response to the COVID-19 pandemic.

Methods: Our clinical informatics team devised a practical framework for an intrapandemic ambulatory CDS assessment focused on the impact of the telemedicine expansion. This assessment began with a quantitative analysis comparing CDS alert performance in the context of in-person and telemedicine visits. Board-certified physician informaticists then completed a formal workflow review of alerts with inferior performance in telemedicine visits. Informaticists then reported on themes and optimization opportunities through the existing CDS governance structure.

Results: Our assessment revealed that 10 of our top 40 alerts by volume were not firing as expected in telemedicine visits. In 3 of the top 5 alerts, providers were significantly less likely to take action in telemedicine when compared to office visits. Cumulatively, alerts in telemedicine encounters had an action taken rate of 5.3% (3257/64,938) compared to 8.3% (19,427/233,636) for office visits. Observations from a clinical informaticist workflow review included the following: (1) Telemedicine visits have different workflows than office visits. Some alerts developed for the office were not appearing at the optimal time in the telemedicine workflow. (2) Missing clinical data is a common reason for the decreased alert firing seen in telemedicine visits. (3) Remote patient monitoring and patient-reported clinical data entered through the portal could replace data collection usually completed in the office by a medical assistant or registered nurse.

Conclusions: In a large academic medical center at the pandemic epicenter, an intrapandemic ambulatory CDS assessment revealed clinically significant CDS malfunctions that highlight the importance of reassessing ambulatory CDS performance after the telemedicine expansion.

JMIR Med Inform 2021;9(1):e21712

doi:10.2196/21712


The COVID-19 pandemic has ushered in seismic changes in the delivery of care, as telemedicine has revolutionized and likely permanently altered how outpatient care is delivered [1,2]. Telemedicine is not just office medicine virtualized; rather, there are dramatic differences in workflows [3], differences in the composition of and interaction between members of the care team, and differences in the type and quality of clinical data available to clinicians at the time of the telemedicine encounter. With this shift, some unintended consequences for providing preventive and chronic care have been documented [4-7]. The need for rapid transition from ambulatory in-person visits to telemedicine encounters, compounded by limited resources as a byproduct of the pandemic, has further magnified chronic care management challenges.

When properly deployed, clinical decision support (CDS) tools ensure that the right information is presented in the appropriate workflow to support clinical decision making. However, two-thirds of chief medical information officers report at least 1 CDS malfunction annually [8], and a study of electronic health record (EHR) alerts at a leading academic medical center revealed that 22% of active alerts were broken [9]. Ongoing evaluation of an organization's CDS program is critical to advancing patient safety, quality, and experience of care [10,11]. As stewards of hard-earned successes in CDS-driven health care improvement, informaticists are responsible for remaining vigilant in supporting CDS-driven general health, well-being, and chronic condition management. This is perhaps even more important during the pandemic, when our CDS is at higher risk of malfunctioning and when these aspects of care are at risk of being neglected [12]. Because the competing priorities of practicing medicine during the pandemic left little time, we needed a time-sparing and straightforward approach to evaluating the health of our outpatient CDS program in the context of the COVID-19 telemedicine expansion.


NYU Langone Health (NYULH) is a large academic health care system in New York, consisting of over 5000 health care providers across 4 hospitals and ≥500 ambulatory locations. Since 2011, NYULH has grown its ambulatory care network across Manhattan, Brooklyn, Queens, Staten Island, Long Island, and Florida, and has maintained its position as a national leader in high-quality outpatient care, receiving the Ambulatory Care Quality and Accountability Award from Vizient Inc in each of the past 4 years. In numerous ways, NYULH's implementation of a single EHR (Epic Systems) and integration of ancillary systems help to facilitate ongoing excellence in ambulatory quality by connecting the vast network of locations, supporting best practice with electronic decision support and presenting dashboards that reinforce the NYULH culture of data-driven performance and accountability. This organizational structure provides the ideal context to assess ambulatory CDS for chronic disease.

In this report, our study period is March 19 to May 31, 2020, spanning from the start of the COVID-19 pandemic–related telemedicine expansion through the end of May. Throughout this time, NYULH was at the epicenter of the first wave of the national COVID-19 pandemic. NYULH consolidated outpatient practices and redeployed ambulatory providers to the inpatient setting. In-person office visits continued, but most patients opted for telemedicine video visits with their usual ambulatory care providers. During the study period, 2100 providers completed 244,425 telemedicine video visits. These video visits accounted for 59% (244,425/414,076) of the ambulatory visit volume, with in-person office visits accounting for the other 41% (169,651/414,076).

To evaluate how this shift toward telemedicine impacted ambulatory CDS at NYULH, our clinical informatics team developed a basic framework for assessing our CDS program's fitness to navigate the transformation. The framework included the following 4 steps:

  1. Analysis of alert firing volumes and per-encounter firing rates in telemedicine encounters and office visits.
  2. Analysis of action taken rates for the same alerts shown in telemedicine encounters and office visits.
  3. Clinical informaticist review of alerts with significant discrepancies in firing volume or action taken rates using the 5 Rights of CDS to identify optimization opportunities.
  4. Review of optimization opportunities through the existing CDS governance structures and consideration of ways to enhance CDS governance for rapid transformation.

Our framework builds upon previously published work [13] that describes alert malfunction as occurring across two major domains: (1) malfunction in alert display and (2) malfunction in provider response. We chose firing volumes and per-encounter firing rates to assess for dysfunction in CDS alert display. Firing rates for the same alert may vary significantly across clinical care settings [14], and we aimed to understand whether telemedicine, as a care setting, demonstrated significantly different firing volumes or firing rates than office visits. Regular review of alert firing rates is the best practice for identifying alert display malfunctions [15], and many organizations, like ours, have adopted dashboards for ongoing monitoring of alert firing rates and volumes [16-18]. Although we have such dashboards, the comparative nature of our approach (comparing behavior between telemedicine and office visits) led us to use the Epic system's Slicer Dicer BPA data model for data extraction in this evaluation.

As the second domain of alert malfunction, we looked at clinician response. Though there are many ways to measure alert performance in this domain [19,20], we chose the action taken rate, defined as the rate at which a clinician takes any action toward acknowledging a displayed alert. This measure allowed us to look for trends across many alerts with different action types. We also sought to understand whether the action taken rates for the same alert differed across telemedicine and office visits. At NYULH, providers experience the same user interface with the same activities and navigators in telemedicine and office visits. Thus, differences in the alert action rates represent actual disparities in the CDS of a single alert presented in two different clinical contexts. Again, we used the Epic system’s Slicer Dicer BPA data model for data extraction and group comparison.


Evaluating Ambulatory Alert Firing Volumes After the COVID-19 Telemedicine Expansion

Figure 1 shows the overall trend in NYULH daily alert firing volumes at baseline and through the study period.

Figure 1. Ambulatory alert firing volumes during the COVID-19 pandemic by date and visit type. CDS: clinical decision support.

In total and across ambulatory settings, alert firing volumes were down during the pandemic study period (March 19-May 31, 2020). Moreover, far fewer alerts fired in telemedicine encounters (64,938) than in office visits (233,636), an unexpected finding given that providers completed more telemedicine visits than office visits during this time (244,425 versus 169,651). On a per-encounter basis, clinicians were shown more than five times as many alerts in office visits (1.37 alerts per encounter) as in telemedicine video visits (0.26 alerts per encounter).
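The per-encounter figures above can be reproduced directly from the reported counts. A minimal sketch (the function name is ours, not from the paper):

```python
# Reproduce the per-encounter alert firing rates reported above from the
# study-period counts (March 19-May 31, 2020).

def alerts_per_encounter(alerts_fired: int, encounters: int) -> float:
    """Average number of CDS alerts shown per encounter."""
    return alerts_fired / encounters

office_rate = alerts_per_encounter(233_636, 169_651)  # ~1.38 (reported as 1.37)
tele_rate = alerts_per_encounter(64_938, 244_425)     # ~0.27 (reported as 0.26)

# Office visits showed more than five times as many alerts per encounter.
print(f"office/telemedicine ratio: {office_rate / tele_rate:.1f}x")
```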

We also compared per-encounter alert firing volumes for each alert in two contexts: telemedicine and office visits. Screening for differences in per-encounter firing volumes between these two settings allowed us to quickly identify malfunctioning alerts that were not firing in the telemedicine setting. We found that 10 of our top 40 alerts by volume were not firing appropriately in telemedicine encounters. Further investigation revealed that ambulatory alerts restricted by encounter type were often not firing as expected, while alerts restricted by practice location or provider specialty were performing well. Clinical informaticists and operational leaders reviewed the list of alerts that were not firing and validated that they were appropriate for telemedicine encounters. The reconfiguration of these alerts to include telemedicine encounter types went live in the production system on May 14. Figure 2 shows the impact on the overall daily alert volume and diversity of clinical alerts in telemedicine.

Figure 2. Telemedicine alert firing volumes during the COVID-19 pandemic by date and alert type.

Evaluating Ambulatory Alert Action Taken Rates After the COVID-19 Telemedicine Expansion

To understand whether providers were interacting with alerts displayed in the context of telemedicine encounters at the same rates as during office visits, we looked at the action taken rates in these two clinical contexts during the same study period (March 19-May 31). Table 1 contains the top 5 provider-facing alerts by volume and compares action taken rates for these same alerts displayed in telemedicine and in-person office visits. We found that there were statistically significant differences in the action taken rate in 3 of the top 5 alerts, and in these same 3 alerts, providers were less likely to take action in telemedicine encounters when compared to office visits.

Table 1. Action taken rates for the top 5 provider-facing alerts by volume.
Alert | Telemedicine, n/N (%) | Office visit, n/N (%) | P value
Shingles vaccine | 1032/26,458 (3.9%) | 1576/25,011 (6.3%) | <.001
High BMI counseling | 431/3102 (13.9%) | 9618/75,144 (12.8%) | .07
Provider missing weight for BMI | 24/8101 (0.3%) | 21/10,572 (0.2%) | .19
Tobacco use intervention | 296/6441 (4.6%) | 1543/15,281 (10.1%) | <.001
Pediatric nutrition counseling | 85/2139 (4.0%) | 517/6381 (8.1%) | <.001
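Comparisons of this kind can be checked with a standard two-proportion z-test; the paper does not state which statistical test was used, so this is an assumption. A self-contained sketch using only the standard library, applied to the shingles vaccine and high BMI counseling rows of Table 1:

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Shingles vaccine row: telemedicine 1032/26,458 vs office 1576/25,011
z, p = two_proportion_z_test(1032, 26_458, 1576, 25_011)
print(f"shingles: z={z:.2f}, P<.001? {p < 0.001}")  # lower in telemedicine

# High BMI counseling row: 431/3102 vs 9618/75,144 (reported P=.07)
z_bmi, p_bmi = two_proportion_z_test(431, 3_102, 9_618, 75_144)
print(f"high BMI: P={p_bmi:.2f}")  # not statistically significant
```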

Cumulatively, from March 19-May 31, a total of 64,938 alerts fired in telemedicine encounters, with clinicians taking action on 3257 of those alerts, for an action taken rate of 5.3% (3257/64,938). By comparison, 233,636 alerts fired in office visits, with clinicians taking action on 19,427 alerts, for an action taken rate of 8.3% (19,427/233,636). Although analyses of this type are subject to confounding factors, the superior performance of alerts in office encounters is not surprising. These alerts went through years of iterative improvements specifically for the office setting. Our clinical assessment was that opportunities exist to optimize at least some of these alerts to perform better during virtual visits.

Clinical Evaluation of Ambulatory Alerts After the Telemedicine Expansion Using the 5 Rights of CDS

Based on the analysis described above, we were able to prioritize alerts for review using the following methodology. For each alert, we calculated the difference in the per-encounter firing rate between telemedicine and office encounters and the difference in the action taken rate in these two settings. We prioritized for review the alerts with the most significant differentials.
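The prioritization step above can be sketched as follows. The alert names and numbers are hypothetical, and the paper does not specify how the two differentials are combined, so we simply sum them here:

```python
# Rank alerts for clinical review by the combined differential between
# telemedicine and office performance. All data below is illustrative only.

alerts = [
    {"name": "Alert A", "fire_rate_tele": 0.02, "fire_rate_office": 0.30,
     "action_rate_tele": 0.04, "action_rate_office": 0.09},
    {"name": "Alert B", "fire_rate_tele": 0.25, "fire_rate_office": 0.28,
     "action_rate_tele": 0.08, "action_rate_office": 0.08},
]

def review_priority(alert: dict) -> float:
    """Larger gaps between settings -> higher priority for clinical review."""
    fire_gap = abs(alert["fire_rate_office"] - alert["fire_rate_tele"])
    action_gap = abs(alert["action_rate_office"] - alert["action_rate_tele"])
    return fire_gap + action_gap

# Review queue, most discrepant alert first.
for alert in sorted(alerts, key=review_priority, reverse=True):
    print(alert["name"], round(review_priority(alert), 2))
```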

During the clinical workflow review, our informaticists reflected on the 5 rights of CDS (the right information, to the right person, in the right intervention format, through the right channel, at the right time in the workflow) [21,22]. Physician informaticists evaluated each alert, looking for opportunities to optimize the alert for telemedicine video visits. As an example of our approach, Table 2 summarizes findings for 4 alerts prioritized for clinical review. Textbox 1 details common overall themes from the informaticist review of multiple alerts.

Table 2. Clinical informaticist review of the 5 rights of clinical decision support as applied to NYU Langone Health alerts firing in telemedicine visits.
Alert | Right time? | Right information? | Right person? | Right format? | Right channel?
Shingles vaccine | Vaccines cannot be given virtually; telemedicine is only the right time if guidance is for the patient to follow up at the pharmacy or office | Should include a link to a shingles vaccine administration locator for available locations that have the vaccine in stock | Yes | Yes | Yes
High BMI counseling | Yes, but alerts do not fire without a weight being entered | Yes | Yes | Yes | Yes
Provider missing weight for BMI | No; once the video encounter starts, it is already too late. Weight should be collected before the encounter | Yes | Alert should go to the patient or office staff | Yes | Consider a patient-facing alert through the portal
Tobacco use intervention | No; the alert does not appear at the right time in the workflow unless staff document social history before the provider | Yes | Support staff should be encouraged to virtually room the patient and collect the history | Consider adding an interruptive alert after the provider enters tobacco use history | Yes

Theme 1: Telemedicine visits may have different workflows than office visits, and some alerts developed for the office may not be appearing at the optimal time in the telemedicine workflow.

  • Alerts that appear to providers when they enter the encounter during office visits may not appear in a telemedicine encounter until later in the visit.
  • These alerts are triggered by clinical data (eg, history, medical problems, vitals, medications) that are usually entered in the office by support staff before the provider sees the patient.
  • Without support staff rooming the patient during a telemedicine visit, the alert does not appear until later, when the provider enters this data.
  • Noninterruptive alerts are likely to be missed at this later time.

Theme 2: Missing clinical data is a common reason for decreased alert firing rates seen in telemedicine visits.

  • Data like vital signs and point of care testing may not be available at the time of the telemedicine visit, and alerts dependent on this data may not fire.
  • Without the full care team (eg, medical assistant, nurse, nutritionist, physician extender) contributing to the data collection, reason for visit, medical history, surgical history, social history, medications, and problem list may not be complete.

Theme 3: Remote patient monitoring (RPM) and patient-reported clinical data entered through the portal should have a role in replacing data collection usually completed in the office by a medical assistant or registered nurse.

  • The current RPM approach is to collect data between visits. Operational and technical changes will need to be made to optimize RPM for collection on the day of the encounter. This encounter-level data is necessary clinically and would also be available to trigger alerts.
  • As patients enter the video visit through the patient portal, there is an opportunity to enter their own clinical data.

Theme 4: When firing rates are down because clinical data is not available, consider workflows where office staff collect data before the provider enters the virtual visit.

  • Depending on the need, staff could reach out to patients before or on the day of the visit.
  • This strategy would be well paired with RPM and staff playing the role of “virtually rooming” the patient and supporting patient adoption and proper use of remote monitoring.
Textbox 1. Themes from clinical informaticist review of NYU Langone Health alerts with discrepant firing rates or action taken rates in telemedicine and office visits.

Review of Optimization Opportunities Through Existing Governance Structures

At NYULH, we have a multistakeholder CDS governance structure that oversees the CDS life cycle from initial request through post–go-live intervention monitoring. Before the COVID-19 pandemic, alert review was conducted on an ad hoc basis. We have since migrated our CDS inventory from a Microsoft Excel spreadsheet to Collibra's Data Governance Platform, a comprehensive knowledge management tool. CDS leadership can now initiate automated workflows that send operational owners a message with a link to review an intervention's metadata and firing rate and to document their operational review; the CDS committee can track these reviews. In parallel, we have made plans to give our operational teams access to Epic's BPA data model in Slicer Dicer, Epic's self-service analytics platform, so that the CDS committee and informatics community can more rapidly understand firing rate characteristics and improve the alerts.

With this infrastructure in place, we are in the beginning stages of systematically reviewing all CDS interventions, prioritizing those with high volumes and high override rates. As we lay the groundwork for success, some early insights include the following: (1) Support from the executive leadership of ambulatory care practices has been particularly critical, even more so than in traditional CDS improvement initiatives, as the next steps involve new operational processes for RPM in virtual care and the changing role of support staff in this context. (2) The first principle of our ambulatory CDS governance is to “avoid interruption of care whenever possible.” Historically, 98.5% of our ambulatory alerts have been noninterruptive. Our CDS stakeholders requested a subgroup analysis of the remaining 1.5% of ambulatory alerts, the interruptive alerts that fired during the pandemic period; we found that, among interruptive ambulatory alerts, the action taken rate was higher in telemedicine visits (40.5%, 1194/2949) than in office visits (29.4%, 691/2370; P<.001). These findings were surprising and warrant further study and review; it is possible that in telemedicine encounters, with providers more immersed in the system, modal alerts are comparatively more effective. Changing the alert format for alerts not performing well in telemedicine will likely be an ongoing point of discussion at our CDS governance committee meetings.


Principal Findings

In this report, we present a framework used to evaluate the impact of the telemedicine expansion on our ambulatory CDS program. Based on our findings, we advocate that other organizations consider performing their own targeted ambulatory CDS checkup. We outline several key themes that institutions can target when conducting their own evaluations of CDS in ambulatory telemedicine.

The strength of our approach is in its practical nature, using data that is readily available to prioritize rapid clinical review of CDS alerts most in need of intervention. The weakness may be in its narrow focus. A review of published CDS malfunction taxonomies [23] reveals that the majority of described alert malfunction types may not be discovered using our methodology. We have focused exclusively on best practice advisory alerts, but medication alerts, order sets, documentation templates, and other CDS features should also be re-examined with the shift to telemedicine. There is much work still to be done.

With limitations acknowledged, in a short amount of time, we were able to identify and fix significant CDS malfunctions, recognize alerts in need of optimization, and generate ideas for improving the performance of those alerts. On July 1, 2020, NCQA released “a sweeping set of adjustments to 40 of its widely-used Healthcare Effectiveness Data and Information Set (HEDIS) measures – in support of health plans, clinicians and patients who rely on telehealth services in record numbers as a result of the disruption brought on by the COVID-19 pandemic” [24]. Changes in the HEDIS measures will promote further conversations about quality measurement in telehealth, and will soon lead to increased attention paid to the performance of CDS in this context.

Conclusion

To our knowledge, this is the first description of how the expansion of telemedicine in response to COVID-19 impacted ambulatory CDS. The COVID-19 pandemic presents many new challenges for the management of chronic diseases. We have demonstrated that an ambulatory CDS checkup focused on telemedicine can positively impact the provision of preventative and chronic care. Our practical framework for reviewing CDS in light of the telemedicine expansion helped identify significant CDS malfunctions and important optimization opportunities.

Acknowledgments

The authors thank Nader Mherabi and Suzanne Howard for their leadership during the pandemic and their ongoing support of our clinical informatics and ambulatory CDS teams. We also thank the thousands of NYU Langone Health clinicians who have provided patients with exemplary acute and preventative care during these difficult times.

Conflicts of Interest

None declared.

  1. Mann D, Chen J, Chunara R, Testa P, Nov O. COVID-19 transforms health care through telemedicine: Evidence from the field. J Am Med Inform Assoc 2020 Jul 01;27(7):1132-1135 [FREE Full text] [CrossRef] [Medline]
  2. Hollander JE, Carr BG. Virtually Perfect? Telemedicine for Covid-19. N Engl J Med 2020 Apr 30;382(18):1679-1681. [CrossRef] [Medline]
  3. Ranganathan C, Balaji S. Key Factors Affecting the Adoption of Telemedicine by Ambulatory Clinics: Insights from a Statewide Survey. Telemed J E Health 2020 Feb 01;26(2):218-225. [CrossRef] [Medline]
  4. Tapper EB, Asrani SK. The COVID-19 pandemic will have a long-lasting impact on the quality of cirrhosis care. J Hepatol 2020 Aug;73(2):441-445 [FREE Full text] [CrossRef] [Medline]
  5. Shankar A, Saini D, Roy S, Mosavi Jarrahi A, Chakraborty A, Bharti SJ, et al. Cancer Care Delivery Challenges Amidst Coronavirus Disease – 19 (COVID-19) Outbreak: Specific Precautions for Cancer Patients and Cancer Care Providers to Prevent Spread. Asian Pac J Cancer Prev 2020 Mar 01;21(3):569-573. [CrossRef]
  6. Eccleston C, Blyth FM, Dear BF, Fisher EA, Keefe FJ, Lynch ME, et al. Managing patients with chronic pain during the COVID-19 outbreak: considerations for the rapid introduction of remotely supported (eHealth) pain management services. Pain 2020 May;161(5):889-893 [FREE Full text] [CrossRef] [Medline]
  7. Stoian AP, Banerjee Y, Rizvi AA, Rizzo M. Diabetes and the COVID-19 Pandemic: How Insights from Recent Experience Might Guide Future Management. Metab Syndr Relat Disord 2020 May 01;18(4):173-175 [FREE Full text] [CrossRef] [Medline]
  8. Wright A, Hickman TT, McEvoy D, Aaron S, Ai A, Andersen JM, et al. Analysis of clinical decision support system malfunctions: a case series and survey. J Am Med Inform Assoc 2016 Nov 28;23(6):1068-1076 [FREE Full text] [CrossRef] [Medline]
  9. Aaron S, McEvoy D, Ray S, Hickman T, Wright A. Cranky comments: detecting clinical decision support malfunctions through free-text override reasons. J Am Med Inform Assoc 2019 Jan 01;26(1):37-43 [FREE Full text] [CrossRef] [Medline]
  10. Shahmoradi L, Safadari R, Jimma W. Knowledge Management Implementation and the Tools Utilized in Healthcare for Evidence-Based Decision Making: A Systematic Review. Ethiop J Health Sci 2017 Sep 22;27(5):541-558 [FREE Full text] [CrossRef] [Medline]
  11. Yoshida E, Fei S, Bavuso K, Lagor C, Maviglia S. The Value of Monitoring Clinical Decision Support Interventions. Appl Clin Inform 2018 Jan 07;9(1):163-173 [FREE Full text] [CrossRef] [Medline]
  12. Wu J. An Important but Overlooked Measure for Containing the COVID-19 Epidemic: Protecting Patients with Chronic Diseases. China CDC Weekly 2020;2(15):249-250. [CrossRef]
  13. McCoy AB, Waitman LR, Lewis JB, Wright JA, Choma DP, Miller RA, et al. A framework for evaluating the appropriateness of clinical decision support alerts and responses. J Am Med Inform Assoc 2012 May 01;19(3):346-352 [FREE Full text] [CrossRef] [Medline]
  14. Seidling HM, Phansalkar S, Seger DL, Paterno MD, Shaykevich S, Haefeli WE, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011;18(4):479-484 [FREE Full text] [CrossRef] [Medline]
  15. Wright A, Ash JS, Aaron S, Ai A, Hickman TT, Wiesen JF, et al. Best practices for preventing malfunctions in rule-based clinical decision support alerts and reminders: Results of a Delphi study. Int J Med Inform 2018 Oct;118:78-85 [FREE Full text] [CrossRef] [Medline]
  16. Simpao AF, Ahumada LM, Desai BR, Bonafide CP, Gálvez JA, Rehman MA, et al. Optimization of drug-drug interaction alert rules in a pediatric hospital's electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc 2015 Mar 15;22(2):361-369. [CrossRef] [Medline]
  17. Zimmerman CR, Jackson A, Chaffee B, O'Reilly M. A dashboard model for monitoring alert effectiveness and bandwidth. AMIA Annu Symp Proc 2007 Oct 11:1176. [Medline]
  18. McCoy A, Thomas E, Krousel-Wood M, Sittig D. Clinical decision support alert appropriateness: a review and proposal for improvement. Ochsner J 2014;14(2):195-202 [FREE Full text] [Medline]
  19. Kane-Gill SL, O’Connor MF, Rothschild JM, Selby NM, McLean B, Bonafide CP, et al. Technologic Distractions (Part 1). Critical Care Medicine 2017;45(9):1481-1488. [CrossRef]
  20. McGreevey JD, Mallozzi CP, Perkins RM, Shelov E, Schreiber R. Reducing Alert Burden in Electronic Health Records: State of the Art Recommendations from Four Health Systems. Appl Clin Inform 2020 Jan 01;11(1):1-12 [FREE Full text] [CrossRef] [Medline]
  21. Osheroff JA, Teich J, Levick D, Saldana L, Velasco F, Sittig D, et al. Improving Outcomes with Clinical Decision Support: An Implementer's Guide (Second Edition). Chicago, IL, USA: HIMSS Publishing; 2012.
  22. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005 Apr 02;330(7494):765 [FREE Full text] [CrossRef] [Medline]
  23. Wright A, Ai A, Ash J, Wiesen JF, Hickman TTT, Aaron S, et al. Clinical decision support alert malfunctions: analysis and empirically derived taxonomy. J Am Med Inform Assoc 2018 May 01;25(5):496-506 [FREE Full text] [CrossRef] [Medline]
  24. COVID-Driven Telehealth Surge Triggers Changes to Quality Measures. National Committee for Quality Assurance.   URL: https://www.ncqa.org/programs/data-and-information-technology/telehealth/covid-driven-telehealth-surge-triggers-changes-to-quality-measures/ [accessed 2021-01-06]


BMI: body mass index
CDS: clinical decision support
EHR: electronic health record
HEDIS: Healthcare Effectiveness Data and Information Set
NYULH: NYU Langone Health
RPM: remote patient monitoring


Edited by G Eysenbach; submitted 10.07.20; peer-reviewed by A Adly, A Adly, M Adly, S Sabarguna; comments to author 21.08.20; revised version received 12.10.20; accepted 15.12.20; published 27.01.21

Copyright

©Jonah Feldman, Adam Szerencsy, Devin Mann, Jonathan Austrian, Ulka Kothari, Hye Heo, Sam Barzideh, Maureen Hickey, Catherine Snapp, Rod Aminian, Lauren Jones, Paul Testa. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 27.01.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.