
An Adaptation of the RAND/UCLA Modified Delphi Panel Method in the Time of COVID-19

Authors: Broder MS, Gibbs SN, Yermilov I

Received 4 December 2021

Accepted for publication 26 March 2022

Published 20 May 2022 Volume 2022:14 Pages 63–70

DOI https://doi.org/10.2147/JHL.S352500

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Pavani Rangachari



Michael S Broder, Sarah N Gibbs, Irina Yermilov

Outcomes Research, Partnership for Health Analytic Research (PHAR), LLC, Beverly Hills, CA, USA

Correspondence: Michael S Broder, Partnership for Health Analytic Research (PHAR), LLC, 280 S Beverly Drive, Suite 404, Beverly Hills, CA, 90212, USA, Tel +1-310-858-9555, Fax +1-310-858-9550, Email [email protected]

Abstract: The RAND/UCLA modified Delphi panel method is a formal group consensus process that systematically and quantitatively combines expert opinion and evidence by asking panelists to rate, discuss, then re-rate items. The method has been used to develop medical society guidelines, other clinical practice guidelines, disease classification systems, research agendas, and quality improvement interventions. Traditionally, a group of experts meets in person to discuss the results of a first-round survey. After the meeting, experts complete a second-round survey used to develop areas of consensus. During the COVID-19 pandemic, this aspect of the method was not possible. As such, we have adapted the method to conduct virtual RAND/UCLA modified Delphi panels. In this study, we present a targeted literature review to describe and summarize the existing evidence on the RAND/UCLA modified Delphi panel method and outline our adaptation for conducting these panels virtually. Transitioning from in-person to virtual meetings was not without challenges, but it has also brought unexpected advantages. The method we describe here can be a cost-effective and efficient alternative for researchers and clinicians.

Keywords: Delphi panel, expert panel, consensus, virtual meeting, COVID-19, pandemic

Introduction

The RAND/UCLA modified Delphi panel method is a formal group consensus process that systematically and quantitatively combines expert opinion and evidence by asking panelists to rate, discuss, then re-rate items.1 In brief, the steps include a literature review, selection of panelists, generation of a rating form, a first-round rating form survey, an in-person meeting where panelists discuss areas of disagreement, final ratings and analysis of those ratings, and the development of a written summary of areas of agreement.

Such panels have been used to develop medical society guidelines,2 other practice guidelines,3–7 disease classification systems,8 research agendas,9 and quality improvement interventions.10 Guidelines developed using this method have content, construct, and predictive validity. Independent modified Delphi panels working from the same evidence base produce similar results, and patients treated according to the resulting guidelines have been shown to have improved outcomes.11,12

The panel meeting, a significant component of the method where panelists discuss areas of disagreement and the rationale for their first-round ratings, is traditionally held in person. During the COVID-19 pandemic, meetings could not be held in person. To maintain the integrity of the method while continuing to develop practice guidelines during the ongoing pandemic, we adapted this method and have successfully conducted several virtual RAND/UCLA modified Delphi panels since March 2020. In this study, we conducted a targeted literature review to describe and summarize the existing evidence on the RAND/UCLA modified Delphi panel method and outline our adaptation for conducting these panels virtually.

History of the RAND/UCLA Delphi Panel Method

The “Delphi method” was originally developed by the RAND Corporation in the 1950s as a way to obtain group consensus on military decisions.13 The technique was designed to avoid direct confrontation among experts: rather than gathering experts to complete ratings in person, it relied on repeated, individual questioning through surveys, later called “rating forms.” In this way, the method eliminated “specious persuasion,” in which the participant with the strongest convictions or greatest supposed authority pushes other participants to agree against their own judgment.14 The method used successive rounds of surveys. Expert respondents were shown the group medians after each round and asked to explain their reasoning if their ratings fell outside a certain range. Several rounds were conducted in this way until “consensus” emerged.

In the 1980s, RAND adapted the method in partnership with UCLA for use in the medical setting15 (becoming the RAND/UCLA modified Delphi panel method). With this adaptation, researchers added a summary of relevant literature to provide uniform context to experts and limited the number of repeated individual questionnaires to two (rather than an unlimited number), one completed before and one after an in-person meeting. Researchers found that, when addressing clinical questions, a group discussion was vital for debating areas of disagreement.16 However, as in the original design, the group was never required to agree during the meeting. Instead, the second-round ratings were used as a summary of the group consensus (defined mathematically based on the number of low versus high ratings).16
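To make that mathematical definition concrete, the sketch below classifies a single rating-form item from second-round ratings. It is a minimal illustration only: it assumes the common 1–9 appropriateness scale and a simplified disagreement rule (at least three panelists rating in each extreme third), whereas actual panels set these thresholds per the RAND/UCLA user’s manual16 and panel size.

```python
from statistics import median

def classify_item(ratings, extreme_n=3):
    """Classify one rating-form item from panelists' 1-9 ratings.

    Simplified version of the RAND/UCLA rules: the item shows
    "disagreement" when at least `extreme_n` panelists rate in each
    extreme third (1-3 and 7-9); otherwise the median determines the
    classification. Actual thresholds vary with panel size.
    """
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)
    if low >= extreme_n and high >= extreme_n:
        return "disagreement"
    m = median(ratings)
    if m >= 7:
        return "appropriate"
    if m <= 3:
        return "inappropriate"
    return "uncertain"

# A 10-panelist item with a high median and no polarized ratings:
print(classify_item([7, 8, 8, 9, 7, 6, 8, 9, 7, 8]))  # -> appropriate
```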

Contemporary Use of the RAND/UCLA Modified Delphi Panel Method

The RAND/UCLA modified Delphi panel method has been used to develop clinical practice guidelines in a variety of areas. For example, it has been used to determine the appropriateness of and treatment preference for coronary angiography, percutaneous transluminal coronary angioplasty, and coronary artery bypass grafting.7 It has been used to develop criteria for the appropriate use of urinary catheters in hospitalized patients,17 peripherally inserted central catheters,18 and transfusions in hepatectomies.19 It has also been used to identify circumstances when spinal mobilization and manipulation are inappropriate for patients with chronic low back pain5 and when and how to screen for neurotrophic keratopathy.20 Guidelines have been developed using the method to enhance the use of medical and surgical measures for recurrent stroke prevention,21 to identify systemic treatments for unresectable metastatic well-differentiated carcinoid tumors,3 and to determine the appropriateness of various biologics for Crohn’s disease.22 The method has also helped determine adequate follow-up intervals for patients with Cushing’s disease at different phases in their treatment course4 and describe specific circumstances when it is appropriate to consider tapering thrombopoietin receptor agonists in primary immune thrombocytopenia.6

Why the Method Works

Multiple aspects of the RAND/UCLA modified Delphi panel method combine to make it a successful tool for developing clinical guidelines. First, the structure of the rating form focuses decision-making. The form includes hundreds of patient scenarios differing across clinically relevant characteristics.16 It is typically structured as a grid, in which scenarios in nearby cells differ only on one or two characteristics. This design encourages panelists to think granularly about whether each characteristic alone has an impact on their ratings. In clinical practice, physicians faced with complex and diverse patients often make intuitive judgments using what has been called “System 1” decision-making—a faster, unconscious, and less intentional decision-making process.23 While medical students are trained to solve clinical problems by separately considering multiple aspects of the clinical picture, expert physicians typically use the more intuitive process. However, medical decision-making relying only on System 1 thinking may be more prone to error due to cognitive biases.24 For example, in clinical practice, when deciding if an intervention is appropriate for an older patient with a complex set of risk factors, a clinician might focus only on age—substituting the simpler question for a more complex one. The detailed ratings used in the Delphi panel rating form encourage panelists to use the slower, more logical, and conscious “System 2” thinking. With each risk factor explicitly listed, none can easily be ignored.

The use of two rounds of ratings with group discussion interposed is also likely a factor in the success of the process. It has long been observed that the accuracy of an estimate can be improved by combining estimates from different individuals,25 sometimes called the “wisdom of the crowd.”26 Clinical decisions often hinge on the likelihood of a particular outcome. For example, the likelihood of myocardial infarction in an untreated patient with cardiac risk factors drives the decision to prescribe a statin. Panelists asked to rate the appropriateness of such a prescription are implicitly estimating that underlying risk. Combining individual panelists’ ratings should improve the accuracy of this estimate.
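This statistical intuition can be shown with a short simulation; the sketch below is our own hypothetical illustration (the risk value and noise model are invented), not part of the panel method itself. When each panelist’s estimate equals the true value plus independent, unbiased noise, the group median lands closer to the truth on average than a single panelist’s estimate.

```python
import random
from statistics import median

random.seed(0)
TRUE_RISK = 0.20       # hypothetical true 10-year MI risk
N_PANELISTS = 11
N_TRIALS = 10_000

single_err = group_err = 0.0
for _ in range(N_TRIALS):
    # Each panelist's estimate = truth + independent, unbiased noise.
    estimates = [TRUE_RISK + random.gauss(0, 0.05) for _ in range(N_PANELISTS)]
    single_err += abs(estimates[0] - TRUE_RISK)      # one panelist alone
    group_err += abs(median(estimates) - TRUE_RISK)  # pooled "crowd" estimate

print(f"mean error, single panelist: {single_err / N_TRIALS:.4f}")
print(f"mean error, group median:    {group_err / N_TRIALS:.4f}")
```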

More recently, researchers have demonstrated “the wisdom of the inner crowd,” finding that aggregated estimates all made by a single individual may be more accurate than that individual’s “best guess.”26,27 When an individual’s second estimate is constructed from the perspective of someone with whom they disagree, accuracy improves even more.28 During the panel discussion, the moderator guides panelists to share their reasoning on scenarios for which there was disagreement. Panels purposely include experts from different backgrounds (eg specialty, practice type, geographic regions). Explicit discussion of differences of opinion, where panelists must consider the opinion of others with views different than their own, may lead the panelists to more consistent and logical positions. The increased agreement seen in almost all second-round Delphi panel ratings is consistent with this observation. In the second round, each individual panelist re-rates the items, having now considered not only what someone they disagree with (eg another panelist) thinks, but why that panelist thinks that way.

Beyond the cognitive process noted above, the discussion at the in-person meeting also results in decreased disagreement among the group by harmonizing panelists’ interpretations of definitions used in the rating form.16 Panelists discuss definitions, clarify items that were unclear, and come to a common understanding of terms that can then be used in the resulting clinical guidance. For example, in a panel to determine the appropriateness of tapering thrombopoietin receptor agonists in primary immune thrombocytopenia, characteristics that could not be defined numerically (eg major/minor bleeding), unlike those that could (eg platelet count), particularly benefited from group discussion, which made it more likely that all panelists interpreted each characteristic in the same way.6

Lastly, having panelists complete the rating form independently a second time following the meeting, rather than requiring the group to agree at the meeting, helps prevent the discussion from stalling if panelists refuse to agree. In The Wisdom of Crowds, Surowiecki argues that too much communication in large groups can be unmanageable and inefficient.26 The structured nature of the Delphi panel meeting discussion, in which the moderator works through ratings completed in advance, limits these inefficiencies. If arguments arise, the moderator can simply move to the next item, acknowledging that there may be some remaining areas of disagreement in the second-round results.

Validity and Reliability of the Method

Because it avoids certain cognitive biases and harnesses the collective knowledge of experts without the limitations of large crowds, the method produces guidelines with content, construct, and predictive validity. Using a retrospective medical chart review design, researchers found that patients with heart disease who were eligible for revascularization and identified as likely to benefit from surgery per guidelines developed using the RAND/UCLA modified Delphi panel process had better outcomes (lower mortality and prevalence of angina) after revascularization than did patients treated medically.11 This study was repeated prospectively at three hospitals in London,12 as well as through a post-hoc analysis of clinical trial data,28 with similar results. Recently, a group of heart disease specialty societies (eg the American College of Cardiology, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, and the American Association for Thoracic Surgery) updated these revascularization guidelines using the same method.29

In addition, the method has been found to be reliable. Studies have reported a test-retest reliability of >0.9 using the same panelists 6–8 months later30 and kappa statistics across several panels with different members similar to those of some common diagnostic tests.31 Independent panels using this method also produce ratings similar to one another, although the degree of similarity depends on the level of evidence available. A review of Delphi panels showed 90% agreement among panels that used randomized controlled trial evidence compared to 70–80% agreement among panels that used a weaker evidence base.31
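For readers unfamiliar with the agreement statistic cited above, the sketch below computes Cohen’s kappa for two hypothetical panels classifying the same scenarios; the data are invented for illustration and are not drawn from the cited studies.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels on the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical appropriateness classifications of 12 scenarios by two panels
panel1 = ["app", "app", "inapp", "unc", "app", "inapp",
          "app", "unc", "inapp", "app", "app", "unc"]
panel2 = ["app", "app", "inapp", "app", "app", "inapp",
          "app", "unc", "unc", "app", "app", "unc"]
print(f"kappa = {cohen_kappa(panel1, panel2):.2f}")  # -> kappa = 0.72
```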

Adapting the RAND/UCLA Modified Delphi Panel Method to a Virtual Setting

The RAND/UCLA modified Delphi panel method places emphasis on the in-person discussion.16 However, during a time when in-person meetings could no longer take place due to the COVID-19 pandemic, we adapted the discussion to a virtual setting (Figure 1).

Figure 1 RAND/UCLA modified Delphi panel process.

Certain aspects of the method did not need to change. For example, we continued to include 10–13 panelists and were able to discuss a similar range of items (200–700) (Table 1). The comprehensive summary of relevant literature developed by the research team had typically already been shared electronically; instead of sharing printed studies at the in-person meeting, we provided an online link to a folder with the electronic publications cited in the literature review. To inform the development of the rating form, researchers had always conducted, and continued to conduct, a series of phone conversations with all participating panelists. During these conversations, the research team identifies characteristics critical to clinician decision-making on the panel topic and uses this information to develop the survey. Lastly, prior to the meeting in any format, panelists had been asked to complete their ratings electronically. These first-round ratings allow the moderator to identify patterns of agreement and disagreement and prepare for the meeting, as sketched below.
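As a sketch of this meeting-preparation step (our own hypothetical illustration, not the actual analysis code used for these panels), the snippet below flags first-round items with polarized ratings and orders them by rating spread, so the moderator can prioritize the most contested scenarios for discussion:

```python
from statistics import median

def flag_for_discussion(first_round, extreme_n=3):
    """Return IDs of polarized first-round items, most spread first.

    first_round: dict of item ID -> list of 1-9 ratings. An item is
    flagged when at least `extreme_n` panelists rate 1-3 AND at least
    `extreme_n` rate 7-9 (the same simplified disagreement rule as in
    the earlier sketch).
    """
    flagged = []
    for item, ratings in first_round.items():
        low = sum(1 for r in ratings if r <= 3)
        high = sum(1 for r in ratings if r >= 7)
        if low >= extreme_n and high >= extreme_n:
            spread = sum(abs(r - median(ratings)) for r in ratings)
            flagged.append((spread, item))
    return [item for _, item in sorted(flagged, reverse=True)]

first_round = {
    "scenario_12": [1, 2, 8, 9, 2, 7, 1, 8, 3, 9],  # polarized -> discuss
    "scenario_13": [7, 8, 8, 9, 7, 8, 9, 7, 8, 8],  # agreement -> skip
}
print(flag_for_discussion(first_round))  # -> ['scenario_12']
```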

Table 1 RAND/UCLA Modified Delphi Panel Examples

Instead of meeting in person, we adapted the method by organizing and moderating the panel meetings virtually through video conference. In-person meetings had been held over 1–2 full days in a central geographic location. We believed virtual meetings of this length would be unproductive, so we redesigned the discussion into several 2- to 4-hour blocks of time on consecutive days. We have been able to complete these guided discussions in a shorter amount of time virtually (~6–7 hours logged into a meeting) than was required when meeting in person (1–2 full days, including travel, to attend a 6–9-hour meeting) (Table 1). In addition, shorter virtual segments allow participating experts, usually practicing physicians, to maintain some parts of their busy clinic schedules on these days. As a result, scheduling these meeting times has been simpler and has allowed us to engage a more diverse group of experts, including those who do not primarily conduct research or who were reluctant to travel for a panel meeting, even prior to the pandemic.

During our first panels at the start of the pandemic, we worked with panelists ahead of time to familiarize them with the video conferencing technology and avoid technical challenges on the day of the meeting. As the pandemic evolved, panelists became more adept at video conferencing and no longer needed this additional step.

There are some limitations to this virtual panel method. First, in-person meetings often generated informal interchange among panelists during breaks or meals, which built rapport and respect among the group. We have not been able to recreate this virtually. Second, international panels with experts from across the globe, though less expensive and easier to plan because travel is unnecessary, are harder to schedule given time differences. It can be difficult to find several-hour segments of time during which all panelists are willing to engage in a virtual discussion.

Nevertheless, in several virtual panels since early 2020, we have successfully reduced disagreement and achieved consensus on a variety of clinical topics (Table 1). For example, during a panel held on March 18–19, 2020, the proportion of items with disagreement decreased by half following the panel meeting.6 Upcoming publications will discuss other modified Delphi panel findings in detail.

Discussion

In response to the COVID-19 pandemic, we adapted the RAND/UCLA modified Delphi panel method to include a virtual, rather than in-person, meeting. The result is a cost-effective and efficient alternative. Virtual panel meetings retain the same elements that have made the method a success (eg a diverse group of 9–13 panelists, a reduction in disagreement following the discussion) and offer additional advantages, such as shorter meeting times scheduled around clinicians’ schedules and the ability to include diverse, sometimes international, experts who might not otherwise have been able to attend a 1–2 day in-person meeting.

Online adaptations of this method are not new. RAND developed an online elicitation system (ExpertLens), based on the original Delphi panel method, to facilitate rounds of online surveys among a large group of experts.32 Analytic methods are used to aggregate the data in real time and determine what the group thinks. The authors note that, like our adaptation, this method allows for a cost-efficient way of soliciting opinions from non-collocated stakeholders.

RAND also developed the RAND/PPMD Patient-Centered Method, an online approach to engaging patients and clinicians in clinical practice guideline development.33 Following the Delphi panel design, the method is consistent with how clinicians and researchers typically develop practice guidelines, but also allows for the input of patients and caregivers.

Other adaptations of the method have also been published. For example, researchers combined the more traditional Delphi panel method (repeated surveys) with a follow-up in-person meeting to develop a list of standard items that should be included in health economics analysis plans (protocols detailing procedures and statistical analyses to be conducted in health economics research).34 The researchers used a larger group to complete two rounds of surveys (62 participants in a first-round survey and 48 in a second-round survey) and convened a smaller group of nine experts to discuss the items in-person.

Unlike these methods, our adaptation maintains the steps of the original Delphi panel process; the only change is that the meeting is held virtually rather than in person. We believe this virtual adaptation is a suitable alternative to holding meetings in person.

Conclusion

The RAND/UCLA modified Delphi panel method remains the best studied and most methodologically rigorous way to help health experts reach consensus on complex clinical topics. The method reduces cognitive biases and efficiently harnesses the collective knowledge of experts. While other variations of this method have been asynchronous, our virtual adaptation maintains the panel meeting, allowing for the valuable real-time discussion that is integral to the RAND/UCLA modified Delphi panel methodology. Moving from in-person to virtual meetings was not without challenges, but there have also been unexpected advantages. Even as COVID-19 cases decline and meeting in person becomes safe again, virtual meetings will persist. The methodology we describe here can be a cost-effective and efficient alternative for researchers and clinicians. We anticipate conducting future panels using both methods depending on the circumstances.

Author Contributions

All authors (MSB, SNG, IY) contributed significantly to this work, ensured the integrity of the work, and meet the following ICMJE criteria for authorship:

  1. Made a significant contribution to the work reported, including conception, study design, execution, acquisition of data, analysis and interpretation.
  2. Drafted or substantially revised or critically reviewed the article.
  3. Agreed on the journal to which the article will be submitted.
  4. Reviewed and agreed on all versions of the article before submission and during revision.
  5. Agreed to take responsibility and be accountable for the contents of the article.

MSB, SNG, and IY designed the study; acquired, analyzed, and interpreted the data; and drafted the manuscript. All authors provided final approval for the manuscript to be published and agree to be accountable for all aspects of the work.

Funding

No funding was provided for this study.

Disclosure

All authors (MSB, SNG, IY) are employees of Partnership for Health Analytic Research (PHAR), LLC, a health services research consulting firm. The authors report no other conflicts of interest in this work.

References

1. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74(9):979–983. doi:10.2105/AJPH.74.9.979

2. Bickel KE, McNiff K, Buss MK, et al. Defining high-quality palliative care in oncology practice: an American Society of Clinical Oncology/American Academy of Hospice and Palliative Medicine guidance statement. J Oncol Pract. 2016;12(9):e828–838. doi:10.1200/JOP.2016.010686

3. Strosberg JR, Fisher GA, Benson AB, et al. Systemic treatment in unresectable metastatic well-differentiated carcinoid tumors: consensus results from a modified Delphi process. Pancreas. 2013;42(3):397–404. doi:10.1097/MPA.0b013e31826d3a17

4. Geer EB, Ayala A, Bonert V, et al. Follow-up intervals in patients with Cushing’s disease: recommendations from a panel of experienced pituitary clinicians. Pituitary. 2017;20(4):422–429. doi:10.1007/s11102-017-0801-2

5. Herman PM, Hurwitz EL, Shekelle PG, Whitley MD, Coulter ID. Clinical scenarios for which spinal mobilization and manipulation are considered by an expert panel to be inappropriate (and appropriate) for patients with chronic low back pain. Med Care. 2019;57(5):391–398. doi:10.1097/MLR.0000000000001108

6. Cuker A, Despotovic JM, Grace RF, et al. Tapering thrombopoietin receptor agonists in primary immune thrombocytopenia: expert consensus based on the RAND/UCLA modified Delphi panel method. Res Pract Thromb Haemost. 2021;5(1):69–80. doi:10.1002/rth2.12457

7. Hemingway H, Crook AM, Dawson JR, et al. Rating the appropriateness of coronary angiography, coronary angioplasty and coronary artery bypass grafting: the ACRE study. Appropriateness of coronary revascularisation study. J Public Health Med. 1999;21(4):421–429. doi:10.1093/pubmed/21.4.421

8. Shah N, Beenhouwer D, Broder MS, et al. Development of a severity classification system for sickle cell disease. Clinicoecon Outcomes Res. 2020;12:625–633. doi:10.2147/CEOR.S276121

9. Broder MS, Landow WJ, Goodwin SC, Brook RH, Sherbourne CD, Harris K. An agenda for research into uterine artery embolization: results of an expert panel conference. J Vasc Interv Radiol. 2000;11(4):509–515. doi:10.1016/S1051-0443(07)61386-4

10. Campbell SM. Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care. 2002;11(4):358–364. doi:10.1136/qhc.11.4.358

11. Kravitz RL, Laouri M, Kahan JP, Sherman T, Hilborne L, Brook RH. Validity of criteria used for detecting underuse of coronary revascularization. JAMA. 1995;274(8):632–638. doi:10.1001/jama.1995.03530080048040

12. Hemingway H, Crook AM, Feder G, et al. Underuse of coronary revascularization procedures in patients considered appropriate candidates for revascularization. N Engl J Med. 2001;344(9):645–654. doi:10.1056/NEJM200103013440906

13. Dalkey N, Helmer O. An experimental application of the DELPHI method to the use of experts. Manag Sci. 1963;9(3):458–467. doi:10.1287/mnsc.9.3.458

14. Helmer-Hirschberg O. Analysis of the future: the Delphi method. January 1, 1967. Available from: https://www.rand.org/pubs/papers/P3558.html. Accessed April 28, 2021.

15. Brook RH, Chassin MR, Fink A, Solomon DH, Kosecoff J, Park RE. A method for the detailed assessment of the appropriateness of medical technologies. Int J Technol Assess Health Care. 1986;2(1):53–63. doi:10.1017/S0266462300002774

16. Fitch K, ed. The Rand/UCLA Appropriateness Method User’s Manual. Rand; 2001.

17. Meddings J, Saint S, Fowler KE, et al. The Ann Arbor criteria for appropriate urinary catheter use in hospitalized medical patients: results obtained by using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;162(9 Suppl):S1–34. doi:10.7326/M14-1304

18. Chopra V, Flanders SA, Saint S, et al. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;163(6 Suppl):S1–40. doi:10.7326/M15-0744

19. Bennett S, Tinmouth A, McIsaac DI, et al. Ottawa criteria for appropriate transfusions in hepatectomy: using the RAND/UCLA appropriateness method. Ann Surg. 2018;267(4):766–774. doi:10.1097/SLA.0000000000002205

20. Dana R, Farid M, Gupta PK, et al. Expert consensus on the identification, diagnosis, and treatment of neurotrophic keratopathy. BMC Ophthalmol. 2021;21(1):327. doi:10.1186/s12886-021-02092-1

21. Hanley D, Gorelick PB, Elliott WJ, et al. Determining the appropriateness of selected surgical and medical management options in recurrent stroke prevention: a guideline for primary care physicians from the National Stroke Association Work Group on Recurrent Stroke Prevention. J Stroke Cerebrovasc Dis. 2004;13(5):196–207. doi:10.1016/j.jstrokecerebrovasdis.2004.05.002

22. Weizman AV, Nguyen GC, Seow CH, et al. Appropriateness of biologics in the management of Crohn’s disease using RAND/UCLA appropriateness methodology. Inflamm Bowel Dis. 2019;25(2):328–335. doi:10.1093/ibd/izy333

23. Kahneman D. Thinking, Fast and Slow. 8th ed. Farrar, Straus and Giroux; 2011.

24. van den Berge K, Mamede S. Cognitive diagnostic error in internal medicine. Eur J Intern Med. 2013;24(6):525–529. doi:10.1016/j.ejim.2013.03.006

25. Stroop JR. Is the judgment of the group better than that of the average member of the group? J Exp Psychol. 1932;15(5). doi:10.1037/h0070482

26. Surowiecki J. The Wisdom of Crowds. Anchor; 2005.

27. Steegen S, Dewitte L, Tuerlinckx F, Vanpaemel W. Measuring the crowd within again: a pre-registered replication study. Front Psychol. 2014;5. doi:10.3389/fpsyg.2014.00786.

28. Bradley SM, Chan PS, Hartigan PM, et al. Validation of the appropriate use criteria for percutaneous coronary intervention in patients with stable coronary artery disease (from the COURAGE trial). Am J Cardiol. 2015;116(2):167–173. doi:10.1016/j.amjcard.2015.03.057

29. Patel MR, Calhoon JH, Dehmer GJ, et al. ACC/AATS/AHA/ASE/ASNC/SCAI/SCCT/STS 2017 appropriate use criteria for coronary revascularization in patients with stable ischemic heart disease: a report of the American College of Cardiology Appropriate Use Criteria Task Force, American Association for Thoracic Surgery, American Heart Association, American Society of Echocardiography, American Society of Nuclear Cardiology, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, and Society of Thoracic Surgeons. J Nucl Cardiol. 2017;24(5):1759–1792. doi:10.1007/s12350-017-0917-9

30. Merrick NJ, Fink A, Park RE, et al. Derivation of clinical indications for carotid endarterectomy by an expert panel. Am J Public Health. 1987;77(2):187–190. doi:10.2105/ajph.77.2.187

31. Shekelle PG, Kahan JP, Bernstein SJ, Leape LL, Kamberg CJ, Park RE. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med. 1998;338(26):1888–1895. doi:10.1056/NEJM199806253382607

32. Dalal S, Khodyakov D, Srinivasan R, Straus SG, Adams JL. ExpertLens: a system for eliciting opinions from a large pool of non-collocated experts with diverse knowledge. January 1, 2011. Available from: https://www.rand.org/pubs/external_publications/EP20110096.html. Accessed April 28, 2021.

33. Khodyakov D, Denger B, Grant S, et al. The RAND/PPMD Patient-centeredness method: a novel online approach to engaging patients and their representatives in guideline development. Eur J Pers Centered Healthc. 2019;7(3):6.

34. Thorn JC, Davies CF, Brookes ST, et al. Content of Health Economics Analysis Plans (HEAPs) for trial-based economic evaluations: expert Delphi consensus survey. Value Health. 2021;24(4):539–547. doi:10.1016/j.jval.2020.10.002

35. Broder MS, Ailawadhi S, Beltran H, et al. Estimates of stage-specific preclinical sojourn time across 21 cancer types. Presented at the 2021 American Society of Clinical Oncology (ASCO) Annual Meeting; May 2021.

36. Duroseau Y, Beenhouwer D, Broder MS, et al. Developing an emergency department order set to treat acute pain in sickle cell disease. J Am Coll Emerg Physicians Open. 2021;2(4):e12487. doi:10.1002/emp2.12487

37. Iyer RV, Acquisto SG, Bridgewater JA, et al. Guidelines for management of urgent symptoms in patients with cholangiocarcinoma and biliary stents or catheters using the modified RAND/UCLA Delphi process. Cancers. 2020;12(9). doi:10.3390/cancers12092375

38. Isaacson SI, Achari M, Bhidayasiri R, et al. Use of on-demand treatments for OFF episodes in Parkinson’s disease: guidance from a RAND/UCLA modified Delphi consensus panel. Presented at the 2022 Academy of Managed Care Pharmacy (AMCP) Annual Meeting; March 2022.

39. Gibbs SN, Broder MS, Adams DM, et al. Expert consensus on the testing and medical management of PIK3CA-Related Overgrowth Spectrum. Presented at the 2021 CLOVES Syndrome Community International Scientific Meeting for PIK3CA Related Conditions; October 2021.
