SYSTEMATIC REVIEW article

Front. Public Health, 24 February 2022
Sec. Disaster and Emergency Medicine
This article is part of the Research Topic Improving Disaster Health Outcomes and Resilience through Rapid Research: Implications for Public Health Policy and Practice.

A Qualitative Assessment of Studies Evaluating the Classification Accuracy of Personnel Using START in Disaster Triage: A Scoping Review

  • 1Department of Emergency Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
  • 2School of Public Health, University of Alberta, Edmonton, AB, Canada
  • 3J.W. Scott Health Sciences Library, University of Alberta, Edmonton, AB, Canada

Background: Mass casualty incidents (MCIs) can occur as a consequence of a wide variety of events and often require enhanced prehospital and emergency support and a coordinated emergency response. A variety of disaster triage systems have been developed to assist health care providers in making difficult choices regarding the prioritization of victim treatment. The simple triage and rapid treatment (START) triage system is one of the most widely used triage algorithms; however, the research literature addressing real-world or simulation studies documenting the classification accuracy of personnel using START is lacking.

Aims and Objectives: To explore the existing literature on studies assessing the classification accuracy of the START triage system.

Design: A scoping review based on Arksey and O'Malley's methodological framework and a narrative synthesis based on the methods described by Popay and colleagues were performed.

Results: The literature search identified 1,820 citations, of which 32 studies met the inclusion criteria. Thirty were peer-reviewed articles and 28 were published in the last 10 years (i.e., 2010 and onward). Primary research studies originated in 13 countries and included 3,706 participants conducting triage assessments involving 2,950 victims. Included studies consisted of five randomized controlled trials, 17 non-randomized controlled studies, eight descriptive studies, and two mixed-method studies. Simulation techniques, mode of delivery, contextual features, and participants' required skills varied among studies. Overall, there was no consistent reporting of outcomes across studies and results were heterogeneous. Data were extracted from the included studies and categorized into two themes: (1) typology of simulations and (2) START system in MCI simulations. Each theme contains sub-themes regarding the development of simulations employing START as a system for improving individuals' preparedness. These include types of simulation training, settings, and technologies. Other sub-themes include outcome measures and reference standards.

Conclusion: This review demonstrates a variety of factors impacting the development and implementation of simulation to assess characteristics of the START system. To further improve simulation-based assessment of triage systems, we recommend the use of reporting guidelines specifically designed for health care simulation research. In particular, reporting of reference standards and test characteristics needs to improve in future studies.

Introduction

Mass casualty incidents (MCIs) can occur as a consequence of a wide variety of events, such as those resulting from emergencies, disasters, or pandemics, and often require enhanced prehospital and emergency support and a coordinated emergency response. When MCIs cause the demand for medical care to exceed capacity, prioritization of patients shifts from treatment of the most severe casualties to an attempt to provide the best care for the greatest number of victims. In these situations, medical professionals allocate priority to those who are most likely to benefit from the available resources and have the best chance of survival and recovery (1).

Created in the 1980s, the Simple Triage and Rapid Treatment (START) triage system was developed to be used in the event of a MCI (2), allowing responders to triage a patient in fewer than 60 s (3). It has since become widely adopted (4, 5), especially in the United States, Canada, Australia and the Israeli-occupied territories (6). Its main goal is to appraise and identify conditions that can lead to death if not treated within 1 h by prioritizing clinical markers of respiration, perfusion, and mental status to identify impaired breathing, severe hemorrhage, and head injury. Responders employing START evaluate victims, assigning them to one of four triage categories: deceased/expectant (black), immediate (red), delayed (yellow), and walking wounded/minor (green). Inaccuracies in assigning victims to the correct START triage category can result in either under-triage (not recognizing that victims could likely benefit from urgent medical intervention) or over-triage (in which valuable resources are used prematurely or unnecessarily). An effective triage tool should have a high sensitivity to minimize the occurrence of under-triage, but should also maintain sufficient specificity to prevent the occurrence of over-triage. Sensitivity and specificity can be determined by comparing the clinical priority levels assigned to victims of a MCI against a reference standard.
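To make the decision sequence concrete, the sketch below illustrates the commonly described START decision rules in Python. The function name, parameter names, and input representation are illustrative assumptions rather than a clinical implementation; field application also involves interventions such as repositioning the airway before declaring a victim expectant.

```python
def start_category(can_walk: bool, breathing: bool, breathes_after_airway_opened: bool,
                   resp_rate: int, has_radial_pulse: bool, cap_refill_sec: float,
                   obeys_commands: bool) -> str:
    """Minimal sketch of the commonly described START decision rules.

    Returns one of the four START categories; inputs and thresholds are
    simplified for demonstration and are not a clinical tool.
    """
    if can_walk:
        return "green"   # walking wounded / minor
    if not breathing:
        # After the airway is repositioned: still apneic -> expectant, else immediate.
        return "red" if breathes_after_airway_opened else "black"
    if resp_rate > 30:
        return "red"     # respiration check failed
    if cap_refill_sec > 2 or not has_radial_pulse:
        return "red"     # perfusion check failed
    if not obeys_commands:
        return "red"     # mental status check failed
    return "yellow"      # delayed


# Example: a non-ambulatory victim breathing at 24/min with a radial pulse
# who follows simple commands would be triaged as "yellow" (delayed).
print(start_category(False, True, True, 24, True, 1.5, True))
```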

The highly stochastic nature of MCIs, as well as the complexity of subsystem interactions, makes simulation one of the best strategies for preparing individuals and health systems to develop the most efficient procedures. START is often utilized in simulation studies employing a variety of MCI scenarios assessing, for example, the impact of educational interventions, the effect of different simulation technologies, or its performance in comparison to other triage systems (7–9). A common element in these studies is the evaluation of the ability of participants to apply START using various outcome measures of classification accuracy. This is done to assess whether victims are being triaged to the appropriate triage category. Thus, observing the simulation strategies employed in different studies and whether participants/trainees are triaging appropriately using one of the most widely adopted triage systems is an important step to advance studies using simulation in the field of disaster medicine.

Despite the widespread utilization of START across the literature, there has been just one published synthesis of the classification accuracy of START. This recently published systematic review found that the accuracy of START is insufficient to serve as a reliable disaster triage tool (10); however, it was noted that the included studies varied considerably in terms of the use of true vs. simulated MCIs, the implementation and conduct of the simulations, as well as the assessors applying the START triage system. While beyond the scope of the systematic review (10), a description of the characteristics of the simulations in which START accuracy is assessed is essential for several reasons (11–15). First, it can reveal nuances of the interaction between the two (simulation techniques and triage systems) and suggest adaptations (if necessary). Second, the reproducibility of findings can also be considered. Thus, the research question directing this scoping review is: What is known about simulation studies of MCIs assessing the classification accuracy of the START triage system? The purpose of this scoping review is two-fold: first, to explore the existing literature related to the current state of knowledge about simulation strategies of studies assessing the classification accuracy of the START triage system; second, to consider implications for further research.

Methods

This scoping review was conducted following the methodological framework described by Arksey and O'Malley (16) including: identifying the design and search question; searching for relevant studies; selection of studies; charting the data; and finally, collating, summarizing and reporting the results. The methods of this study were enhanced by the recommendations of Levac, Colquhoun and O'Brien (17), which include connecting the research question to the purpose, ensuring that practicality does not limit the findings of the study, and identifying practical implications of the review. We did not engage in the optional stage 6—consultation with the community—in this current study, although such consultation may form a part of future knowledge translation. This scoping review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) (18) (see Supplementary Material 1).

Search Terms and Strategies

Following an initial search to identify publications on the topic, a health sciences librarian (SC) developed a search of nine electronic databases including OVID Medline, OVID EMBASE, OVID Global Health, EBSCO CINAHL, Compendex (Engineering Village), SCOPUS, Proquest Dissertations and Theses Global, Cochrane Library, and PROSPERO. The search string was adjusted appropriately for each database and included controlled vocabulary and keywords for three concepts: (1) START, (2) triage, and (3) mass casualty. The search was conducted in March 2020 and database searches were limited to 1983 onward. No other language or publication limitations were applied. Detailed search strategies are available in Supplementary Material 2. Search results were exported to the RefWorks citation management system (ProQuest, LLC, Ann Arbor, USA) and the Covidence systematic review program (Veritas Health Innovation Ltd, Melbourne, Australia).

To identify additional studies, a search of the gray literature was conducted in May 2020, which included Google Scholar, Controlled-trials.com, a forward citation search of the included studies using Web of Science and SCOPUS, and a search of the references of included studies and relevant reviews. In addition, recent conference abstracts (2017–2020) from the Canadian Journal of Emergency Medicine, Academic Emergency Medicine, and Annals of Emergency Medicine were searched. Non-English language papers were translated by a native speaker when available, or using Google Translate otherwise.

Study Screening and Selection

Following the removal of duplicates, the titles and abstracts of all articles identified in the search were reviewed by two independent reviewers (UDW and SWK) to identify potentially eligible studies based on the inclusion criteria. Once identified, the full text of all potentially eligible studies was reviewed in duplicate by two reviewers (UDW and SWK). Decisions of inclusion or exclusion were made independently based on pre-defined inclusion criteria.

To be eligible for inclusion in the current scoping review, studies had to utilize the START triage system either in a true or simulated MCI scenario for the triage of adult victims. Studies that strictly used a modified version of START were not eligible. In addition, studies had to report outcomes related to the classification accuracy of START (i.e., accuracy, over-triage, under-triage, sensitivity, specificity) to be included. Studies were required to consist of a single cohort or multiple groups as long as at least one of the study cohorts was triaged using the START triage system. Non-experimental studies including case-reports, case-series, reviews, and editorials/opinion pieces were excluded.

Reasons for exclusion were documented. Multiple reports of the same study were collated so that each study, rather than each report, was the unit of review. Disagreements regarding study inclusion were resolved via a third-party adjudication (JMF). The results of the search, screening, and selection are reported in full in a PRISMA flow diagram (19).

Charting, Collating, and Reporting the Results

For studies included in the review, pre-specified outcomes were extracted onto standardized forms in Microsoft Excel. Data were extracted independently by at least two of three reviewers (JMF, SWK, UDW). Disagreements were settled via discussion between the reviewers and any conflicts that could not be settled were mediated via third-party adjudication (BHR, JMF). The primary outcome of interest was a summary of the methods employed to develop the real or simulated MCI study in which START was applied. As such, information regarding the nature of the simulated MCI, how the simulation was implemented, who conducted the assessments, the education/training of assessors, and the triage process was collected. Additional extracted outcomes included study characteristics, reporting of classification accuracy outcomes, and details regarding the reference standard. The definition of the type of MCI was based on standard definitions (20).

Study Analysis

The heterogeneity in study methods and reported findings required a narrative approach to synthesis. Findings were grouped into themes after careful reading of the final selected publications by two reviewers (SWK, UDW). These groupings were determined in relation to the research question, and in consideration of logical presentation of the findings to a diverse audience of stakeholder readers (researchers, policy developers, educators, etc.). Face validity of the themes was established by a physician specialized in emergency and disaster medicine (JMF) and a physician specialized in emergency medicine and research synthesis (BHR). This process resulted in themes that were derived from the intended scope of the study and included the reviewers' interpretation of the data. Thematic analysis was conducted following the Lancaster University Guidance on the Conduct of Narrative Synthesis in Systematic Reviews (21). Variable labels included in the studies were extracted as "themes" in the same way as conceptual themes are extracted from qualitative research (21). Development of themes was influenced by the theoretical and disciplinary lenses of emergency medicine.

Results

After removing duplicates, the literature search yielded 1,820 citations. Following the screening of titles and abstracts, 349 publications were identified as potentially relevant. Ultimately, full-text screening resulted in the inclusion of 32 studies involving 37 cases/simulations in the review. The PRISMA flow chart of study selection is presented in Figure 1.

Figure 1. Literature search flow diagram.

Descriptive Summary of the Studies

From the 32 included studies, 30 were peer-reviewed articles, one was a conference abstract (22), and one was a master's thesis (23). The included studies were published between the years 2005 and 2019, with 28 published in the last 10 years (i.e., 2010 and onward). Studies originated from 13 countries; the United States of America (n = 12), Italy (n = 5) and Canada (n = 4) accounted for the majority of them. Most studies were published in English, with the exception of two (24, 25).

Research designs of included studies consisted of five randomized controlled trials (26–30), 17 comparative non-randomized studies (8, 9, 22, 25, 31–43), eight quantitative descriptive studies (7, 24, 44–49), and two mixed-method studies (23, 50). Twenty-two studies did not report their source of funding (6–9, 22, 23, 26, 31, 32, 35–42, 44, 45, 47, 48, 50) and 12 studies did not mention or acknowledge any potential conflicts of interest among the study authors (9, 22, 23, 26, 29, 32, 38, 39, 41, 43, 44, 47). Six studies did not report any study limitations (8, 24, 26, 39, 44, 49).

Together, these studies involved 3,706 participants conducting triage assessments involving 2,950 victims. Participants conducting the triage assessments were nurses, physicians, pharmacists, emergency medical technicians, paramedics, first responders, firefighters, and non-medical personnel, as well as paramedic, nursing, and medical students at various levels of training. The majority of the studies (n = 25) did not specify whether the participants conducting the triage assessment had prior experience with real or simulated disaster events. Tables 1, 2 present a descriptive summary of the included studies aligned with the objective of the scoping review.

Table 1. Descriptive summary of the studies included in this review.

Table 2. Transparency of the studies.

Narrative Summary of the Studies

Thematic analysis of the charted findings led to the identification of two themes: (1) typology of simulations and (2) START system in MCI simulations. Each theme contains sub-themes regarding the development of simulations employing START as a system for improving individuals' preparedness.

Theme 1: Typology of Simulations

This theme explores the common types and characteristics of simulations employed in the studies. Sub-themes include simulation technologies, simulation settings, disaster types, assessors and their training/experiences in MCI (see Table 3).

Table 3. Typology of simulations.

Simulation Technologies

The technology employed in the delivery of simulations varied considerably across the literature (see Table 3). In a few studies, victims were re-assessed retrospectively using real mass casualty incident data (23, 34, 46) or data from a previous simulation exercise (24). In some studies, paper-based simulations were employed in which a scenario involving victims of a MCI was described and participants were asked to review the victims and apply START (8, 26, 29, 32, 37, 41, 43, 47). Other studies employed computer-based simulations, which generally involved a multimedia-facilitated activity (28–30, 35, 36, 40, 49). Computer-based simulations varied from the use of latent images to more complex software in which a series of victims of a disaster or MCI arrive at an ED or other hospital setting, requiring participants to triage the presenting victims via START. The majority of the studies required participants to partake in a live simulation exercise, in which participants are at the scene of a simulated MCI and are required to apply START to actors or manikins representing the victims (8, 9, 23, 25, 27, 28, 31, 33, 38, 42, 44, 45, 49, 50).

Within the last 6 years, studies have started utilizing virtual reality, where participants usually wear a head-mounted display allowing them a 360° view of images and videos (27, 33, 39, 50). Virtual reality was also used by live-broadcasting a MCI scenario to participants; however, instead of wearing a head-mounted display, participants guided a person at the scene via video call (7). The guide would verbalize the information needed by participants so that they could evaluate each victim and assign the appropriate triage category (7).

It should be noted that some of these studies applied a mixed technology approach when implementing their simulations (8, 23, 27–29, 33, 49, 50). For example, one study employed unmanned aerial vehicles to allow paramedic students to survey a simulated multi-vehicle collision with moulaged live actors playing the victims (28). Other studies compared different technologies for implementing simulations, such as virtual reality-based simulation vs. live simulation with actors (27, 33, 50). Two studies did not report the technology employed to perform simulation exercises (22, 48), while another study reported using moulage without specifying whether manikins or live actors were used (8).

Simulation Settings

Simulation exercises conducted via paper, computer, and virtual reality tended to occur in hospital or university settings (27, 29, 30, 32, 36, 37, 40, 49, 50). Live simulation exercises occurred in a variety of settings including university campuses (9, 27, 45, 49, 50), airports (25, 28, 49), an emergency department (31), a soccer stadium (49), a fire department (38), and a police academy (50). Twelve studies did not specify the location of the simulation exercises (8, 22, 26, 33, 35, 39, 41–44, 47, 48).

Disaster Types

MCI simulations across the included studies were most frequently based on transportation disasters on land (i.e., motor vehicle crashes, n = 10) (23, 24, 27, 28, 35, 40, 45–47, 50), followed by bomb threats/terrorist attacks (n = 5) (7, 9, 23, 34, 49). The remaining studies used a variety of MCI events including chemical explosion (9, 23, 44, 48), bomb threat/terrorist attack with chemical explosion (9), toxic release (31, 32), transportation disaster in the air (23, 25), transportation disaster on land with chemical spill (31), and structural collapse (38, 42). Eleven studies did not report on the type of MCI they were simulating (8, 22, 26, 29, 30, 33, 36, 37, 39, 41, 43).

The sources of the simulation scenarios varied, with some studies using real events with the actual clinical characteristics of the victims (23, 24, 28, 34, 46). In other studies, the MCI events and victims were created by the study researchers (9, 42, 45, 47, 50) or healthcare professionals (32, 33, 44), or the MCI event was retrieved from third-party databases (26, 27, 30, 37, 49), which include various MCI scenarios from which researchers can choose. The source of the MCI event, as well as the characteristics of the victims, was not reported in 14 of the included studies, and so it was not clear how these MCI scenarios were created and validated (7, 8, 22, 25, 29, 31, 35, 36, 38–41, 43, 48).

Assessors

Studies employed a variety of medical professionals to assess the classification accuracy of START across the literature (see Table 1). First responders/paramedics were the participants most commonly recruited to apply START (8, 9, 22, 23, 25, 31, 38, 44, 46, 49), with two studies specifically recruiting firefighters (24, 35). Students of various professions, including college-level (36, 39), medical (27, 32, 40, 41), nursing (45), and paramedic students (28, 50), were the second most common participants recruited to apply START. Other professionals including nurses and physicians were also recruited; however, these studies tended to assess the ability of a mix of health professionals to accurately apply START (7, 29–31, 37, 42, 43, 47). Few studies compared differences in the accuracy of START among different healthcare professionals (7, 25).

Experience and Training in Disaster Medicine and START

Seven studies specifically reported that participants had previous experience with the START system (9, 30, 35, 37, 38, 44, 46) and 11 studies specified whether or not participants had any prior experience with MCIs (27, 30, 32, 36, 37, 39, 40, 42, 43, 47, 50). Seven of the 21 studies that did not report participants' prior MCI experience also did not involve any MCI education intervention or report whether participants were trained in MCI triage for the specific study (22–24, 34, 35, 48, 49).

Of the 22 studies that offered training in MCI prior to the simulation, 14 studies included training on START (7, 8, 25–28, 32, 36, 38, 39, 41, 43, 45, 47). Training included lectures (27, 28, 32), courses (7, 41), provision of reading materials (39), a symposium (45), and a video presentation (8). Six studies did not specify how training was provided (25, 26, 36, 38, 43, 47). Among the 16 studies that reported offering lectures/courses, the majority implemented a single course/session lasting between 5 and 1,200 min (median: 60 min; IQR = 110 min).

Theme 2: START System in MCI Simulations

This theme explores how the classification accuracy of the START triage system was assessed across the different studies (see Table 4).

Table 4. Assessment of accuracy outcomes.

Diagnostic Properties

A summary of the various diagnostic outcomes assessed across the studies is provided in Table 4. As per the inclusion criteria, all of the studies reported at least one outcome related to the classification accuracy of START. All but two studies (34, 46) assessed the accuracy of START by comparing participants' performance (correct matching of triage levels) to a reference standard.

With the exception of two studies (9, 48), all studies measuring the classification accuracy of participants' performance reported the overall accuracy for all victims. In addition, some studies also reported the accuracy of participants' performance within the triage subgroups of START (i.e., black, red, yellow, and green) (8, 25, 26, 30, 31, 35, 42, 44, 47, 48). Still within the accuracy of participants' performance, some studies teased out the proportion of patients over- and under-triaged within the START triage subgroups (8, 9, 23–27, 29, 30, 32, 35, 37, 38, 41, 42, 44, 46, 47, 49). Only two studies reported on outcomes related to START diagnostic properties, such as specificity, sensitivity, positive and negative predictive values, or likelihood ratios (34, 46).
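As a minimal sketch of how these outcome measures relate to one another, the Python snippet below computes overall accuracy, per-category accuracy, and over-/under-triage rates from paired lists of assigned and reference categories. The function name and the urgency ordering used to define over- and under-triage are assumptions for illustration only, since definitions varied across the included studies.

```python
from collections import Counter

# Assumed urgency ordering for illustration only; definitions of over- and
# under-triage varied across the included studies.
URGENCY = {"black": 0, "green": 1, "yellow": 2, "red": 3}

def classification_accuracy(assigned, reference):
    """Compare participant-assigned START categories against a reference standard."""
    pairs = list(zip(assigned, reference))
    n = len(pairs)
    overall = sum(a == r for a, r in pairs) / n
    over = sum(URGENCY[a] > URGENCY[r] for a, r in pairs) / n   # over-triage rate
    under = sum(URGENCY[a] < URGENCY[r] for a, r in pairs) / n  # under-triage rate
    by_category = {
        cat: sum(a == r for a, r in pairs if r == cat) / count
        for cat, count in Counter(reference).items()
    }
    return {"overall_accuracy": overall, "over_triage": over,
            "under_triage": under, "per_category_accuracy": by_category}

# Example with hypothetical data: one yellow victim over-triaged to red.
print(classification_accuracy(["red", "green", "red", "black"],
                              ["red", "green", "yellow", "black"]))
```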

Lastly, the vast majority of included studies (n = 22) did not specify what they used as the basis for measuring classification accuracy (i.e., a reference standard). When specified, the reference standard was most commonly described as expert opinion (9, 22, 23, 30–33, 37, 42), followed by the Baxt and Upenieks criteria (34) and the modified Baxt criteria (46). Of the nine studies using expert opinion as the reference standard, five did not specify the background of the experts or how consensus was determined (22, 23, 30, 31, 33).

Discussion

Given the widespread use of START for the triage of victims in real-world MCIs, in training simulations, and for assessing educational interventions, this scoping review aimed to explore and summarize the existing literature related to the current state of knowledge regarding studies assessing the classification accuracy of START. Gaining a better understanding of the literature helped us to identify gaps in reporting that may hold implications for future studies. Through an extensive and systematic search of the literature, 32 studies assessing the classification accuracy of START were identified. These studies were conducted around the world, with the majority published in the last 10 years, indicating that knowledge about simulation strategies using START for triage is a global concern and a growing field of research.

Over the years, the methods used for simulations have changed as technological advancements occurred. For example, computer simulations replaced the early text-based paper exercises, and live simulations with actors have more recently been replaced by virtual reality technology. Studies included in our review employed different types of simulation technologies and, despite technological advancements, some of the most recently published studies still employed technologies ranging from basic text-based exercises to more advanced ones. This may be attributable to the high cost of using more advanced technologies during simulations, and the paucity of funding opportunities for disaster research within the research ecosystem. Although simulation can be effective at preparing individuals and systems to deal effectively with MCIs, it comes at a price. Different types of simulation technologies have different associated costs, including training, equipment and systems, technicians, laboratory setup, maintenance, and so on. In fact, the elevated cost of many simulation technologies has been a key criticism of medical training using simulation (51, 52). Therefore, it is reasonable that researchers developing MCI studies using simulation consider their population needs, available resources, and return on investment to determine which type of technology they will study and adopt.

Other common themes arose when reviewing the articles, one of which was the reporting and implementation of the simulation. For the most part, studies provided satisfactory details regarding how the simulation exercises were conducted; however, the establishment of more systematic reporting is warranted. As discussed below, many studies lacked information that should be included in articles involving MCI simulation for them to be transparent, reproducible, and usable (53–55).

This review found that some important details regarding the methodologies of the studies and the classification accuracy assessment were inconsistently reported across the literature. Approximately a third of the studies assessing the classification accuracy of START failed to report the type of MCI from which the victims were being triaged. Almost half of the studies did not specify the source of the disaster scenarios, i.e., whether the MCI was based on a real event or created by research staff, healthcare professionals, or disaster medicine experts. In many studies using live simulation, it was unclear whether the mock victims had previous training on how to simulate clinical conditions or how these mock victims were prepared (e.g., use of make-up). At this time, it is unclear whether the complexity of the disaster or MCI affects the classification accuracy of disaster triage, but this might be worth exploring in future studies.

Another common theme explored in this study was the reporting regarding the assessors of START and their experience. It was not surprising that the majority of studies assessed the classification accuracy of paramedic/EMS providers applying START; however, it was perhaps a little surprising that students (including paramedic, nursing, and medical students) were the second most common assessors of START across the literature. It is not clear why this is the case. It could be that studies assessing novel technologies for simulation or triage methods see students as a participant population that is more available, willing, and able to embrace novel technologies. In addition, students are more likely to lack any prior experience in disaster triage or START, allowing researchers to assess the impact of training or educational interventions on START classification accuracy.

A fundamental methodological bias associated with this literature is a lack of transparency, which impacts the trustworthiness of the science. More than a third of the studies did not state whether there was any potential conflict of interest. Over two-thirds did not state whether there was any funding source. In addition, several studies did not acknowledge any limitations, and those that did often overlooked important limitations or reduced them to simplistic and minimally relevant themes (e.g., single institution study or small sample size) (56). With respect to the assessment of the classification accuracy of START, while the majority of the studies reported overall accuracy, a third of them did not report under- and over-triage. It is vital for studies assessing triage accuracy to provide a full assessment of the classification accuracy of START. Beneficial triage decisions direct victims to the most appropriate hospitals, resulting in lower mortality and better resource allocation (57).

Yet, one of the most concerning issues we found in this review was that two-thirds of the studies completely lacked details regarding the reference standard to which START was being compared. When a reference standard was reported, the most common was expert opinion, although details regarding the credentials of the experts were often not provided. The traditional classification accuracy paradigm is based on studies that compare the results of the system under evaluation (the index system) with the results of a reference standard, and this is regarded as the soundest method to determine the classification accuracy of the system or to measure participants' performance. To appraise the classification accuracy of the index test, its results are compared with the results of the reference standard; subsequently, indicators of accuracy can be determined. The reference standard is therefore an important determinant of classification accuracy. From a theoretical perspective, the use of an appropriate reference standard is critical, and the lack of information regarding it undermines the confidence readers can have in research findings.
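For reference, when a single triage category is treated as the condition of interest, the standard indicators derived from comparing the index system against the reference standard are given below. This is a restatement of the usual definitions rather than values taken from the included studies, with TP, FP, TN, and FN denoting true/false positives and negatives relative to the reference standard.

```latex
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}
```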

Strengths and Limitations

We aimed to use precise and transparent review methods when conducting (16, 17) and reporting (18) this scoping review. A comprehensive approach using several appropriate databases without language restrictions improved the rigor of the review. Consistent with the purpose of a scoping review, we expanded the literature search from January 1983 until March 2020 so that more literature sources could be identified and findings could truly reflect the state of knowledge. The search words were selected by the researchers and refined by an expert health librarian. In addition, the reference lists of the included articles were searched and a forward citation search was performed. To reduce the risk of selection bias, this review utilized two independent reviewers to assess and identify potentially eligible studies. Lastly, the use of RefWorks and Covidence software supported meticulous documentation of screening decisions.

There were, however, some limitations to this scoping review. First, since this review did not pursue quality appraisal, we were not able to comment on the quality of the studies assessing the classification accuracy of START, which could have resulted in the inclusion of studies with compromised research quality and incomplete synthesis. Therefore, it is recommended that the findings be used with caution and applied in research and practice after careful scrutiny. Second, 87.5% (n = 28) of the reviewed studies originated from developed countries, which limits the extrapolation of findings to low- and middle-income countries. Third, the results of this scoping review may have been impacted by selective reporting within the included studies. While contacting the study authors could have helped clarify aspects of the simulation, triage assessment, or accuracy outcomes that were unclear or not reported, the objective of this review was to provide an assessment of studies assessing START accuracy based on what is reported in the available literature. Lastly, as with any review, there is a risk of publication bias, particularly among studies assessing the impact of novel interventions on triage classification accuracy.

Conclusion

Studies included in this scoping review provided satisfactory details on how their simulations were conducted. However, we found there is room for improvement given insufficient information regarding the location where simulation exercises were performed, the type of disaster being simulated, the source of the MCI event, the characteristics of the victims, whether or not participants had any prior experience with MCI triage, and potential sources of bias. To further improve simulation-based assessment of triage systems, it is important that stakeholders are mindful of the complexity of subsystem interactions. It is recommended that if simulations are used for assessment purposes, they be based on a systematic appreciation of the whole system. Future research could be more explicit about the knowledge upon which simulation training is based to allow for description of core theoretical and operational definitions, identification of the function of each component, promotion of similar construct measurement, reporting of findings in a common language, as well as replication and comparison of findings across studies. We recommend the use of reporting guidelines such as the "reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements" (11). In particular, incomplete reporting of reference standards and accuracy needs to be addressed in future studies.

We recommend the development of a systematic review with meta-synthesis to assess overall accuracy, rate of under-triage, and rate of over-triage using the START method, as well as to obtain specific rates of accuracy for each of the four START categories: red, yellow, green, and black. A systematic review with meta-synthesis will allow the combination of results ensuring reliability across a number of studies, while assessing and minimizing bias. As a result, reliable and scientifically derived findings can be obtained for research and clinical practice.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author Contributions

UDW: research conceptualization, design of the research methodology, data curation, evidence screening, data extraction, data analysis, project administration, writing and editing the research protocol, and writing and editing the final manuscript. SWK: research conceptualization, design of the research methodology, data curation, evidence screening, data extraction, project administration, writing and editing the research protocol, and writing and editing the final manuscript. BHR: research conceptualization, design of the research methodology, funding acquisition, research supervision, writing and editing the research protocol, and writing and editing the final manuscript. SC: design of the research methodology and writing and editing the final manuscript. JMF: research conceptualization, design of the research methodology, funding acquisition, data analysis, writing and editing the research protocol, and writing and editing the final manuscript. All authors contributed to the article and approved the submitted version.

Funding

This study was supported by a Scoping and Systematic Review Grant from the Emergency Strategic Clinical Network (ESCN) at Alberta Health Services and the Emergency Medicine Research Group (EMeRG) in the Department of Emergency Medicine at the University of Alberta. BHR's research is supported by a Scientific Director's Grant (SOP 168483) from the Canadian Institutes of Health Research (CIHR; Ottawa, ON). The funders had no role in the design, implementation, analysis, and write-up of the study.

Author Disclaimer

The content hereof is the sole responsibility of the authors and does not necessarily represent the official views of the funding agencies.

Conflict of Interest

JMF is CEO and Founder of Stat59.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2022.676704/full#supplementary-material

References

1. Tonkin L. Triage: multiple casualty incidents. Aust J Emerg Care. (1997) 4:18–21.

2. Douglas N, Leverett J, Paul J, Gibson M, Pritchard J, Brouwer K, et al. Performance of first aid trained staff using a modified START triage tool at achieving appropriate triage compared to a physiology-based triage strategy at Australian mass gatherings. Prehospital Disaster Med. (2020) 35:184–8. doi: 10.1017/S1049023X20000102

3. Arnold T, Cleary V, Groth S, Hook R, Jones D, Super G. START. Newport Beach, CA: Newport Beach Fire and Marine Department (1994).

4. Garner A, Harrison K, Lee A, Schultz CH. Comparative analysis of multiple-casualty incident triage algorithms. Ann Emerg Med. (2001) 38:541–8. doi: 10.1067/mem.2001.119053

5. Wallis L. START is not the best triage strategy. Br J Sports Med. (2002) 36:473. doi: 10.1136/bjsm.36.6.473

6. Bazyar J, Farrokhi M, Khankeh H. Triage systems in mass casualty incidents and disasters: a review study with a worldwide approach. Open Access Maced J Med Sci. (2019) 7:482–94. doi: 10.3889/oamjms.2019.119

7. McCoy CE, Alrabah R, Weichmann W, Langdorf MI, Ricks C, Chakravarthy B, et al. Feasibility of telesimulation and Google Glass for mass casualty triage education and training. West J Emerg Med. (2019) 20:512–9. doi: 10.5811/westjem.2019.3.40805

8. Risavi BL, Lee W, Terrell MA, Holsten DL. Prehospital mass-casualty triage training-written versus moulage scenarios: how much do EMS providers retain? Prehospital Disaster Med. (2013) 28:251–6. doi: 10.1017/S1049023X13000241

9. Silvestri S, Field A, Mangalat N, Weatherford T, Hunter C, McGowan Z, et al. Comparison of START and SALT triage methodologies to reference standard definitions and to a field mass casualty simulation. Am J Disaster Med. (2017) 12:27–33. doi: 10.5055/ajdm.2017.0255

10. Franc JM, Kirkland SW, Wisnesky UD, Campbell S, Rowe BH. METASTART: a systematic review and meta-analysis of the diagnostic accuracy of the Simple Triage and Rapid Treatment (START) algorithm for disaster triage. Prehospital Disaster Med. (2021) 2021:1–11. doi: 10.1017/S1049023X2100131X

11. Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, et al. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Adv Simul. (2016) 1:25. doi: 10.1186/s41077-016-0025-y

12. Germini F, Marcucci M, Heath T, Mbuagbaw L, Thabane L, Worster A, et al. Quality of reporting in abstracts of RCTs published in emergency medicine journals: a systematic survey of the literature suggests we can do better. Emerg Med J. (2019). doi: 10.1136/emermed-2019-208629

13. Monks T, Currie CSM, Onggo BS, Robinson S, Kunc M, Taylor SJE. Strengthening the reporting of empirical simulation studies: introducing the STRESS guidelines. J Simul. (2019) 13:55–67. doi: 10.1080/17477778.2018.1442155

14. Nawijn F, Ham WHW, Houwert RM, Groenwold RHH, Hietbrink F, Smeeing DPJ. Quality of reporting of systematic reviews and meta-analyses in emergency medicine based on the PRISMA statement. BMC Emerg Med. (2019) 19:1–18. doi: 10.1186/s12873-019-0233-6

15. Zhang X, Lhachimi SK, Rogowski WH. Reporting quality of discrete event simulations in healthcare—results from a generic reporting checklist. Value Health. (2020) 23:506–14. doi: 10.1016/j.jval.2020.01.005

16. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. (2005) 8:19–32. doi: 10.1080/1364557032000119616

17. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. (2010) 5:69–77. doi: 10.1186/1748-5908-5-69

18. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. (2018) 169:467–73. doi: 10.7326/M18-0850

19. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. (2009) 339:b2535. doi: 10.1136/bmj.b2535

20. Shaluf IM. Disaster types. Disaster Prev Manag. (2007) 16:704–17. doi: 10.1108/09653560710837019

21. Popay J, Roberts H, Snowden A, Petticrew M, Arai L, Britten N, et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews: Final Report. Swindon: ESRC Research Methods Programme (2006).

22. Buono CJ, Lyon J, Huang R, Liu F, Brown S, Killeen JP, et al. Comparison of mass casualty incident triage acuity status accuracy by traditional paper method, electronic tag, and provider PDA algorithm. Ann Emerg Med. (2007) 50:S12–S3. doi: 10.1016/j.annemergmed.2007.06.068

23. Crews CM. Disaster Response: Efficacy of Simple Triage and Rapid Treatment in Mass Casualty Incidents. Master's thesis. Long Beach, CA: California State University (2008).

24. Simões RL, Duarte Neto C, Maciel GSB, Furtado TP, Paulo DNS. Atendimento pré-hospitalar à múltiplas vítimas com trauma simulado [Prehospital care to multiple victims with simulated trauma]. Rev Col Bras Cir. (2012) 39:230–7. doi: 10.1590/S0100-69912012000300013

25. Ellebrecht N, Latasch L. Vorsichtung durch Rettungsassistenten auf der Großübung SOGRO MANV 500: Eine vergleichende Analyse der Fehleinstufungen. Paramedic triage during a mass casualty incident exercise: a comparative analysis of inappropriate triage level assignments. Notfall Rettungsmed. (2012) 15:58. doi: 10.1007/s10049-011-1477-1

26. Badiali S, Giugni A, Marcis L. Testing the START triage protocol: can it improve the ability of nonmedical personnel to better triage patients during disasters and mass casualties incidents? Disaster Med Public Health Prep. (2017) 11:305–9. doi: 10.1017/dmp.2016.151

27. Ingrassia PL, Ragazzoni L, Carenzo L, Colombo D, Gallardo AR, Corte FD. Virtual reality and live simulation: a comparison between two simulation tools for assessing mass casualty triage skills. Eur J Emerg Med. (2015) 22:121–7. doi: 10.1097/MEJ.0000000000000132

28. Jain T, Sibley A, Stryhn H, Hubloue I. Comparison of unmanned aerial vehicle technology-assisted triage versus standard practice in triaging casualties by paramedic students in a mass-casualty incident scenario. Prehospital Disaster Med. (2018) 33:375–80. doi: 10.1017/S1049023X18000559

29. Khan K. Tabletop exercise on mass casualty incident triage, does it work? Health Sci J. (2018) 12:1–6. doi: 10.21767/1791-809X.1000566

30. Lee JS, Franc JM. Impact of a two-step emergency department triage model with START, then CTAS, on patient flow during a simulated mass-casualty incident. Prehospital Disaster Med. (2015) 30:390–6. doi: 10.1017/S1049023X15004835

31. Bolduc C, Maghraby N, Fok P, Luong TM, Homier V. Comparison of electronic versus manual mass-casualty incident triage. Prehospital Disaster Med. (2018) 33:273–8. doi: 10.1017/S1049023X1800033X

32. Sapp RF, Brice JH, Myers JB, Hinchey P. Triage performance of first-year medical students using a multiple-casualty scenario, paper exercise. Prehospital Disaster Med. (2010) 25:239–45. doi: 10.1017/S1049023X00008104

33. Ferrandini-Price M, Escribano Tortosa D, Nieto Fernandez-Pacheco A, Perez Alonso N, Cerón Madrigal JJ, Melendreras-Ruiz R, et al. Comparative study of a simulated incident with multiple victims and immersive virtual reality. Nurse Educ Today. (2018) 71:48–53. doi: 10.1016/j.nedt.2018.09.006

34. Challen K, Walter D. Major incident triage: comparative validation using data from 7th July bombings. Injury. (2013) 44:629–33. doi: 10.1016/j.injury.2012.06.026

35. Arshad FH, Williams A, Asaeda G, Isaacs D, Kaufman B, Ben-Eli D, et al. A modified simple triage and rapid treatment algorithm from the New York City (USA) fire department. Prehospital Disaster Med. (2015) 30:199–204. doi: 10.1017/S1049023X14001447

36. Loth S, Cote AC, Shaafi Kabiri N, Bhangu JS, Zumwalt A, Moss M, et al. Improving triage accuracy in first responders: measurement of short structured protocols to improve identification of salient triage features. World Med Health Policy. (2019) 11:163–76. doi: 10.1002/wmh3.306

37. Curran-Sills G, Franc JM. A pilot study examining the speed and accuracy of triage for simulated disaster patients in an emergency department setting: comparison of a computerized version of Canadian Triage Acuity Scale (CTAS) and Simple Triage and Rapid Treatment (START) methods. CJEM. (2017) 19:364–71. doi: 10.1017/cem.2016.386

38. Navin DM, Sacco WJ, Waddell R. Operational comparison of the simple triage and rapid treatment method and the sacco triage method in mass casualty exercises. J Trauma. (2010) 69:215–25. doi: 10.1097/TA.0b013e3181d74ea4

39. Izumida K, Kato R, Shigeno H. A triage training system considering cooperation and proficiency of multiple trainees. In: Yoshino T, Yuizono T, Zurita G, Vassileva J, editors. Lecture Notes in Computer Science. Cham: Springer (2017). p. 68–83.

40. Ingrassia PL, Ragazzoni L, Tengattini M, Carenzo L, Corte FD. Nationwide program of education for undergraduates in the field of disaster medicine: development of a core curriculum centered on blended learning and simulation tools. Prehospital Disaster Med. (2014) 29:508–15. doi: 10.1017/S1049023X14000831

41. Riza'i A, Ade WRA, Albar I, Sulitio S, Muharris R. Teaching start triage: a comparison of lecture and simulation methods. Adv Sci Lett. (2018) 24:6890–2. doi: 10.1166/asl.2018.12874

42. Ingrassia PL, Colombo D, Barra FL, Carenzo L, Della Corte F, Franc J. Impact of training in medical disaster management: a pilot study using a new tool for live simulation. Emergencias. (2013) 25:459–66.

43. Wu Y-L, Shu C-C, Chung C-C. A simple method for pre-hospital dispatcher-aided consciousness assessment in trauma patients. J Emerg Med Taiwan. (2005) 7:69–77. doi: 10.30018/JECCM.199906.0002

44. Schenker JD, Goldstein S, Braun J, Werner A, Buccellato F, Asaeda G, et al. Triage accuracy at a multiple casualty incident disaster drill: the Emergency Medical Service, Fire Department of New York City experience. J Burn Care Res. (2006) 27:570–5. doi: 10.1097/01.BCR.0000235450.12988.27

45. Lima DS, De-Vasc Oncelos IF, Queiroz EF, Cunha TA, Dos-Santos VS, Freitas JG, et al. Multiple victims incident simulation: training professionals and university teaching. Rev Col Bras Cir. (2019) 46:e20192163. doi: 10.1590/0100-6991e-20192163

46. Kahn CA, Schultz CH, Miller KT, Anderson CL. Does START triage work? An outcomes assessment after a disaster. Ann Emerg Med. (2009) 54:424–30.e1. doi: 10.1016/j.annemergmed.2008.12.035

47. Ersoy N, Akpinar A. Triage decisions of emergency physicians in Kocaeli and the principle of justice. Ulusal Travma ve Acil Cerrahi Dergisi. (2010) 16:203–9.

48. Djalali A, Carenzo L, Ragazzoni L, Corte FD, Ingrassia PL, Azzaretto M, et al. Does hospital disaster preparedness predict response performance during a full-scale exercise? A pilot study. Prehospital Disaster Med. (2014) 29:441–7. doi: 10.1017/S1049023X1400082X

49. McElroy JA, Steinberg S, Keller J, Falcone RE. Operation continued care: a large mass-casualty, full-scale exercise as a test of regional preparedness. Surgery. (2019) 166:587–92. doi: 10.1016/j.surg.2019.05.045

50. Mills B, Dykstra P, Hansen S, Miles A, Rankin T, Hopper L, et al. Virtual reality triage training can provide comparable simulation efficacy for paramedicine students compared to live simulation-based scenarios. Prehosp Emerg Care. (2019) 24:525–36. doi: 10.1080/10903127.2019.1676345

51. Hippe DS, Umoren RA, McGee A, Bucher SL, Bresnahan BW. A targeted systematic review of cost analyses for implementation of simulation-based education in healthcare. SAGE Open Med. (2020) 8:2050312120913451. doi: 10.1177/2050312120913451

52. Mücke U, Grigull L, Sänger B, Berndt LP, Wittenbecher H, Schwarzbard C, et al. Introducing low-cost simulation equipment for simulation-based team training. Clin Simul Nurs. (2020) 38:18–22. doi: 10.1016/j.ecns.2019.09.001

53. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, et al. A manifesto for reproducible science. Nat Hum Behav. (2017) 1:0021. doi: 10.1038/s41562-016-0021

54. Moher D. Reporting guidelines: doing better for readers. BMC Med. (2018) 16:233. doi: 10.1186/s12916-018-1226-0

55. Hoffmann TC, Oxman AD, Ioannidis JP, Moher D, Lasserson TJ, Tovey DI, et al. Enhancing the usability of systematic reviews by improving the consideration and description of interventions. BMJ. (2017) 358:j2998. doi: 10.1136/bmj.j2998

56. Ross PT, Bibler Zaidi NL. Limited by our limitations. Perspect Med Educ. (2019) 8:261–4. doi: 10.1007/s40037-019-00530-x

57. Najafi Z, Abbaszadeh A, Zakeri H, Mirhaghi A. Determination of mis-triage in trauma patients: a systematic review. Eur J Trauma Emerg Surg. (2019) 45:821–39. doi: 10.1007/s00068-019-01097-2

Keywords: triage, START, mass casualty incidents, systematic review, emergency medicine, disaster medicine

Citation: Wisnesky UD, Kirkland SW, Rowe BH, Campbell S and Franc JM (2022) A Qualitative Assessment of Studies Evaluating the Classification Accuracy of Personnel Using START in Disaster Triage: A Scoping Review. Front. Public Health 10:676704. doi: 10.3389/fpubh.2022.676704

Received: 05 March 2021; Accepted: 31 January 2022;
Published: 24 February 2022.

Edited by:

Arthur Chan, University of Toronto, Canada

Reviewed by:

Tudor Adrian Codreanu, Western Australian State Health Incident Coordination Centre (SHICC), Australia
John Kellett, University of Southern Denmark, Denmark

Copyright © 2022 Wisnesky, Kirkland, Rowe, Campbell and Franc. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jeffrey Michael Franc, jeffrey.franc@gmail.com
