Introduction

The risk for, and incidence of, major incidents, defined within the health-care system as situations where the available resources are insufficient for the immediate need of medical care, have increased significantly during recent decades, in parallel with developments in the world [1–3]. One of the most important components of preparedness for such incidents, perhaps even the most important, is the education and training of all medical staff potentially involved in the response [4–6]. It is not enough to carry out normal work more efficiently: a number of well-defined additional skills are needed for accurate management and performance in these difficult situations [7].

One of the most commonly reported reasons for failure in this response is insufficient coordination and communication between the involved units [8–11]. To train this coordination and communication, the whole chain of response must be trained simultaneously: scene, transport, the different components of the hospital response, and the command and control functions on all levels. Such training should also include collaborating agencies such as the fire and rescue services and the police. Doing this as full-scale "live exercises" with casualty actors is possible, but very expensive; it is difficult to standardize and thereby obtain reproducible results, and it is recognized as not giving optimal feedback to all staff involved. Live exercises may be used on single occasions to test practical components of the response, but not for the regular training of all staff who need such training.

This creates the need for simulation models for such training [7]. For the accurate training of decision-making, such models have to supply all the information, and the same information, that the trainee would have as a base for the decision in the real situation. Evaluation of the accuracy of the decisions also requires that all consequences of the decision are clearly illustrated: how did the decision influence the outcome for the patients with regard to mortality and complications? How much time and how many resources did it consume, and was this utilization of resources optimal?

An increasing number of simulation models have been developed for this purpose [12, 13], but very few of them meet the demands given above. The aim of this study was to introduce, develop, and evaluate a new simulation tool, originally developed for scientific evaluation and development of methodology, in a postgraduate course for medical staff in major incident response.

Methods

The simulation system

The simulation tool used in this study was the MACSIM (MAss Casualty SIMulation) system, originally developed for the evaluation and comparison of different triage methods in major incident response [14]. The main component of this system was the casualty card (Fig. 1a). Along the edges of the card, the patient's condition was illustrated by the physiological parameters (Airway, Breathing, Circulation, and Disability) used by Advanced Trauma Life Support, ATLS®. These parameters could easily be changed by the instructor according to the time elapsed since injury and the treatments performed or not performed. In the center of the card, the "Exposure" data were given with a simple system of symbols (Fig. 1b), making it possible to illustrate multiple injuries. The patient's original position and response after the incident were also indicated on the card.

Fig. 1

a The MACSIM (MAss Casualty SIMulation) casualty card. Along the four edges of the card are indicated the physiologic parameters illustrating the patient’s present condition (Airway, Breathing, Circulation, and Disability). These parameters can easily be changed by the instructor according to the time passed since injury and resuscitation/treatment performed. b In the center of the card (“Exposure”), the different injuries are illustrated using a simple system of symbols (http://www.macsim.se, with permission)

The cards were laminated and could be used in different sizes: a larger size could be attached to casualty actors in field exercises, while a smaller one was used for simulation purposes. For this study, a card of size 9 × 7.5 cm was selected, making it easily readable but also possible to use in the simulation of scenarios with a high casualty load.

For each patient, the instructor had information on:

  • The complete and definitive diagnosis for all injuries

  • Times within which certain treatments had to be performed to avoid the risk of mortality and complications (these data were extrapolated from corresponding clinical material)

  • Outcome in the case of optimal treatment, i.e., whether the patient would have returned to full health if all possible treatment had been performed, and performed correctly; these data were also extrapolated from corresponding clinical material

  • Potential need for ventilator treatment

  • Trauma scores (Injury Severity Score, ISS; Revised Trauma Score, RTS) to correlate outcome to injury severity

Movable priority tags (Fig. 2), in the colors most commonly used for triage today, could be attached to the cards.

Fig. 2

With movable markers, the trainee can indicate the given priority and also performed treatments. Every treatment is associated with a time and the “patient” is not allowed to be moved further until the time for this treatment has passed (http://www.macsim.se, with permission)

Movable treatment tags indicating performed treatments could also be attached to the card (Fig. 2). For prehospital management, 18 different treatment alternatives were available and, for in-hospital management, an additional 23, in proportions and numbers adjusted to the size of the hospital. Every treatment was associated with a time, based on time studies of medical staff carrying out the same procedures. The trainee's access to tags could be adjusted to match the access to resources in reality.
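The way the card data and the time-gated treatment tags interact can be summarized in a small data-model sketch. The Python snippet below is a minimal illustration under assumed field names, time values, and example data; it is not part of the MACSIM software, and the actual card content is richer than shown.

```python
from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    duration_min: int         # time the treatment occupies the "patient" (illustrative)

@dataclass
class CasualtyCard:
    card_id: int
    airway: str               # A: e.g. "open", "obstructed"
    breathing: int            # B: respiratory rate
    circulation: int          # C: systolic blood pressure
    disability: str           # D: e.g. "alert", "responds to voice"
    injuries: list[str]       # "Exposure": the injury symbols in the center of the card
    iss: int                  # Injury Severity Score
    rts: float                # Revised Trauma Score
    deadline_min: int         # assumed time by which key treatment must be done
    busy_until: int = 0       # simulation time at which the patient may be moved on

    def apply_treatment(self, treatment: Treatment, now_min: int) -> None:
        """Attach a treatment tag; the patient cannot be moved until it is finished."""
        self.busy_until = max(self.busy_until, now_min) + treatment.duration_min

    def may_be_moved(self, now_min: int) -> bool:
        return now_min >= self.busy_until

# Example: a chest-injury card receives a needle decompression at t = 12 min.
card = CasualtyCard(17, "open", 34, 85, "responds to voice",
                    ["chest", "open fracture lower leg"], 25, 6.2, deadline_min=30)
card.apply_treatment(Treatment("needle decompression", 3), now_min=12)
print(card.may_be_moved(now_min=14))   # False: the treatment is still in progress
```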

The system also included symbols for staff, vehicles, and other resources to illustrate the consumption of resources of different kinds generated by decisions on different levels (see Figs. 6, 7, and 8).

The course

In this study, the simulation system described above was introduced, developed, and tested in the postgraduate courses in Medical Response to Major Incidents (MRMI), initiated by the Section of Disaster and Military Surgery within the European Society for Trauma and Emergency Surgery (ESTES). This course was initially planned as a 3-day course with a combination of lectures and practical training. The overall objectives given for this course were to:

  • Train all components of the chain of response simultaneously, including communication and coordination between the involved units,

  • Train decision-making on all levels, from command level to triage and treatment of individual casualties,

  • Provide interactive training with every participant active in his/her role (“learning by doing”).

This required access to a simulation system meeting these objectives. The MACSIM system was, in comparison with other similar systems, selected as the one best suited to this purpose.

The patient bank

The scenarios for the first MRMI courses were chosen to be terrorist attacks with injuries caused by physical trauma. For this purpose, a patient bank of 360 patient cards of the type described above was created. The type and distribution of injuries were based on the terror bombings in Madrid in 2004 [9]. A number of burn and head injuries of varying severity were added from other scenarios in order to achieve complete and comprehensive training in triage. Three hundred of the "patients" had physical injuries of varying severity, 30 were indicated as dead, and 30 were non-injured and/or psychologically shocked, so that the management of these categories could also be trained.

A separate bank of patients not generated by the incident, but already being cared for in the hospital at the time of alert, or referred there during the incident, was created. This bank included both ambulatory patients for the emergency department and in-hospital patients of different categories (elective surgery, needing immediate surgery, undergoing or needing intensive care) (Fig. 3).

Fig. 3

Patients not involved in the incident but already staying in, or arriving at, the hospitals during the response can be indicated by special cards, including both ambulatory cases and in-hospital patients of different categories

Standardized resources and health-care structure as a base for the simulation

To achieve a uniform community structure and health-care organization as a base for interactive training with participants from many countries, and also to achieve a reproducible result of the training, a standardized country was constructed as the location for the scenario ("Anyland", Fig. 4), with a structure that could be representative of most European countries. The resources for health-care and its collaborating organizations, including all hospital, rescue, and transport facilities, were described in detail. This description, which also included a uniform terminology and a uniform structure for major incident preparedness and response, was sent out to the participants to study before the course.

Fig. 4

A map of “Anyland”, a hypothetical country constructed for the simulation exercises to achieve a standardized community structure and health-care organization, also including a uniform terminology and a uniform structure for major incident preparedness and response as a base for interactive training in international courses with participants from many countries. The description of Anyland, given to the trainee before attending the courses, included a detailed description of all resources involved in a major incident response (http://www.macsim.se, with permission)

Course venue

The setup of the course venue is illustrated in Fig. 5. This figure shows the complete setup at the end of the development period with involvement also of collaborating agencies (rescue services, the police) and a complete regional coordination center. Figures 6, 7, 8, and 9 show live pictures from this setup.

Fig. 5

The drawing shows the present setup of the simulation model as a result of the described process of development (see further the text). This illustrates a setup with four hospitals, but the model can easily be expanded to a practically unlimited number of hospitals. For the MRMI courses so far, 3–4 hospitals have been used, depending on the number of applicants (http://www.macsim.se, with permission)

Fig. 6

The site of the incident. Here, a building inside which a bomb has exploded in a terrorist attack is simulated by three boards connected as a square and covered with preprinted magnetic film. Primary triage of spontaneously evacuated patients is done outside the building. Inside the building, triage teams work in collaboration with the police and rescue services. Casualties can only be evacuated from zones that have been secured and at the speed that access to rescue staff permits. In the background is the command post from which the Rescue, Police, and Medical Incident Commanders lead the work on scene (see also Fig. 5)

Fig. 7

All ambulance and helicopter transports are simulated in real time and with real resources. The destinations of the patients are decided by the Ambulance Loading Officer, communicating by radio with the Regional Medical Command Center and the Ambulance Dispatch Center. At the given time of arrival, the patient is transferred to the Emergency Department of the given hospital

Fig. 8

a After primary triage at the entrance to the hospital, the severely injured are taken care of by modified trauma teams ("Major Incident Resuscitation Teams"), whereas the less severely injured are transferred to units for ambulatory care or to wards. Examinations and treatments are, as on scene, done with the consumption of real time and resources. The severely injured require a sufficient amount of staff. Plain X-ray, computed tomography (CT), or ultrasonography can only be done if equipment and staff are available. The results of X-ray are available as a base for the decisions (see Fig. 9). b Accurate decisions must be made with regard to indications and strategy for surgery. If a patient needs immediate surgery and no theater is available, the patient may be lost; facilities for secondary transport to another hospital may be limited. The need for a sufficient number and competence of staff is emphasized. All surgery is done in real time. Access to ventilators in the intensive care unit (ICU, to the right) is, together with the surgical theaters, the most critical factor for the surge capacity of the hospital

Fig. 9

In the present design of the simulation system, every “patient” admitted to the hospital will have available X-ray (lung, skeletal), ultrasonography, and CT (when indicated) images as a base for the discussion of priorities and strategy for diagnosis and treatment (a). Drawings of surgical findings for all surgical patients are also available as a base for the discussion of surgical strategy and technique (b)

Evaluation of the result of the response for each training session

The following criteria were used to evaluate the outcome of the response:

  • Times for alert and response,

  • Times for primary and secondary reports,

  • Over- and under-triage (see the sketch after this list),

  • Ambulance and helicopter waiting times,

  • Preventable mortality and complications. Based on the information supplied by the simulation system as described above, every patient declared dead by the instructors was related to trauma scores (ISS, RTS) and analyzed with regard to cause of death and/or severe complications, and whether it was considered preventable or not (Fig. 10).

    Fig. 10

    For every dead patient, the trauma score, time and cause of death, and whether the fatal outcome could have been avoided with optimal treatment are registered. Every death considered preventable is analyzed and discussed during the evaluation of the response
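As a complement to the criteria above, the following Python sketch shows one possible way of tallying over- and under-triage from the triage decisions recorded during a session. The definitions used here (over-triage: patients given "immediate" priority who did not need it; under-triage: patients needing immediate care who were not given that priority), the labels, and the example data are assumptions for illustration, not the exact criteria used in the MRMI evaluations.

```python
def triage_rates(cases):
    """cases: iterable of (assigned_priority, needs_immediate) pairs, where
    assigned_priority is 'red', 'yellow' or 'green' and needs_immediate is True
    for patients who actually required immediate care (illustrative labels)."""
    reds = [c for c in cases if c[0] == "red"]
    critical = [c for c in cases if c[1]]
    over = sum(1 for prio, need in reds if not need) / len(reds) if reds else 0.0
    under = sum(1 for prio, need in critical if prio != "red") / len(critical) if critical else 0.0
    return over, under

# Example: four casualties triaged during an exercise (made-up data).
cases = [("red", True), ("red", False), ("yellow", True), ("green", False)]
over_triage, under_triage = triage_rates(cases)
print(f"over-triage {over_triage:.0%}, under-triage {under_triage:.0%}")   # 50%, 50%
```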

Evaluation of the training model

Evaluation of the accuracy and quality of the simulation system was performed using standardized evaluation forms given to the participants at the beginning of the training. Assessment of the effects of the training on specific competencies within major incident response was, in this study, performed using self-assessment forms filled in by the participants at the end of the course on a floating assessment scale (Table 3 below). The same methodology was used for the assessment of the accuracy of the course as a whole for the training of major incident response (Table 4 below).

Statistical analysis

In the cross-sectional study on the trainees’ evaluation of the accuracy of the training, data were processed using SPSS Version 19 (IBM, New York, NY). Descriptive statistics including means (M), medians (MD), standard deviations (SD), and interquartile ranges (IQR) were calculated for all numeric variables.
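For illustration, the same descriptive measures can be reproduced with open-source tools. The sketch below uses pandas on made-up example scores rather than the study data; the study itself used SPSS.

```python
import pandas as pd

# Made-up scores for one self-assessment item on the 1-5 scale (not study data).
scores = pd.Series([4, 5, 4, 3, 5, 4, 4, 5])
summary = {
    "M": scores.mean(),
    "MD": scores.median(),
    "SD": scores.std(),                                   # sample SD (ddof = 1), as in SPSS
    "IQR": scores.quantile(0.75) - scores.quantile(0.25),
}
print(summary)
```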

Ethical considerations

No ethical committee approval was required, since the participants completed the evaluation forms anonymously.

Results

Courses

During the period 2009–2012, the system was used and tested in nine international MRMI courses, with a total of 470 participants from all over the world (Table 1). The distribution of participants between different categories of medical and administrative staff is shown in Table 2. The majority of participants in these courses were specialist staff with many years of clinical experience, and 61 % had practical experience of major incidents or disasters.

Table 1 International Medical Response to Major Incidents (MRMI) courses organized during the period 2009–2012
Table 2 Distribution of participants between different categories of staff in the courses listed in Table 1

All courses were of the same length and had the same general structure, with two days of interactive simulation exercises preceded by one day of preparation with lectures, combined with practical group training.

Development of the model during the study period

Based on repeated evaluations and accumulated experience, the simulation methodology was developed during the study period with regard to the following items:

Command structure

Initially, the roles of collaborating organizations such as fire and rescue services and the police were performed by instructors to train communication and collaboration with these agencies.

Step by step, "real" staff from these organizations were included in the training. Finally, these staff were fully integrated as participants in the course, both on scene and at the regional command level (Fig. 5).

From the beginning, the hospital command centers were only staffed with 1–2 trainees. Extension of communication facilities (see below) and development of the regional command structure increased the tasks for this function, justifying additional trainees in these positions (Fig. 5).

Scene

The site of the incident was, in the beginning, only depicted by one or two whiteboards, where all the "patients" were spread out randomly for primary triage. To increase the realism and to better train the collaboration with the rescue services and the police (see below), the site of the incident was rebuilt as a construction in which the trainees had to work inside and outside a partially collapsed building (Fig. 6), with primary triage and, when needed, treatment of trapped patients before and during extrication. Secondary triage was performed in parallel lines on boards (Fig. 5).

Transport

Transport was initially limited to loading ambulances, helicopters, and other transport vehicles when patients were ready for evacuation. Runners delivered them to the hospitals after the calculated run-time for the vehicles in use. In order to include the decisions and treatments made during transport, dedicated ambulance staff were later given this task and delivered the patients to the hospitals with handover reports (Fig. 7).

In-hospital management

In the hospitals, patient management was, from the beginning, mainly focused on in-hospital triage. During the development process, more emphasis was placed, step by step, on training decision-making also with regard to primary treatment, including indications for surgery and surgical strategy. This created more scope, and also more need, for staff of different categories in the hospitals. Additional data for patients on admission to hospital were developed in the form of clinical findings on admission, live pictures of injuries, X-ray and ultrasonography findings, and pictures from surgery (Fig. 9a, b). This information could then be integrated sequentially with the changes in pathophysiology indicated on the cards by the instructors (see above) as a base for decisions on the strategy for further diagnosis and treatment, making this simulation model also a good model for training trauma management.

Communication

In the first courses, only a limited number of telephones and/or radios were available. The communication between, for example, the hospitals and their command centers had to be done by runners or by verbal contact.

The experiences, however, clearly illustrated a need for extended communication lines to properly train communication and coordination between units. To achieve this, a complete mobile telephone switchboard with wireless telephones was introduced in cases where the number of telephones in the course venue was insufficient (Fig. 5). This permitted an unlimited number of telephones, where every operator could talk to any other operator simultaneously. Every participating agency (health-care, ambulance service, rescue service, police) used its own radio sets and channels. The main communication lines are illustrated in the figure. In the hospitals, the Emergency Department, Surgery, and the ICU had their own telephone numbers; other calls were routed via a "hospital operator" staffed by the instructors, who could thereby "play" any function in the hospital to train the hospital command group.

Preexercise training

Initially, the first day of the course was used for lectures about management and performance in major incident response, and only a short time was used for training in the use of the simulation cards. Experience showed that this time was too short, and it was successively increased during the development process. At the same time, the participants wanted two full days of simulation exercises (see the evaluation below) without extending the length of the course. In the present design, the course was made fully interactive by developing a course book [3] for precourse studies, confirmed by a pretest on arrival. Theoretical lectures were reduced to a short introduction, and the first day was devoted to practical group training in triage and management using the simulation system.

Replacement of magnetic whiteboards with magnetic film

The simulation system was based on magnetized cards that were originally intended to be attached to magnetic whiteboards. In parallel with the development described above, this generated a great need for such boards: a total of 25 if the simulation was to be run with four hospitals according to the standardized Anyland design, five of which had to be double-sided. Even if a varying number of permanent whiteboards was available in the course venue, this could mean considerable expense in setting up the course. In the latest courses, an alternative method was tested, using preprinted, thin, laminated magnetic films that could be set up anywhere, making the setup both simpler and cheaper.

Figures 6, 7, 8, 9, and 10 above show live pictures from the course at the end of the development period described above.

Evaluation of the accuracy and quality of the methodology

The first three courses in Table 1 were considered pilot courses, since many adjustments to the simulation methodology were made between them. In the next three courses in the table, the evaluation focused on the accuracy of the simulation cards for this training and on the quality of the exercises based on this system, using evaluation forms that addressed the 2 days of simulation separately. This evaluation was based on a total of 147 participants, of whom 138 returned the questionnaire (return rate 94 %).

Figure 11 illustrates that 63 % of the responding participants evaluated the accuracy of the simulation cards for this purpose as “very good” and 33 % as “good”, the highest two of the six given alternatives.

Fig. 11

The trainees’ responses to the question “How do you evaluate the methodological accuracy of the patient cards for the simulation exercises?” Number of responders 123/138 = 89 % response rate

Figure 12 illustrates how the trainees, based on the same alternatives, ranked the quality of the 2 days of simulation exercises: for the first day, 69 % "very good" and 26 % "good"; for the second day, 77 % "very good" and 21 % "good", which means that more than 95 % of the trainees selected the highest two of the six alternatives on the scale.

Fig. 12

a The trainees’ response to the question “How do you evaluate the quality of simulation exercise 1?” Number of responders 134/138 = 97 % response rate. b The response to the question “How do you evaluate the quality of simulation exercise 2?” Number of responders 132/138 = 96 % response rate

Of the responding trainees, 94 % preferred two full days of simulation exercises and 6 % only one day.

Evaluation of the effect of the training on specific competencies

In the following courses, the evaluation focused on the results of the training related to specific competencies, registered on a floating scale from 1 to 5, with separate evaluations for trainees in the prehospital and hospital settings in order to identify differences in outcome between these categories.

This evaluation included three courses: Slovenia 2011, Milano 2011, and Stockholm 2012 (the result from the course in Slavonski Brod 2011 could not be included because of a different evaluation process). The evaluation was based on a total of 107 participants in the prehospital and hospital setting (from a total of 146 including coordinating functions) and the response rate was 98 % (n = 105). The participants in the coordinating functions (n = 27) did not respond to the questions regarding the prehospital and hospital setting.

The first part of Table 3 shows the responses of the prehospital trainees (n = 62), illustrating that the responders ranked, on a floating scale of 1–5, the course as increasing their competencies in decision-making with regard to command and coordination on scene with a score of 4.15 ± 0.70 (M ± SD) (MD 4, IQR = 1). The corresponding figures for primary triage on scene were 4.40 ± 0.76 (MD 5, IQR = 1), decision-making regarding individual patient management (primary resuscitation and treatment) on scene 4.21 ± 0.69 (MD 4.0, IQR = 1), secondary triage on scene 4.36 ± 0.62 (MD 4, IQR = 1), and decision-making with regard to transport (priority/destination) 4.11 ± 0.83 (MD 4, IQR = 1).

Table 3 The trainees’ ranking of the extent that the course increased their specific competencies on a floating scale of 1–5, where 1 = not at all and 5 = very much

The second part of Table 3 shows the corresponding figures for the hospital trainees (n = 43): command and coordination 4.21 ± 0.68 (MD 4, IQR = 1), primary triage in hospital 4.30 ± 0.67 (MD 4, IQR = 1), individual patient management (primary resuscitation/treatment) 4.14 ± 0.94 (MD 4.0, IQR = 1), and secondary triage in hospital 4.28 ± 0.77 (MD 4, IQR = 1). The increased ability to identify and understand the critical factors for hospital surge capacity was ranked as 4.30 ± 0.80 (MD 5, IQR = 1) on the same scale.

Table 4 shows how the prehospital and hospital trainees ranked the accuracy of the course as a whole for the training of major incident response: prehospital 4.35 ± 0.73 (MD 4, IQR = 1) and hospital 4.30 ± 0.74 (MD 4, IQR = 1).

Table 4 The trainees’ evaluation of the accuracy of the course for the training of major incident response on a floating scale of 1–5, where 1 = not accurate and 5 = fully accurate

Discussion

Demands on simulation models for the training of major incident response

The keystone in training for major incident response is the training of decision-making on all levels, from the level of command (which resources to alert and how to use them?) to the level of treatment of individual casualties (what to do with this patient in this situation, how to do it, and with which priority?). An effective way to train decision-making is in a simulation model where all information needed for the decision is available and the consequences of the decision can be clearly illustrated [7].

Since the most commonly reported cause of failure of the response is deficiencies in communication and coordination between the involved units [8–11], the whole chain of response needs to be trained together in order to prevent such failures. This requires access to good simulation models: live exercises with dressed-up casualty actors transported and brought into hospitals can be justified for testing certain parts of the organization on single occasions, but are much too expensive and ineffective to use for the training of all staff who need such training [15].

Another factor generating a need for simulation exercises is the importance of including hospitals in this training. It is an old misconception that, as soon as casualties have reached the hospitals, the problems are over. The demands on efficiency force today's hospital managers and leaders to utilize all resources maximally for routine daily health-care, which means that the hospital resources are "slimmed", with very little or no reserve capacity for extra casualty loads. Even a large hospital may have difficulties in covering a sudden and immediate need for surgical theaters or ventilators, which means that patients from a major incident may have to be spread between many hospitals. This has to be done from the beginning: secondary transports consume resources and time that may not be available [16].

This is, today, a globally recognized problem and has, in countries like the USA, led to a requirement from the authorities to include the training of hospital staff in emergency preparedness programs [17].

Access to simulation models for the training of emergency preparedness

During the last decade, simulation models have been increasingly used within the health-care sector, including the field of emergency preparedness. Olson et al. [13], in a comprehensive review of the literature on games and simulations in emergency preparedness during the period 2007–2011, concluded that such methods:

  • Could be recommended as an effective training method for teaching emergency preparedness,

  • During the study period had emerged as an important tool in developing critical competencies related to emergency preparedness and response,

  • Could be cost-effective planning tools.

Reviewing the currently available and reported simulation models for this kind of training shows that the majority focus on only one of the components in the chain of response, often the emergency department of the hospital. The emergency department has, of course, an important function in the reception, primary triage, and resuscitation of casualties, but it is rarely a critical limiting factor for the surge capacity of the hospital [16]. Instead, the critical factor is usually the access to ventilators and surgical theaters. In addition, focusing on only one of the functions in the hospital excludes the training of communication/coordination between units, which is, as already mentioned, the most commonly reported cause of failure in the response.

Many of the reported models focus on bioterrorism and irradiation. To prepare and train for these kinds of events is, of course, very important, and this is a very suitable field for simulation models. However, it is surprising that so few models focus on physical trauma, which is, so far, the most common cause of major incidents [8–10].

Hsu et al. [18], in a recent data-based review of the literature on methods for training hospital staff for mass-casualty incidents, stated that there was insufficient evidence to support firm conclusions about the effectiveness of specific training methods because of the marked heterogeneity of studies, weaknesses in the study design, and limited number of exercises reported in each study.

Based on these reviews and other literature in the field, we were unable to identify any existing simulation model meeting the given objectives for the MRMI course. The MACSIM system was originally established for scientific evaluation and development of methodology, and was, at the time the MRMI courses were designed, already being used to compare different triage methods in major incident response [14]. To fulfill the requirements of a scientific tool, the system had to provide information detailed and accurate enough to serve as a base for scientific evaluation, which should also make it suitable as a tool for training.

Experiences from the development process

It is not realistic to believe that an optimal training model can be achieved as early as the first courses. To train the whole chain of major incident response simultaneously and, at the same time, optimize the effect of the training for all involved categories was a challenge and required a development process based on continuous critical evaluations. We have described this development process above as a result of the study, because what we learned from it may be of interest to those who are in the process of starting or developing such a course. Some additional conclusions from this process can be summarized as follows.

Preparative training with the simulation system

Enough time should be devoted to training with the simulation system in smaller groups before the exercises. In the first course, this time was too short, generating justified negative criticism, and it was then successively increased to half a day. This served not only as training in the use of the system, but also as useful training in decision-making in triage and primary management.

At the same time, the trainees' opinion about the need for two full simulation days was uniform: 94 % requested 2 days, and that figure has been similar in all courses so far. Many mistakes were made by the trainees on the first day, and they should be given the chance to improve during the course. Keeping the course within the three allotted days thus required a reduction of lectures, which were, to a major extent, replaced by access to the literature for precourse studies. However, the evaluations from the latest courses where this strategy was used indicate a remaining need for at least some lectures, and how this should be accommodated within the same timeframe is an ongoing challenge.

The patient bank

The scenario should include a sufficient number of patients to identify the critical limiting factors for surge capacity. If a real scenario is used—as in this case—the majority of patients are usually ambulatory cases requiring limited resources, and a sufficient number of severe injuries are required in order to apply pressure on the operating room and ICU in a large hospital. The total casualty load in this scenario, 300 injured distributed between four hospitals (two bigger and two smaller, total bed capacity 1,680) within a range of 10–50 min by car, was just enough to identify the capacity-limiting factors and require the transfer of patients to additional hospitals further away, which is an important part of the training.

Training of communication

Communication needs practical training, not only with regard to the use of equipment, but also with regard to how to give brief and clear messages. This is essential in such situations, but it is something that medical staff, in general, are less trained in than the rescue services and the police. It requires a complete range of communication equipment with radios and telephones, as illustrated in Fig. 5. An improvement between the two days was noted as a result of the fully extended training in this area, but this was not scientifically confirmed, because no specific variables for this training were measured in the present study.

Evaluation of the accuracy of the training

The evaluations of the pilot courses were mainly used for, and also resulted in, a number of adjustments of the methodology, which made them incomparable with the evaluations from the following courses. After the pilot courses, the evaluations were made in two steps:

  • For the three first courses, with focus on the accuracy of the simulation cards and quality of the simulation exercises.

  • For the next three courses, with focus on how the different categories of trainees evaluated the effects of the training on specific competencies.

The reasons to adjust focus for the evaluations during the study period were:

  • The uniform evaluations from the first courses, provided by more than 140 experienced trainees of different categories, were considered sufficient to confirm the accuracy and quality of the simulation methodology from the trainees' perspective, as far as this could be done with this methodology.

  • Since the strategy had changed from mixed to more “pure” roles in either prehospital or hospital training, it was considered important to separate the evaluations between these competencies.

  • It was also considered important to illustrate the trainees’ assessment of the effects of the training on specific prehospital and hospital fields of competence, and, finally, also of the accuracy of the course as a whole for the training of major incident response for both these categories separately.

When evaluating the estimated increase in knowledge and skills generated by the course, it should be taken into consideration that these trainees were very experienced. All of them held the same position in their normal work as during the training, and most of them were specialists with many years of clinical experience. The majority had experience of major incidents, including scenarios like the London terror bombings in 2005, terror bombings in the Middle East during the last several decades, and "natural disasters" like the Haiti earthquake in 2010. Increases in knowledge and skills rated between 4 and 5 on a scale of 1 to 5 by these trainees can, therefore, be considered a good outcome of this 3-day course.

An interesting and uniform observation by the faculty at the oral evaluations was that the more experience the trainees had, the more positive they were in their assessment of the course.

An important aspect is the trainees’ evaluation of the accuracy of the whole course for the training of major incident response. This reached a level of over 4 on the same scale both for prehospital and hospital trainees.

Manual or computerized setup

An alternative to the "manual" application is to make the system computer based. Such an alternative incurs high development costs, but is cheaper to use. The authors have many years of experience of training with the predecessor to MACSIM, the Emergo Train® System (ETS) [19, 20], and this experience supports the "manual" approach: handling and moving patients along a visible chain of management adds another dimension, better illustrates the coordination between units, and gives the trainee the possibility to communicate directly with other members of the team as well as with staff of other categories. A computerized variant of the ETS was developed many years ago with governmental support and at considerable cost, but it could never compete with the "manual" alternative.

Whether the simulation should be manual or computer based is currently under discussion, and a computer-based alternative is being planned. In the authors' opinion, both alternatives should be available, and perhaps the optimal model will be a combination of manual and computer-based performance, in which the trainee can also undertake preparative training in parts of the response, such as triage and treatment of casualties, before the course.

Limitations of the study

One limitation is linked to the description of the outcome of the trainees' response, where "calculated preventable mortality" is one of the parameters. This is a calculation based on the description of the injury (definitive diagnosis) and extrapolation from data on the clinical course in patients with equivalent injuries and equal trauma scores. Thus, it has to remain a calculation, since no available data can tell exactly within which time a treatment has to be performed to prevent mortality.

However, the definition of "preventable mortality" was used only for very clear cases; for example, a patient with an expanding tension pneumothorax, illustrated by chest trauma with reduced breath sounds in combination with a high respiratory rate, cyanosis, and incipient circulatory shock, who was transported (or ventilated) without needle decompression or a chest drain; or blast lung injury with a high respiratory rate, cyanosis, and hemoptysis with prolonged absence of ventilatory support.

Also, all these assessments were further backed up by calculations of the probability of survival using the RTS as well as the ISS.
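Such probability-of-survival estimates from the RTS and ISS are commonly made with a TRISS-type logistic model. The Python sketch below shows the general form of such a calculation; the coefficient values are placeholders for illustration only and are not the coefficients used in the study, which would have to be taken from a published TRISS revision.

```python
import math

def probability_of_survival(rts: float, iss: int, age_index: int,
                            b0: float, b1: float, b2: float, b3: float) -> float:
    """TRISS-type estimate: Ps = 1 / (1 + exp(-b)), with
    b = b0 + b1*RTS + b2*ISS + b3*age_index.
    age_index is 0 for younger patients and 1 for older patients (TRISS convention)."""
    b = b0 + b1 * rts + b2 * iss + b3 * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Example call with made-up coefficients (illustrative only, not valid for clinical use).
ps = probability_of_survival(rts=6.2, iss=25, age_index=0,
                             b0=-1.25, b1=0.95, b2=-0.08, b3=-1.9)
print(f"Estimated probability of survival: {ps:.2f}")
```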

Even if the accuracy of these calculations could still not be scientifically confirmed, the standardized conditions made the results reproducible: the number of casualties, the injuries with their trauma scores, the geography, and the resources were the same in the same type of exercise. Also, since the same well-defined criteria for identifying preventable mortality were used, exercises of the same type could be compared with regard to outcome.

Another limitation is the objectivity of the trainees' evaluations as a measure of the accuracy of the training. Such evaluations are influenced by many factors, such as friendly teachers and good course organization. In this study, these evaluations were used as, and should be interpreted as, a registration of the trainees' opinion of the accuracy of the simulation model compared with the real situation. The overwhelming majority of these experienced participants considered the simulation model to be accurate from this point of view. This is as far as it is possible to reach with evaluations of this kind. The application of a methodology for more objective validation of the training is planned as a further step in this project.

Conclusions

The simulation system tested in this project could, with adjustments based on accumulated experience, be developed into a tool for the training of major incident response that meets the specific demands on such training derived from recent years' experiences of major incidents and disasters. In several courses, experienced trainees evaluated the methodology as accurate for this training and as markedly increasing their perceived knowledge and skills in fields of importance for a successful response to a major incident.