Summary points

What was known before the study?
- Interface terminologies are used for actual data entry into electronic records, facilitating collection of clinical data while simultaneously linking users' own descriptions to structured data elements from a reference terminology.

What did the study add?
- User-based usability evaluations of an interface terminology.
- Usability is evaluated on five aspects: effectiveness, efficiency, learnability, overall user satisfaction, and experienced usability problems.
- Detailed insight into how clinicians interact with a controlled compositional terminology through a terminology application.
- The extensiveness, the complexity of the hierarchy, and the language usage of an interface terminology are defining for its usability.
Interface terminologies are used for data entry into electronic medical records, facilitating collection of clinical data while simultaneously linking users' own descriptions to structured data elements from a reference terminology such as the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) [1], [2]. Before full implementation, the usability of such an interface terminology should be tested in clinical practice to evaluate whether it meets its intended purpose and to discover areas for improvement. This study reports on the usability evaluation of DICE (Diagnoses for Intensive Care Evaluation), an interface terminology based on SNOMED CT deployed through a terminology application in a Patient Data Management System (PDMS) (Metavision, iMDsoft, Sassenheim, The Netherlands) to register the reasons for admission in Intensive Care (IC) [3].
It has been argued that the correctness and specificity of terminology-based data registration in clinical settings depend not only on the content of the terminological system, but also on certain user characteristics such as their registration habits and their experience with the terminological system [4], [5]. Furthermore, usability issues concerning the design of the graphical user interface of the terminology application impact the efficacy of terminology-based data registration [4], [5]. Clinicians will optimally use an interface terminology for structured data entry if the presentation and browsing of information through the terminology application are intuitive, easy to use, and not time consuming [6]. Furthermore, for the human–computer interaction to be effective, the action sequences that the users have to carry out in the application should map to the user's mental model [7].
Typically, terminologies are evaluated in terms of content coverage while terminology applications for data entry are evaluated in terms of usability [8], [9], [10], [11]. Combined approaches to examine how clinicians interact with the terminological system during data entry are gaining interest [5], [12], [13], [14]. A user's inability to find a clinical concept using an interface terminology might be caused by a misunderstanding of the terminology content or may be due to the (lack of) functionalities and the graphical user interface design of the terminology application. Therefore, the evaluation of an interface terminology should not only concern the terminology content, but also the data entry application as integrated in a PDMS [1], [2], [5]. In this study both aspects of the DICE system were evaluated. Usability, i.e. the extent to which users can achieve specific sets of tasks in a particular environment [15], was measured on five aspects: effectiveness (accuracy and completeness of the recorded reasons for admission), efficiency (time spent to retrieve the correct concept in relation to the effectiveness), overall user satisfaction (users’ attitude with respect to the usability of the DICE system), learnability (whether the users can easily learn to use DICE and improve their performance over time), and usability problems encountered by the DICE users [1], [15], [16], [17], [18]. To assess the five usability aspects, we examined how clinicians interacted with the interface terminology during data entry. Evaluation was performed both at the baseline and three months after full implementation of the system. The purpose of evaluating the system before and after the implementation was to assess the learnability of DICE and to study whether the usability problems revealed at the baseline persisted after the system had been in routine use for three months [17]. 
Consequences of usability problems, such as reduced data quality, are out of scope, and hence have not been studied.
The interface terminology based on SNOMED CT contains an IC-specific subset of SNOMED CT (referred to as terminology content) with 83,125 disorder and procedure concepts, 150,657 attribute values to further specify the disorder and procedure concepts, and their English terms [3]. This core SNOMED CT subset was extended with 325 concepts, 1243 relationships between these concepts, and 597 descriptions which are not present in SNOMED CT, but are needed to cover reasons for admission in the ICU.
This study was performed in an adult Dutch ICU with 28 beds and more than 1500 yearly admissions. Since 2002, this ward has used a commercial PDMS. This PDMS is a point-of-care clinical information system, which runs on a Microsoft Windows platform, uses a SQL Server database, and includes computerized order entry; automatic data collection from bedside devices; some clinical decision support; and (free-text) documentation of clinical phrases (e.g. reasons for admission and complications) during the ICU stay.
The results of the SUS questionnaire indicate that the usability of the DICE system fell short of the users' expectations. In the pre-implementation test, the average SUS score was 47.2 (range: 32.5–65). In the post-implementation test, this average decreased to 37.5 (range: 0–65). So, according to the scale provided by Bangor et al. [26] (score of 0–25: worst, score of 25–39: poor, score of 39–52: OK, score of 52–85: excellent, and score of 85–100: best imaginable), the acceptability of the DICE system was rated "OK" before implementation and "poor" afterwards.
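For reference, SUS scores such as those above are conventionally computed with Brooke's standard scoring rule: ten Likert items (1–5), odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is scaled by 2.5 to a 0–100 range. The sketch below illustrates this generic rule only; the response values shown are hypothetical and are not data from this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses (1-5), using Brooke's standard scoring:
    odd items contribute (response - 1), even items (5 - response),
    and the summed contributions (0-40) are multiplied by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical examples (not study data):
print(sus_score([3] * 10))                        # all-neutral answers -> 50.0
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # most favorable answers -> 100.0
```

Per-respondent scores computed this way are then averaged across participants, which is how group means such as 47.2 and 37.5 arise.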
In this study we performed a usability evaluation of a large compositional interface terminology based on SNOMED CT for registration of reasons for ICU admission. Overall, the results of the usability evaluation revealed that the usability of the DICE system fell short of the users’ expectations. The use of post-coordination is cumbersome and was influenced by the poor usability of the terminology application as well as the size of the terminology content. The qualitative measurements revealed
This study provides detailed insight into how clinicians interact with a large controlled compositional interface terminology to carry out data entry. The combination of qualitative and quantitative analyses enabled us to observe and analyze clinicians' interactions with a domain-specific interface terminology.
The usability problems could be linked to either the interface terminology or the terminology application. Nevertheless, the source of the inadequate user performance was found in the
FBR was the initiator of the study and contributed to conception and design of the study, acquisition of the data, analyses and interpretation of the data. She wrote the first draft of the article and processed the comments of the co-authors. NdK, RC and MJ contributed to conception and design of the study, data interpretation and critically revised all drafts of the article. MD contributed to the acquisition of data, analyses and interpretation of data and reviewed all drafts of the article.
None.
The authors thank the participants of the usability evaluation sessions for their cooperation in conducting this study. The authors would also like to express their gratitude to the usability experts R. Khajouei and L. Peute for their contribution in rating the severity of the observed usability problems.