Dear Editor,
We read with great interest the article by Girshausen et al. [1]. The authors validated and compared the accuracy of nine scores in predicting the prognosis of severely injured trauma patients and concluded that the RISC II was the best predictor of mortality, with an AUROC (area under the receiver operating characteristic curve) of 0.92, while the APACHE II, SOFA, and Marshall scores remained helpful tools, with AUROCs ranging from 0.69 to 0.81. The ISS, NISS, RTS, EAC, and PTGS scores provided poorer mortality prediction, with AUROCs ranging from 0.57 to 0.66. The authors should be commended for their choice of topic and the scale of their work. After reading the article carefully, we have some suggestions.
First, in view of the very high heterogeneity of polytrauma, a larger sample is needed when externally validating and comparing different prediction models [2]. Had the authors reported a formal sample-size estimate, their conclusions would have been more persuasive.
Second, pairwise comparisons of AUROCs could have been tested for statistical significance using DeLong's test [3].
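As an illustration of the suggestion above (not part of the authors' analysis), the following is a minimal Python sketch of DeLong's nonparametric test for two correlated AUROCs, assuming both scores are evaluated on the same patients; it depends only on NumPy and SciPy, and all function names are our own.

```python
import numpy as np
from scipy.stats import norm

def _auc_components(pos, neg):
    # Heaviside kernel: 1 if the positive case outranks the negative, 0.5 for ties
    psi = (pos[:, None] > neg[None, :]).astype(float)
    psi += 0.5 * (pos[:, None] == neg[None, :])
    # AUC plus the per-case structural components V10/V01 of DeLong et al. (1988)
    return psi.mean(), psi.mean(axis=1), psi.mean(axis=0)

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for two correlated AUROCs estimated on the
    same cases. Returns (auc_a, auc_b, z statistic, p value)."""
    y = np.asarray(y_true)
    sa, sb = np.asarray(scores_a, float), np.asarray(scores_b, float)
    auc_a, v10_a, v01_a = _auc_components(sa[y == 1], sa[y == 0])
    auc_b, v10_b, v01_b = _auc_components(sb[y == 1], sb[y == 0])
    m, n = int((y == 1).sum()), int((y == 0).sum())
    s10 = np.cov(np.vstack([v10_a, v10_b]))  # 2x2 covariance over positives
    s01 = np.cov(np.vstack([v01_a, v01_b]))  # 2x2 covariance over negatives
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    z = (auc_a - auc_b) / np.sqrt(var)
    return auc_a, auc_b, z, 2 * norm.sf(abs(z))
```

In practice, established implementations (e.g. `roc.test` in the R package pROC) would typically be used rather than a hand-rolled version.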
Third, discrimination is the only aspect of model performance assessed in this study, and it should be supplemented with calibration and decision curve analysis (DCA). Current guidance recommends that all three be used to comprehensively evaluate prediction models [4, 5].
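To make the two additional metrics concrete, here is a hedged NumPy-only sketch (our own illustration, not the authors' method): net benefit as used in decision curve analysis, and calibration-in-the-large as a simple overall calibration check. Function names and thresholds are illustrative assumptions.

```python
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of treating patients whose predicted risk meets the
    threshold, as plotted in a decision curve (Vickers & Elkin)."""
    y = np.asarray(y_true)
    treat = np.asarray(risk, float) >= threshold
    n = len(y)
    tp = np.sum(treat & (y == 1))  # treated patients with the outcome
    fp = np.sum(treat & (y == 0))  # treated patients without the outcome
    # False positives are weighted by the odds of the risk threshold
    return tp / n - fp / n * threshold / (1 - threshold)

def calibration_in_the_large(y_true, risk):
    """Observed minus expected event rate: 0 indicates good overall
    calibration; positive values indicate the model underpredicts risk."""
    return float(np.mean(y_true) - np.mean(risk))
```

A decision curve is obtained by evaluating `net_benefit` over a grid of clinically plausible thresholds and comparing the model against the treat-all and treat-none strategies.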
Data availability
Data sharing is not applicable to this article, as there were no new data generated or analyzed during the course of this study.
References
1. Girshausen R, Horst K, Herren C, Bläsius F, Hildebrand F, Andruszkow H. Polytrauma scoring revisited: prognostic validity and usability in daily clinical practice. Eur J Trauma Emerg Surg. 2022. https://doi.org/10.1007/s00068-022-02035-5.
2. Pavlou M, Qu C, Omar RZ, Seaman SR, Steyerberg EW, White IR, et al. Estimation of required sample size for external validation of risk models for binary outcomes. Stat Methods Med Res. 2021;30(10):2187–206. https://doi.org/10.1177/09622802211007522.
3. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837–45.
4. Van Calster B, Steyerberg EW, Wynants L, van Smeden M. There is no such thing as a validated prediction model. BMC Med. 2023;21(1):70. https://doi.org/10.1186/s12916-023-02779-w.
5. Binuya MAE, Engelhardt EG, Schats W, Schmidt MK, Steyerberg EW. Methodological guidance for the evaluation and updating of clinical prediction models: a systematic review. BMC Med Res Methodol. 2022;22(1):316. https://doi.org/10.1186/s12874-022-01801-8.
Funding
None.
Contributions
GB: writing original draft; QS: reviewing and editing. All authors read and approved the manuscript prior to submission.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Ethical approval and consent to participate
Not applicable.
Cite this article
Ba, G., Shi, Q. Letter to the editor on: “Polytrauma scoring revisited: prognostic validity and usability in daily clinical practice”. Eur J Trauma Emerg Surg 49, 2637 (2023). https://doi.org/10.1007/s00068-023-02354-1