
Central nervous system infection in the intensive care unit: Development and validation of a multi-parameter diagnostic prediction tool to identify suspected patients

  • Hugo Boechat Andrade ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Visualization, Writing – original draft, Writing – review & editing

    hugo.boechat@ini.fiocruz.br

    Affiliations Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil, Sexually Transmitted Diseases Sector, Biomedical Institute, Universidade Federal Fluminense (UFF), Niterói, RJ, Brazil

  • Ivan Rocha Ferreira da Silva,

    Roles Formal analysis, Investigation, Validation, Writing – review & editing

    Affiliation Department of Neurological Sciences, Rush University Medical Center, Chicago, IL, United States of America

  • Justin Lee Sim,

    Roles Investigation, Validation

    Affiliation Department of Neurological Sciences, Rush University Medical Center, Chicago, IL, United States of America

  • José Henrique Mello-Neto,

    Roles Formal analysis, Investigation, Resources, Validation

    Affiliation Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

  • Pedro Henrique Nascimento Theodoro,

    Roles Investigation, Validation

    Affiliation Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

  • Mayara Secco Torres da Silva,

    Roles Formal analysis, Investigation

    Affiliation Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

  • Margareth Catoia Varela,

    Roles Data curation, Formal analysis, Investigation, Software

    Affiliation Immunization and Health Surveillance Research Laboratory, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

  • Grazielle Viana Ramos,

    Roles Data curation, Investigation, Resources

    Affiliation Department of Critical Care, D’Or Institute for Research and Education, Rio de Janeiro, RJ, Brazil

  • Aline Ramos da Silva,

    Roles Data curation, Formal analysis, Validation

    Affiliation Department of Critical Care, D’Or Institute for Research and Education, Rio de Janeiro, RJ, Brazil

  • Fernando Augusto Bozza,

    Roles Conceptualization, Methodology, Supervision, Writing – review & editing

    Affiliations Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil, Department of Critical Care, D’Or Institute for Research and Education, Rio de Janeiro, RJ, Brazil

  • Jesus Soares,

    Roles Supervision, Writing – review & editing

    Affiliation Division of High-Consequence Pathology and Pathogens, National Center for Emerging and Zoonotic Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA, United States of America

  • Ermias D. Belay,

    Roles Funding acquisition, Resources, Supervision, Writing – review & editing

    Affiliation Division of High-Consequence Pathology and Pathogens, National Center for Emerging and Zoonotic Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA, United States of America

  • James J. Sejvar,

    Roles Conceptualization, Writing – review & editing

    Affiliation Division of High-Consequence Pathology and Pathogens, National Center for Emerging and Zoonotic Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA, United States of America

  • José Cerbino-Neto,

    Roles Funding acquisition, Methodology, Project administration, Resources, Visualization, Writing – review & editing

    Affiliation Immunization and Health Surveillance Research Laboratory, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

  • André Miguel Japiassú

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Project administration, Supervision, Visualization, Writing – review & editing

    Affiliation Intensive Care Unit, Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, RJ, Brazil

Abstract

Background

Central nervous system infections (CNSI) are diseases with high morbidity and mortality, and their diagnosis in the intensive care environment can be challenging. Objective: To develop and validate a diagnostic model to quickly screen intensive care patients with suspected CNSI using readily available clinical data.

Methods

Derivation cohort: 783 patients admitted for any reason to an infectious diseases intensive care unit (ICU) at the Oswaldo Cruz Foundation, Rio de Janeiro, RJ, Brazil, between 01/01/2012 and 06/30/2019, with 97 (12.4%) CNSI cases. Validation cohort 1: 163 patients prospectively collected from the same ICU between 07/01/2019 and 07/01/2020, with 15 (9.2%) CNSI cases. Validation cohort 2: 7,270 patients, with 88 (1.21%) CNSI cases, admitted to a neuro ICU in Chicago, IL, USA between 01/01/2014 and 06/30/2019. Prediction model: Multivariate logistic regression analysis was performed to construct the model, and receiver operating characteristic (ROC) curve analysis was used for model validation. Eight predictors were included in the diagnostic model (P<0.05): age <56 years, cerebrospinal fluid white blood cell count >2 cells/mm3, fever (≥38°C/100.4°F), focal neurologic deficit, encephalopathy, Glasgow Coma Scale <14 points, AIDS/HIV, and seizure.

Results

The model applied to the pooled data had an area under the receiver operating characteristic curve (AUC) of 0.892 (95% confidence interval 0.864–0.921, P<0.0001).

Conclusions

A promising and straightforward screening tool for central nervous system infections, with few and readily available clinical variables, was developed and had good accuracy, with internal and external validity.

Introduction

Infectious diseases with significant public health impact due to potential severity, such as encephalitis and hemorrhagic fever, are a challenge for health systems and health authorities worldwide [1]. Thus, Intensive Care Units (ICUs) can be an essential target for establishing sentinel syndromic surveillance, optimizing resources by precisely focusing on new diseases with tremendous potential for severe morbidity and mortality [2].

Among undiagnosed severe infectious illnesses, encephalitis may be considered a hallmark disease [3]. It is a severe clinical manifestation associated with many autoimmune and infectious diseases, including recently identified emerging and reemerging pathogens [4–6]. Moreover, its true incidence is difficult to determine because many cases are unreported, the diagnosis may not be considered, or a specific infectious etiology may never be confirmed [6–8].

Robertson et al. [9] conducted a systematic literature review and meta-analysis of 154 studies of Central Nervous System Infections (CNSI) published between 1990 and 2016, 71 of them with incidence data. A total sample size of 130,681,681 individuals with 508,078 cases across all studies was included, with a global prevalence of 0.4%.

Encephalitis occurs worldwide, with an incidence of 3.5 to 7.4 per 100,000 patient-years. Some etiologies have a global distribution (e.g., herpesviruses), while others are geographically restricted (e.g., arboviruses) [4, 10]. Other CNSI, including meningitis and brain abscess, are more common: reported hospitalization and ICU admission rates range from 1% to 4.5%. Brain abscess accounts for approximately 8% of intracranial masses in developing countries and 1% to 2% in Western countries, with around four cases per million population [11–14].

Generalizing is more challenging for encephalitis, as few population-based studies exist: many possible pathogens are implicated, most cases are not reported to health authorities, and in most cases a cause is never found [15].

We aimed to develop and validate a diagnostic model that allows quick screening of patients suspected of having CNSI, including encephalitis, using readily available clinical data. Its simplicity could enable application at the individual level as well as population-level screening, including in large databases. A diagnostic prediction model for delirium in adult ICU patients has been developed previously [16], but to our knowledge the model described here is the first intended for monitoring CNSI in ICUs.

Materials and methods

This multivariate diagnostic model was developed and validated following the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement [17]. The checklist is provided in S1 File.

Ethics statement

The study was approved by the local Institutional Review Board (CAAE 16876819.9.0000.5262), which waived the need for informed consent, as the data were analyzed anonymously. No interventions were carried out, and data collection was not burdensome to patients. This report’s findings and conclusions are those of the authors and do not necessarily represent the Centers for Disease Control and Prevention’s official position.

Data collection and potential predictive variables

We first performed an observational retrospective cohort study of patients admitted between January 1st, 2012, and June 30th, 2019, in the 4-bed ICU of a 25-bed hospital located at Evandro Chagas National Institute of Infectious Diseases (INI), Oswaldo Cruz Foundation (Fiocruz), Rio de Janeiro, Brazil.

We reviewed the medical records of all 869 consecutive patients admitted to the ICU for any reason during the data collection period. After excluding readmissions (n = 80) and patients with critical data missing from the medical records (n = 6), 783 patients were included in the development cohort (DC).

Potential predictive variables were those known to be associated with CNSI, its severity, and its outcome, and readily available in emergency departments or ICUs [18] for calculating predictive scoring systems such as the Simplified Acute Physiology Score (SAPS) 3 [19]. The following were collected within the first 24 h of ICU admission:

  1. Age, sex, dates of hospital and ICU admission and discharge, the patient outcome at discharge (alive/dead).
  2. Clinical and laboratory data: SAPS 3 and Sequential Organ Failure Assessment (SOFA) [20] prognostic scores and the lowest Glasgow Coma Scale (GCS) [21]; fever ≥ 38°C (100.4°F) within the 72h before or after the presentation, AIDS/HIV (Acquired Immunodeficiency Syndrome / Human Immunodeficiency Virus) infection status.
  3. Neurologic signs/symptoms: cerebrospinal fluid (CSF) white blood cell count (WBC) per mm3, and the syndromes defined by the SAPS 3 score [22] and by Venkatesan et al.: encephalopathy (altered mental status, defined as a decreased or altered level of consciousness, vigilance disturbance, confusion, disorientation, behavioral change, or other cognitive impairment, lasting ≥24 h with no alternative cause identified); new-onset focal neurologic signs (hemiplegia, paraplegia, tetraplegia); and generalized or partial seizures not entirely attributable to a preexisting seizure disorder.

Imputation was considered when less than 20% of a variable’s values were missing. We used logistic regression to impute binary variables and predictive mean matching to impute numeric variables.
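As an illustration only, and not the authors’ actual pipeline, the sketch below shows one way such single imputation could be performed in Python: a logistic regression fitted on complete rows for a binary variable, and a simple predictive-mean-matching step for a numeric variable. The data frame and column names are hypothetical, and the predictor columns are assumed to be complete.

```python
# Illustrative sketch (hypothetical data, not the authors' code): single
# imputation when <20% of a variable's values are missing, as described above.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression


def impute_binary(df, target, predictors):
    """Impute a 0/1 column using a logistic regression fitted on complete rows."""
    known = df[target].notna()
    model = LogisticRegression(max_iter=1000).fit(df.loc[known, predictors],
                                                  df.loc[known, target].astype(int))
    df.loc[~known, target] = model.predict(df.loc[~known, predictors])
    return df


def impute_numeric_pmm(df, target, predictors):
    """Predictive mean matching: each missing value receives the observed value
    whose regression prediction is closest to the prediction for that row."""
    known = df[target].notna()
    model = LinearRegression().fit(df.loc[known, predictors], df.loc[known, target])
    pred_known = model.predict(df.loc[known, predictors])
    donors = df.loc[known, target].to_numpy()
    for idx in df.loc[~known].index:
        pred_miss = model.predict(df.loc[[idx], predictors])[0]
        df.loc[idx, target] = donors[np.argmin(np.abs(pred_known - pred_miss))]
    return df
```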

Outcomes

Central nervous system infection was defined as any case of the following diseases, diagnosed between 48 h before and five days after ICU admission:

  • Cerebral abscess or suppurative intracranial infections: Symptoms of a mass lesion, seizures, signs of focal deficit, and cerebral lesion documented by neuroimaging (magnetic resonance imaging/computed tomography) or anatomical evidence.
  • Encephalitis: Involvement of the brain parenchyma by an infectious agent inducing neurological symptoms, documented by CSF abnormalities, serology, isolation of the causal agent, or neuroimaging. The criteria for encephalitis diagnosis were those defined by Venkatesan et al. and are shown in S1 Table in S2 File.
  • Meningitis: Patients not meeting criteria for encephalitis but with symptoms of the meningeal syndrome (headache, fever, irritability, and stiff neck, with or without focal neurological signs) and a positive CSF culture, CSF abnormalities compatible with meningitis, serology, isolation of the causal agent, or compatible neuroimaging.

Two physicians (HBA and JHN) independently reviewed the medical records. The diagnosis of CNSI was considered if it met at least two of the following criteria: clinical syndrome, neuroimaging, CSF analysis, and microbiological exams (blood and CSF cultures, serologies). All patients underwent computed tomography. One-third of the patients in the DC and VC1 could not undergo lumbar puncture because of formal contraindications to the procedure (all of them had brain abscesses). Those with a laboratory diagnosis of CNSI but no symptoms were classified as asymptomatic CNSI.

Statistical analysis

Statistical analyses were performed and figures created using MedCalc®, version 19.3, for Microsoft Windows®. Categorical variables were expressed as absolute numbers and percentages in each category and analyzed with the chi-square and Fisher’s exact tests. Continuous variables were expressed as medians with interquartile ranges (IQR) and analyzed with the Mann–Whitney U-test. A p-value <0.05 was considered significant for all tests, and 95% confidence intervals (CI) are reported.

Predictor selection and model construction

Sixteen variables were analyzed, and those associated with the outcome (p<0.05) were entered into a Least Absolute Shrinkage and Selection Operator (LASSO) regression to minimize potential collinearity among variables, as shown in S2–S4 Tables in S2 File. This approach refined the set of predictors entering the final multivariate logistic regression model [23].

Values were missing in the DC for body temperature (1%), encephalopathy (2%), and the Glasgow Coma Scale score (1%); data for all other variables were complete. Before regression, continuous variables were converted to categorical variables at the optimal cutoff point, where Youden’s index is maximal. Variables identified by the LASSO regression were then entered into multivariate logistic regression models, and those that were statistically significant were used to construct the diagnostic model.
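As an illustration under assumed inputs (a binary outcome vector y and a candidate-predictor matrix X), the sketch below shows the two steps just described: dichotomizing a continuous variable at the Youden-optimal cutoff and screening predictors with an L1-penalized (LASSO) logistic regression. It is a sketch, not the MedCalc workflow actually used.

```python
# Illustrative sketch (assumed inputs, not the authors' MedCalc workflow):
# Youden-optimal dichotomization and LASSO screening of candidate predictors.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_curve


def youden_cutoff(y, x_continuous):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Assumes higher values of x_continuous indicate higher risk; negate the
    variable (e.g., age) when lower values carry the risk."""
    fpr, tpr, thresholds = roc_curve(y, x_continuous)
    return thresholds[np.argmax(tpr - fpr)]


def lasso_screen(X, y, feature_names):
    """Keep the features with nonzero coefficients in a cross-validated
    L1-penalized logistic regression."""
    model = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(X, y)
    kept = np.flatnonzero(model.coef_.ravel())
    return [feature_names[i] for i in kept]
```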

We used bootstrapping to adjust for overly optimistic (overfitted) estimates of the predictors’ regression coefficients in the final model: one thousand random bootstrap samples yielded shrunken regression coefficients [24]. Finally, the model was updated using the calibration slopes of the regression lines for the cohorts.
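A minimal sketch of one standard way to obtain bootstrap-shrunken coefficients is shown below: uniform shrinkage of the fitted coefficients by the average bootstrap calibration slope. This is an assumed implementation for illustration; the authors’ exact procedure may differ in detail.

```python
# Illustrative sketch (an assumed implementation): uniform shrinkage of
# logistic-regression coefficients by the mean bootstrap calibration slope.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample


def bootstrap_shrinkage(X, y, n_boot=1000, seed=0):
    unpenalized = dict(C=1e6, max_iter=1000)   # very large C ~ no regularization
    base = LogisticRegression(**unpenalized).fit(X, y)
    slopes = []
    for b in range(n_boot):
        Xb, yb = resample(X, y, random_state=seed + b)
        boot = LogisticRegression(**unpenalized).fit(Xb, yb)
        lp = boot.decision_function(X)          # linear predictor on the original data
        # Calibration slope: coefficient of lp when the original outcome is
        # regressed on the bootstrap model's linear predictor.
        slope = LogisticRegression(**unpenalized).fit(lp.reshape(-1, 1), y).coef_[0, 0]
        slopes.append(slope)
    shrinkage = float(np.mean(slopes))
    return base.coef_.ravel() * shrinkage, shrinkage
```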

Assessment of accuracy

The model’s ability to discriminate between patients with and without central nervous system infection was quantified by diagnostic accuracy measures, such as sensitivity, specificity, predictive values, likelihood ratios, and the area under the receiver operating characteristic (ROC) curve (AUC). The minimum sample size calculated for the AUC was 96 patients (12 positive and 84 negative cases) for the following parameters: alpha 0.05, beta 0.2, minimal AUC of 0.9, and null hypothesis of 0.5 (no discriminating power).

Model validation

To validate the generalizability of the algorithm, we used two cohorts:

  • Internal validation cohort (VC1): a prospective cohort of patients admitted to the INI ICU between July 1st, 2019, and July 1st, 2020. Of the 177 patients reviewed, ten readmissions and four records with critical missing data were excluded, leaving 163 patients, of whom 15 (9.2%) had CNSI; one case (6.6%) was classified as asymptomatic. Data for all variables were complete.
  • External validation cohort (VC2): a retrospective cohort of 7,270 patients admitted between January 1st, 2014, and June 30th, 2019, to the neuro ICU of Rush University Medical Center, Chicago, IL, USA; 88 (1.2%) had CNSI, of whom 18 (20.45%) were classified as asymptomatic.

The variables required to apply the model to VC2 were collected and cross-checked by two of the authors (IRFDS and JLS). Cases of CNSI were defined by the same criteria described for the development cohort. Data on AIDS/HIV status were missing for 40% of the patients; all other variables had less than 20% missing data.

Results

Characteristics and outcomes from the development and validation cohorts

Table 1 summarizes the demographic and clinical characteristics of the DC, VC1, and VC2. The detailed profile of the 783 patients from the DC, with 97 (12.38%) cases of CNSI, 9 (9.28%) of them asymptomatic, is shown in S2 and S3 Tables in S2 File. The DC variables associated with the outcome (p<0.05) were selected for the LASSO regression, as shown in S4 Table in S2 File. S5 Table in S2 File shows the overall microbiological profile of the cohorts.

Table 1. Demographic and clinical characteristics of the development (DC) and validation cohorts (VC1 and VC2).

https://doi.org/10.1371/journal.pone.0260551.t001

There were significant differences (P<0.05) between the DC and VC1 in median age, prevalence of AIDS/HIV, median SOFA score, and ICU/hospital mortality, but no significant difference (P>0.05) in the prevalence of CNSI or asymptomatic CNSI or in the median SAPS 3 score. These differences have a clinical explanation: severe coronavirus disease 2019 (COVID-19), caused by SARS-CoV-2, was the reason for ICU admission of 61/163 patients (37.42%) in VC1, with no CNSI cases among them.

VC2 differed from the INI cohorts in all characteristics (p<0.05). The most remarkable differences between the DC and VC2 were the prevalences of CNSI, asymptomatic CNSI, AIDS/HIV, and surgical patients, as expected for a neurointensive care unit.

Diagnostic model, calibration slopes and recalibration.

Table 2 shows the results of the multiple logistic regression: AIDS/HIV, age <56 years, CSF WBC >2 cells/mm3, fever (body temperature ≥38°C), focal neurological deficit, encephalopathy, GCS <14 points, and seizures were predictors independently associated with a diagnosis of central nervous system infection (p<0.05).

Table 2. Multiple logistic regression final model and calibrated regression coefficient derived from the development cohort (DC).

https://doi.org/10.1371/journal.pone.0260551.t002

S1 Fig in S2 File shows the linear regression lines for each cohort and their calibration slopes. The model overestimated the risk of CNSI in VC2 by about 40% relative to the DC’s risk estimation. The coefficients of the individual predictors were therefore updated to recalibrate the model [25]: each predictor’s coefficient was multiplied by the estimated calibration slope (0.5981), the estimate of α’ (1.341, the intercept of the calibration slope model) was added to the original intercept, and an additional correction coefficient of 0.1 adjusted the model to the local prevalence of the disease [26, 27].

Fig 1 compares the regression lines after recalibration: there were no significant differences between the slopes (0.01257, P = 0.8790) or the intercepts (-0.007181, P = 0.7156) of DC vs VC1, nor between the slopes (-0.01443, P = 0.8339) or the intercepts (-0.003056, P = 0.8338) of DC vs VC2. The model regression equation was y = 0.00008489 (-0.01473 to 0.01490, P = 0.9910) + 0.9954 (0.9377 to 1.0531, P<0.0001) x, with a coefficient of determination (R2) of 0.42176 and a residual standard deviation of 0.2548, suggesting a well-calibrated final model.

Fig 1. Final updated calibration slope for the DC, VC1 and VC2.

DC: development cohort. VC1: internal validation cohort. VC2: external validation cohort. Solid line DC: y = -0.002583 (CI -0.02065 to 0.01548; P = 0.7790) + 1.0013 (0.9343 to 1.0683; P<0.0001) x. Dashed line VC1: y = 0.005774 (-0.02433 to 0.03588; P = 0.7054) + 0.9887 (0.8655 to 1.1119; P<0.0001) x. Dotted line VC2: y = 0.002511 (-0.02846 to 0.03348; P = 0.8735) + 0.9868 (0.8590 to 1.1146) x.

https://doi.org/10.1371/journal.pone.0260551.g001

The formula for CNSI probability.

Estimated probability of central nervous system infection = 1 / {1 + exp[−(−4.4 + 0.273 × “AIDS/HIV” + 0.9774 × “Age <56 years old” + 0.6192 × “Fever (T ≥38°C)” + 0.6588 × “Encephalopathy” + 0.912 × “Glasgow Coma Scale <14 points” + 1.532 × “Focal Neurologic Deficit” + 0.897 × “Seizures” + 2.701 × “CSF WBC >2 cells/mm3” + 0.1 × “Local CNSI prevalence in %”)]}.

The presence or absence of each predictor is entered as 1 or 0 in the formula. The local prevalence must be entered as a percentage (e.g., 12 for 12%). If the local prevalence is unknown, the field can be left blank.

For example, an HIV-negative 70-year-old person with fever and encephalopathy in a low-prevalence setting (1%) has a 4% probability of CNSI. On the other hand, a 40-year-old HIV-positive patient with fever, hemiplegia, and seizures in a high-prevalence setting (10%) has a 71% risk of CNSI. The CNSI probability calculator is provided in S3 File.
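For convenience, the published equation can be expressed as a short function. The sketch below simply mirrors the coefficients given above; the spreadsheet calculator in S3 File remains the reference implementation.

```python
# The published regression equation expressed as a function; coefficients are
# taken verbatim from the formula above.  Predictors are 0/1; local prevalence
# is entered as a percentage (use 0 if unknown).
import math


def cnsi_probability(aids_hiv, age_lt_56, fever, encephalopathy, gcs_lt_14,
                     focal_deficit, seizures, csf_wbc_gt_2, prevalence_pct=0.0):
    z = (-4.4
         + 0.273 * aids_hiv
         + 0.9774 * age_lt_56
         + 0.6192 * fever
         + 0.6588 * encephalopathy
         + 0.912 * gcs_lt_14
         + 1.532 * focal_deficit
         + 0.897 * seizures
         + 2.701 * csf_wbc_gt_2
         + 0.1 * prevalence_pct)
    return 1.0 / (1.0 + math.exp(-z))


# Second example from the text: 40-year-old HIV-positive patient with fever,
# hemiplegia, and seizures in a high-prevalence (10%) setting.
print(round(cnsi_probability(aids_hiv=1, age_lt_56=1, fever=1, encephalopathy=0,
                             gcs_lt_14=0, focal_deficit=1, seizures=1,
                             csf_wbc_gt_2=0, prevalence_pct=10), 2))
```

Running the script reproduces the second worked example above (a predicted probability of about 0.71).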

ROC curve analysis.

Fig 2 compares the ROC curves for the development and validation cohorts. The DC showed an AUC of 0.939 (CI 0.903 to 0.959, p<0.0001) and VC1 an AUC of 0.978 (CI 0.945 to 0.994, p<0.0001), a small but significant difference between areas (0.0398, P = 0.0192). VC2 presented an AUC of 0.840 (CI 0.802–0.870, P<0.0001), significantly different from the DC’s (0.108, P = 0.0004), as expected for external validation. The recalibration did not change the ranking of the predicted risks, so the AUC was unaltered. The pooled-data AUC was 0.892 (0.864–0.921, P<0.0001).

Fig 2. ROC curves for DC, VC 1 and VC2.

AUC: area under the ROC curve. DC: development cohort. VC1: internal validation cohort. VC2: external validation cohort. ROC: Receiver operating characteristic. Solid line DC: AUC of 0.939 (CI 0.903 to 0.959, p<0.0001). Dashed line VC1: AUC of 0.978 (CI 0.945 to 0.994, p<0.0001). Dotted line VC2: AUC of 0.840 (CI 0.802–0.870, P<0.0001).

https://doi.org/10.1371/journal.pone.0260551.g002

As a hospital specialized in infectious diseases, INI has an AIDS/HIV prevalence at least 100 times that of the general Brazilian population [28]. Fig 3 shows a sensitivity analysis accounting for the weight of the HIV population in the DC: HIV-negative patients had an AUC of 0.945 (CI 0.920–0.964, P<0.0001), not significantly different (0.0238, P = 0.4041) from that of HIV-positive patients (AUC 0.921, CI 0.893 to 0.944, P<0.0001).

Fig 3. ROC curves for HIV vs. non-HIV patients in DC.

AUC: area under the ROC curve. DC: development cohort. ROC: Receiver operating characteristic. HIV: human immunodeficiency virus. Solid line Non-HIV: AUC of 0.945 (CI 0.920–0.964, P<0.0001). Dashed line HIV: AUC of 0.921 (CI 0.893 to 0.944, P<0.0001). Difference between areas: 0.0238, P = 0.4041.

https://doi.org/10.1371/journal.pone.0260551.g003

Measures of diagnostic accuracy of the model.

Table 3 shows the sensitivity, specificity, and likelihood ratios for each risk group in the model: low (0–10%), medium (possible CNSI, >10–50%), and high probability (probable CNSI, >50%). The optimal cutoff point was >0.1032 (about 10%), with a sensitivity of 88.69%, a specificity of 85.57%, a positive likelihood ratio of 6.21, and a negative likelihood ratio of 0.12.
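As a brief illustration of how the measures reported in Table 3 are derived, the sketch below computes sensitivity, specificity, and likelihood ratios at a chosen probability cutoff from true CNSI labels and the model’s predicted probabilities; the inputs are hypothetical placeholders.

```python
# Sketch: diagnostic accuracy measures at a probability cutoff, computed from
# true CNSI labels and predicted probabilities (hypothetical inputs).
import numpy as np


def accuracy_at_cutoff(y_true, y_prob, cutoff=0.1032):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_prob) > cutoff
    tp = np.sum(y_pred & y_true)
    fn = np.sum(~y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "positive LR": sensitivity / (1 - specificity),
            "negative LR": (1 - sensitivity) / specificity}
```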

Table 3. Measures of diagnostic accuracy, risk groups, and cutoff points of the ROC curve of the diagnostic model for central nervous system infections (CNSI).

https://doi.org/10.1371/journal.pone.0260551.t003

Discussion

We developed and validated a predictive model to aid in diagnosing central nervous system infections in ICU patients. To our knowledge, it is the first of its kind for general intensive care patients. The model reliably predicted these infections based on seven readily available clinical variables at admission plus one laboratory variable. The clinical variables are widely used for calculating other prognostic scores, such as SAPS 3, and the additional laboratory variable (CSF WBC count) helps identify asymptomatic infections.

The DC and VC1 came from a referral center for infectious diseases, including AIDS/HIV, in the second-largest Brazilian urban center (Rio de Janeiro). This explains not only the prevalence of CNSI (12.4%), at least twice that of other Brazilian general-hospital ICUs (1–5%) [13], but also the prevalence of AIDS/HIV (54.7%), thirty times that of Brazilian hospitals (1.8%) [28]. However, the prevalence of CNSI among our HIV-negative critical patients (5%) was similar to that of other general medical ICUs [28].

A CSF WBC count ≥5 cells/mm3 is one of the minor criteria for encephalitis diagnosis (S1 Table in S2 File). However, the CSF may be devoid of cells in immunocompromised patients [29] or early in the course of infection [30], which does not exclude encephalitis. The proposed algorithm therefore uses a more sensitive cutoff of CSF WBC >2 cells/mm3.

The microbiological profile is compatible with the current international literature: in a multicenter international study of the burden of community-acquired CNSI, Erdem et al. showed that the most frequent pathogens were Streptococcus pneumoniae (n = 206; 8%) and Mycobacterium tuberculosis (n = 152; 5.9%), and cryptococci were the leading pathogens in the subgroup of HIV-positive individuals. Ninety-six (8.9%) patients in INI’s sample presented with clinical features of a subacute disease, suggestive of tuberculosis or neurosyphilis [31].

Clinical relevance

Our findings suggest that the model may have great value in daily practice to help screen patients at higher risk (>10%), as it would identify 181/200 (90.5%) of CNSI cases, including 28/200 (14%) asymptomatic ones. The 19 cases classified as false negatives were the following:

  • Postoperative subarachnoid or intracranial hemorrhage, with an external ventricular drain and secondary infection (6 patients).
  • Patients with missing data (5 patients).
  • CNSI in patients with previous neurological disease and poorly characterized new symptoms in the medical record, classified as asymptomatic CNSI (2 patients).
  • Neurosyphilis: asymptomatic infection, with ICU admission for other causes and CSF examination performed for unrelated reasons (2 patients).
  • Asymptomatic neurocryptococcosis: admitted for other infections; a positive blood antigen test led to CSF analysis and neuroimaging (4 patients).

The overall prevalence of CNSI was 13.33%. In comparison, the prevalence estimated by the model would range from 15% (in a low-prevalence setting, 1–2%) to 25% (in a high-prevalence setting, 10%), as expected for a 10% cutoff in a model designed as a screening tool. The neurointensive care unit had more false-positive cases, as neurological symptoms were more common there.

Only 80/200 (40%) of all patients with CNSI had that suspicion at ICU admission; hence, initial clinical suspicion is not reliable, even in specialized institutions. Among patients presenting with clinical suspicion of meningitis to the emergency department of a single United States hospital and who underwent lumbar puncture, the prevalence of meningitis (defined as a cerebrospinal fluid white blood cell count ≥5/mL) was 27%, and the three classic meningeal signs (Kernig’s sign, Brudzinski’s sign, and nuchal rigidity) had no diagnostic value in this broad spectrum of adults with suspected meningitis [32]; better bedside diagnostic tools are therefore needed.

An estimated probability of CNSI lower than 10% makes the diagnosis improbable, so investigation of other diagnoses should be prioritized. A risk greater than 10% indicates that imaging and, when possible, diagnostic lumbar puncture should be considered. Finally, a probability greater than 50% suggests that complementary exams are mandatory (or should be repeated if the diagnosis remains unclear) and that empirical treatment should also be considered.

The model’s variables can also be used for CNSI screening in large health system databases, provided the necessary variables are recorded, which could serve as a sentinel surveillance tool for encephalitis and other CNSI. Finally, the model could be used to calculate the pre-test probability of CNSI before other diagnostic tests, allowing earlier diagnosis and ensuring efficient use of research and diagnostic resources.

Limitations of the study

This study has limitations. Medical records may be more complete in research institutions than in institutions that are not research-driven; for this reason, we selected as few variables as possible, all present in almost every record in the database.

Retrospective studies have limitations and specific risks of bias, so a prospective cohort was used for internal validation to reduce those biases. The high proportion of patients with AIDS/HIV, the high prevalence of CNSI, and the very low prevalence of surgical patients in the DC can influence the external validity of the tool, as well as its calibration; VC2 was therefore included to lessen those problems.

The calibration and cutoff points should be validated in other settings, such as emergency rooms, general/mixed ICUs, general wards, and even outpatient care. The model performed worse in surgical patients and, naturally, in asymptomatic infections, whose diagnosis depends heavily on laboratory data; the inclusion of the CSF WBC count in the model lessens that limitation.

Encephalopathy and GCS are correlated variables, which could influence the accuracy of the model; however, both are commonly missing from medical records. For that reason, SAPS 3 uses both: the first as a more subjective criterion (quality of mental status) and the second as an objective one (a quantitative measure of consciousness). Moreover, neither the LASSO regression nor the bootstrapping supported excluding either of them from the final model.

Conclusions

A promising and straightforward screening tool for central nervous system infections, with few and readily available clinical variables, was developed and had good accuracy, with internal and external validity.

Future research is needed to validate this tool in other settings. It could provide a cost-effective means of identifying these cases and lead to more timely diagnosis and treatment in the intensive care setting.

Supporting information

S2 File. Supporting data for "central nervous system infection in the intensive care unit: Development of a multi-parameter diagnostic prediction tool to identify suspected patients".

https://doi.org/10.1371/journal.pone.0260551.s002

(DOCX)

S3 File. Central nervous system infection probability calculator.

https://doi.org/10.1371/journal.pone.0260551.s003

(XLSX)

References

  1. Hajjeh RA, Relman D, Cieslak PR, Sofair AN, Passaro D, Flood J, et al. Surveillance for unexplained deaths and critical illnesses due to possibly infectious causes, United States, 1995–1998. Emerg Infect Dis. 2002;8: 145–153. pmid:11897065
  2. Norton S, Cordery DV, Abbenbroek BJ, Ryan AC, Muscatello DJ. Towards public health surveillance of intensive care services in NSW, Australia. Public Health Res Pract. 2016;26. pmid:27421345
  3. Venkatesan A, Tunkel AR, Bloch KC, Lauring AS, Sejvar J, Bitnun A, et al. Case definitions, diagnostic algorithms, and priorities in encephalitis: consensus statement of the international encephalitis consortium. Clin Infect Dis. 2013;57: 1114–1128. pmid:23861361
  4. Whitley RJ. Viral encephalitis. N Engl J Med. 1990;323: 242–250. pmid:2195341
  5. Connolly KJ, Hammer SM. The acute aseptic meningitis syndrome. Infect Dis Clin North Am. 1990;4: 599–622. Available: https://www.ncbi.nlm.nih.gov/pubmed/2277191 pmid:2277191
  6. Tyler KL. Emerging viral infections of the central nervous system: part 1. Arch Neurol. 2009;66: 939–948. pmid:19667214
  7. Silva GS, Richards GA, Baker T, Amin PR, Council of the World Federation of Societies of Intensive and Critical Care Medicine. Encephalitis and myelitis in tropical countries: Report from the Task Force on Tropical Diseases by the World Federation of Societies of Intensive and Critical Care Medicine. J Crit Care. 2017;42: 355–359. pmid:29157660
  8. Ferreira JE, Ferreira SC, Almeida-Neto C, Nishiya AS, Alencar CS, Gouveia GR, et al. Molecular characterization of viruses associated with encephalitis in São Paulo, Brazil. PLoS One. 2019;14: e0209993. pmid:30640927
  9. Robertson FC, Lepard JR, Mekary RA, Davis MC, Yunusa I, Gormley WB, et al. Epidemiology of central nervous system infectious diseases: a meta-analysis and systematic review with implications for neurosurgeons worldwide. J Neurosurg. 2018; 1–20. pmid:29905514
  10. Granerod J, Crowcroft NS. The epidemiology of acute encephalitis. Neuropsychol Rehabil. 2007;17: 406–428. pmid:17676528
  11. Vincent J-L, Rello J, Marshall J, Silva E, Anzueto A, Martin CD, et al. International study of the prevalence and outcomes of infection in intensive care units. JAMA. 2009;302: 2323–2329. pmid:19952319
  12. Marchiori PE, Lino AMM, Machado LR, Pedalini LM, Boulos M, Scaff M. Neuroinfection survey at a neurological ward in a Brazilian tertiary teaching hospital. Clinics. 2011;66: 1021–1025. pmid:21808869
  13. Silva E, Dalfior Junior L, Fernandes H da S, Moreno R, Vincent J-L. Prevalência e desfechos clínicos de infecções em UTIs brasileiras: subanálise do estudo EPIC II [Prevalence and clinical outcomes of infections in Brazilian ICUs: a subanalysis of the EPIC II study]. Rev Bras Ter Intensiva. 2012;24: 143–150. pmid:23917761
  14. Boucher A, Herrmann JL, Morand P, Buzelé R, Crabol Y, Stahl JP, et al. Epidemiology of infectious encephalitis causes in 2016. Med Mal Infect. 2017;47: 221–235. pmid:28341533
  15. Roos KL. Encephalitis. Neurol Clin. 1999;17: 813–833. pmid:10517930
  16. van den Boogaard M, Pickkers P, Slooter AJC, Kuiper MA, Spronk PE, van der Voort PHJ, et al. Development and validation of PRE-DELIRIC (PREdiction of DELIRium in ICu patients) delirium prediction model for intensive care patients: observational multicentre study. BMJ. 2012;344: e420. pmid:22323509
  17. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD Statement. Br J Surg. 2015;102: 148–158. pmid:25627261
  18. Dorsett M, Liang SY. Diagnosis and Treatment of Central Nervous System Infections in the Emergency Department. Emerg Med Clin North Am. 2016;34: 917–942. pmid:27741995
  19. Metnitz PGH, Moreno RP, Almeida E, Jordan B, Bauer P, Campos RA, et al. SAPS 3—From evaluation of the patient to evaluation of the intensive care unit. Part 1: Objectives, methods, and cohort description. Intensive Care Med. 2005;31: 1336–1344. pmid:16132893
  20. Vincent J-L, de Mendonca A, Cantraine F, Moreno R, Takala J, Suter PM, et al. Use of the SOFA score to assess the incidence of organ dysfunction/failure in intensive care units: Results of a multicenter, prospective study. Crit Care Med. 1998;26: 1793. Available: https://journals.lww.com/ccmjournal/Fulltext/1998/11000/Use_of_the_SOFA_score_to_assess_the_incidence_of.16.aspx pmid:9824069
  21. Teasdale G, Maas A, Lecky F, Manley G, Stocchetti N, Murray G. The Glasgow Coma Scale at 40 years: standing the test of time. Lancet Neurol. 2014;13: 844–854. pmid:25030516
  22. Moreno RP, Metnitz PGH, Almeida E, Jordan B, Bauer P, Campos RA, et al. SAPS 3—From evaluation of the patient to evaluation of the intensive care unit. Part 2: Development of a prognostic model for hospital mortality at ICU admission. Intensive Care Med. 2005;31: 1345–1355. pmid:16132892
  23. Yue L, Li G, Lian H, Wan X. Regression adjustment for treatment effect with multicollinearity in high dimensions. Comput Stat Data Anal. 2019;134: 17–35.
  24. Austin PC, Tu JV. Bootstrap Methods for Developing Predictive Models. Am Stat. 2004;58: 131–137. Available: http://www.jstor.org/stable/27643521
  25. Van Calster B, McLernon DJ, van Smeden M, Wynants L, Steyerberg EW, Topic Group "Evaluating diagnostic tests and prediction models" of the STRATOS initiative. Calibration: the Achilles heel of predictive analytics. BMC Med. 2019;17: 230. pmid:31842878
  26. Janssen KJM, Moons KGM, Kalkman CJ, Grobbee DE, Vergouwe Y. Updating methods improved the performance of a clinical prediction model in new patients. J Clin Epidemiol. 2008;61: 76–86. pmid:18083464
  27. Janssen KJM, Vergouwe Y, Kalkman CJ, Grobbee DE, Moons KGM. A simple method to adjust clinical prediction models to local circumstances. Can J Anaesth. 2009;56: 194–201. pmid:19247740
  28. Szwarcwald CL. Estimation of the HIV Incidence and of the Number of People Living With HIV/AIDS in Brazil, 2012. J AIDS Clin Res. 2015;06.
  29. Fodor PA, Levin MJ, Weinberg A, Sandberg E, Sylman J, Tyler KL. Atypical herpes simplex virus encephalitis diagnosed by PCR amplification of viral DNA from CSF. Neurology. 1998;51: 554–559. pmid:9710034
  30. Jakob NJ, Lenhard T, Schnitzler P, Rohde S, Ringleb PA, Steiner T, et al. Herpes simplex virus encephalitis despite normal cell count in the cerebrospinal fluid. Crit Care Med. 2012;40: 1304–1308. pmid:22067626
  31. Erdem H, Inan A, Guven E, Hargreaves S, Larsen L, Shehata G, et al. The burden and epidemiology of community-acquired central nervous system infections: a multinational study. Eur J Clin Microbiol Infect Dis. 2017;36: 1595–1611. pmid:28397100
  32. Thomas KE, Hasbun R, Jekel J, Quagliarello VJ. The diagnostic accuracy of Kernig’s sign, Brudzinski’s sign, and nuchal rigidity in adults with suspected meningitis. Clin Infect Dis. 2002;35: 46–52. pmid:12060874