Article

Neuropsychological Predictors of Fatigue in Post-COVID Syndrome

by Jordi A. Matias-Guiu 1,*, Cristina Delgado-Alonso 1, María Díez-Cirarda 1, Álvaro Martínez-Petit 2, Silvia Oliver-Mas 1, Alfonso Delgado-Álvarez 1, Constanza Cuevas 1, María Valles-Salgado 1, María José Gil 1, Miguel Yus 3, Natividad Gómez-Ruiz 3, Carmen Polidura 3, Josué Pagán 2,4, Jorge Matías-Guiu 1 and José Luis Ayala 4,5

1 Department of Neurology, Hospital Clínico San Carlos, Health Research Institute “San Carlos” (IdISCC), Universidad Complutense de Madrid, 28040 Madrid, Spain
2 Department of Electronic Engineering, Universidad Politécnica de Madrid, 28040 Madrid, Spain
3 Department of Radiology, Hospital Clínico San Carlos, Health Research Institute “San Carlos” (IdISCC), Universidad Complutense de Madrid, 28040 Madrid, Spain
4 Center for Computational Simulation, Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, 28223 Madrid, Spain
5 Department of Computer Architecture and Automation, Faculty of Informatics, Universidad Complutense de Madrid, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
J. Clin. Med. 2022, 11(13), 3886; https://doi.org/10.3390/jcm11133886
Submission received: 21 June 2022 / Revised: 1 July 2022 / Accepted: 1 July 2022 / Published: 4 July 2022

Abstract

Fatigue is one of the most disabling symptoms in several neurological disorders and has an important cognitive component. However, the relationship between self-reported cognitive fatigue and objective cognitive assessment results remains elusive. Patients with post-COVID syndrome often report fatigue and cognitive issues several months after the acute infection. We aimed to develop predictive models of fatigue using neuropsychological assessments to evaluate the relationship between cognitive fatigue and objective neuropsychological assessment results. We conducted a cross-sectional study of 113 patients with post-COVID syndrome, assessing them with the Modified Fatigue Impact Scale (MFIS) and a comprehensive neuropsychological battery including standardized and computerized cognitive tests. Several machine learning algorithms were developed to predict MFIS scores (total score and cognitive fatigue score) based on neuropsychological test scores. MFIS showed moderate correlations only with the Stroop Color–Word Interference Test. Classification models obtained modest F1-scores for classification between fatigued and non-fatigued patients or among 3 or 4 degrees of fatigue severity. Regression models to estimate the MFIS score did not achieve adequate R2 metrics. Our study did not find reliable neuropsychological predictors of cognitive fatigue in the post-COVID syndrome. This has important implications for the interpretation of fatigue and cognitive assessment. Specifically, the MFIS cognitive domain may not properly capture actual cognitive fatigue. In addition, our findings suggest different pathophysiological mechanisms of fatigue and cognitive dysfunction in post-COVID syndrome.

1. Introduction

Fatigue is defined as a feeling of tiredness and lack of energy during physical and/or mental exertion that has an impact on everyday activities. Fatigue is one of the most common symptoms in several neurological and medical disorders and, importantly, is considered one of the most disabling symptoms [1]. Fatigue may be categorized as peripheral or central. Peripheral fatigue is due to muscle and neuromuscular junction disorders and is characterized by muscle fatigability (i.e., objective reduction in strength during effort, improving with rest). Central fatigue may be present in peripheral, autonomic, and central nervous system disorders, and involves a subjective feeling of exhaustion that is also present at rest [2]. Interestingly, central fatigue usually also has a cognitive component (mental or cognitive fatigue). Cognitive fatigue refers to a decrease in mental effort in demanding cognitive tasks [3] and may be as disabling as physical fatigue.
Fatigue is usually examined using self-report questionnaires [4]. One of the most widely used scales is the Modified Fatigue Impact Scale (MFIS), which includes a multidimensional assessment of physical, cognitive, and psychosocial aspects of fatigue [5]. The relationship between cognitive fatigue and results from objective neuropsychological assessments is still controversial. For instance, in multiple sclerosis, some studies have found a relationship between fatigue and attention/executive functioning [6]. Specifically, sustained attention seems to be more closely related to fatigue. In addition, both fatigue and sustained attention deficits have similar neuroanatomical underpinnings: both processes have been associated with dysfunction of the frontoparietal network in structural and functional brain imaging studies. However, very few studies have directly evaluated the relationship between fatigue and cognitive performance [7]. In multiple sclerosis, sleep quality was the best predictor of cognitive fatigue as evaluated with the MFIS, while cognitive function (assessed with the Paced Auditory Serial Addition Test or the Symbol Digit Modalities Test) had lower or non-significant importance in the prediction [8,9].
Post-COVID syndrome (PCS) is a new condition occurring in individuals with a history of SARS-CoV-2 infection, in which several symptoms persist over time [10]. Among symptoms of PCS, fatigue is one of the most frequent and most disabling [11], and is usually persistent [12]. Cognitive symptoms are also very frequent, and neuropsychological examinations have revealed predominant impairment of attention and executive function [13,14]. From the perspective of cognitive neuroscience, PCS represents a new opportunity to evaluate the relationship between fatigue and cognitive function, and the neural underpinnings of cognitive fatigue. From a more clinical approach, understanding the relationship between fatigue and cognitive function has several implications. Specifically, it could help to improve our understanding of the mechanisms linked to mental fatigue and the concept of cognitive fatigue. Furthermore, it could guide the selection of cognitive tests for objective evaluation of fatigue, which may be important for the diagnosis and follow-up of these patients. The pathophysiology of fatigue in PCS is still poorly understood. According to the first studies showing neuroimaging alterations in several brain regions [15,16,17,18], fatigue may involve a central mechanism. In this regard, impairment of GABAB-ergic neurotransmission has been detected using transcranial magnetic stimulation of the motor cortex [19,20], and another study found an association between APOE4 and post-COVID fatigue [21]. Histopathological studies have been conducted in patients who died of COVID-19, showing vascular changes and prominent neuroinflammation [22,23]. Neuroinflammation could promote neurodegenerative changes [24].
Because most pathological studies have been conducted in patients with severe COVID-19 who died in the acute stage, it is unknown whether some of these or other mechanisms could be involved in the pathophysiology of PCS, which often develops even after mild acute infections [25]. Furthermore, persistent immunological changes, viral reservoirs, autonomic failure, or even mitochondrial dysfunction may also play a role [26,27,28,29]. It is also unclear whether physical and cognitive fatigue share the same mechanisms.
In this study, we aimed to develop predictive models of fatigue using neuropsychological assessments in PCS. Specifically, we sought the following contributions:
(1)
To train several machine-learning algorithms using a dataset comprising a wide range of traditional “paper and pencil” and computerized neuropsychological assessments administered to a cohort of patients with PCS.
(2)
To train these models to predict the presence of fatigue, several levels of fatigue severity, and the fatigue score of a perceived fatigue questionnaire.
(3)
To use a data-driven approach to evaluate the existence of linear and non-linear relationships between cognitive assessment results and subjective fatigue.

2. Materials and Methods

2.1. Participants

One hundred and thirteen patients with PCS according to the World Health Organization criteria [30] were included in the study. All patients reported new-onset cognitive complaints after COVID-19. The mean age was 50.94 ± 11.90 years old and 64.60% were women. The mean time between onset of the acute infection and assessment was 11.14 ± 4.67 months. SARS-CoV-2 infection was confirmed by reverse transcription-polymerase chain reaction in all cases, and other causes of the symptoms were excluded. Complete data are presented in Table 1.

2.2. Fatigue Assessment

Patients were assessed with the MFIS [31]. The MFIS contains 21 items related to cognitive (10 items, maximum score 40), physical (9 items, maximum score 36), and psychosocial (2 items, maximum score 8) aspects of fatigue. Each item is scored on a 5-point Likert-type scale from “never” (0 points) to “most of the time” (4 points). The maximum score is 84 [32]. A cut-off score of >38 has been proposed to classify patients as having significant fatigue [5].
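As a concrete illustration, the scoring scheme above can be sketched in Python. The item-to-subscale ordering used here is illustrative (the official MFIS answer key assigns specific item numbers to each subscale); the subscale maxima and the >38 cut-off follow the text.

```python
def score_mfis(items):
    """Score the MFIS from 21 Likert responses (each 0..4).

    Illustrative item grouping: first 10 items cognitive (max 40),
    next 9 physical (max 36), last 2 psychosocial (max 8).
    """
    assert len(items) == 21 and all(0 <= i <= 4 for i in items)
    cognitive = sum(items[:10])
    physical = sum(items[10:19])
    psychosocial = sum(items[19:])
    total = cognitive + physical + psychosocial  # maximum score: 84
    return {
        "cognitive": cognitive,
        "physical": physical,
        "psychosocial": psychosocial,
        "total": total,
        "fatigued": total > 38,  # cut-off proposed by Kos et al. [5]
    }
```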

2.3. Neuropsychological Assessment

Patients underwent a comprehensive neuropsychological assessment in 3 sessions lasting approximately 90 min each. Two different approaches were used. First, a trained neuropsychologist performed a standard neuropsychological assessment including the following tests:
  • Forward and backward digit span
  • Corsi block-tapping test
  • Symbol Digit Modalities Test
  • Boston Naming Test
  • Judgment of Line Orientation
  • Rey–Osterrieth Complex Figure (copy, recall at 3 and 30 min, and recognition)
  • Free and Cued Selective Reminding Test
  • Verbal fluencies (animals and words beginning with “p” and “m”; 1 min for each)
  • Stroop Color–Word Interference Test
  • Visual Object and Space Perception Battery.
For these tests, we obtained a raw score and derived an age- and education-adjusted scaled score following normative data from our setting [33,34].
Subsequently, patients were also assessed using the computerized neuropsychological battery Vienna Test System® (Schuhfried GmbH; Mödling, Austria) including the Cognitive Basic Assessment (COGBAT) and Perception and Attention Functions (WAF) batteries [35]. The COGBAT battery included the following tests:
  • Trail Making Test (Langensteinbach version), parts A and B (S1 form).
  • Figural Memory Test (S11 form)
  • Response inhibition (S13 form)
  • N-Back verbal (S1 form)
  • Tower of London (Freiburg version) (TOL, S1 form).
The WAF battery comprises 42 subtests: a total of 16 subtests for the alertness dimension, 8 for vigilance and sustained attention, 5 for divided attention, 3 for focused attention, 3 for selective attention, 3 for spatial attention, and 2 for smooth pursuit eye movements and visual scanning.
In addition, the Cognitrone (S11 form), Reaction test (RT, S3 form), and Determination test (DT, S1 form) were also administered. The computerized battery was self-administered at the hospital under the supervision of a trained neuropsychologist. Further information about neuropsychological assessments is included in Supplementary Table S1.

2.4. Statistical and Machine Learning Analysis

Raw scores for each test were converted to age-, education-, and sex-adjusted scaled scores, according to local norms. These scaled test results were the main focus of the analysis as they are comparable across all the patients in the study, independently of their demographic characteristics. However, raw and computerized test scores were also assessed in some parts of the analysis.
All neuropsychological test results were preprocessed following the same procedure: outlier removal based on the interquartile range (IQR) of scores on each test, imputation of missing values through a K-nearest neighbors (KNN) algorithm with k = 5, and normalization to the range [0, 1]. Patients were divided into 2 subsets, with 80% of the sample used to train the machine learning models and 20% to test the results.
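This preprocessing pipeline can be sketched with scikit-learn. Two details are our assumptions, as the text does not specify them: outliers are masked as missing (and then imputed) rather than dropping patients, and the conventional 1.5 × IQR fence is used.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def preprocess(X, y, test_size=0.2, seed=0):
    """X: (n_patients, n_tests) float array, possibly with NaNs."""
    X = X.astype(float).copy()
    # 1. Mask IQR outliers per test as missing (1.5x fence assumed).
    q1, q3 = np.nanpercentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    X[(X < q1 - 1.5 * iqr) | (X > q3 + 1.5 * iqr)] = np.nan
    # 2. Impute missing values with the 5 nearest neighbors.
    X = KNNImputer(n_neighbors=5).fit_transform(X)
    # 3. Normalize each test score into [0, 1].
    X = MinMaxScaler().fit_transform(X)
    # 4. 80/20 train/test split.
    return train_test_split(X, y, test_size=test_size, random_state=seed)
```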
We first performed a univariate analysis of the correlation between scaled scores and the MFIS score. Pearson’s coefficients and their p-values were calculated for every neuropsychological test independently. Correlation coefficients were characterized as low (<0.30), moderate (0.30–0.49), or high (>0.50). Only correlations with |r| > 0.30 are reported.
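A minimal sketch of this univariate step, assuming SciPy; the use of |r| for the bands and the handling of the 0.49/0.50 boundary are our reading of the text.

```python
from scipy.stats import pearsonr

def correlation_report(scores, mfis, names):
    """Pearson r between each test's scaled scores and the MFIS score,
    banded as low (<0.30), moderate (0.30-0.49), or high (>=0.50)."""
    report = []
    for name, col in zip(names, scores.T):
        r, p = pearsonr(col, mfis)
        band = ("high" if abs(r) >= 0.50 else
                "moderate" if abs(r) >= 0.30 else "low")
        report.append((name, round(r, 3), round(p, 3), band))
    # Per the text, only moderate-or-stronger correlations are reported.
    return [row for row in report if row[3] != "low"]
```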
Classification tasks were performed using the adjusted scaled scores for neuropsychological tests. We trained multiple machine learning algorithms, including (a) random forest, (b) K-nearest neighbors, (c) support vector machine, (d) Gaussian naive Bayes, (e) complement naive Bayes, and (f) logistic regression. In all cases, parameters were optimized with a grid search trained in a 5-fold cross-validation scoring the weighted F1, from which we extracted the best estimator. These algorithms were selected both for their performance among a broader set of classifiers and because they exploit complementary characteristics of machine-learning classification: support vector machines provide non-linear classification and work well with unstructured and semi-structured data; naive Bayes calculates conditional probabilities under an assumed statistical distribution, whereas K-nearest neighbors requires no statistical assumption; random forest combines an ensemble of decision trees and can handle categorical and numerical features simultaneously, although it is prone to overfitting; finally, logistic regression is a statistical approach that works with already identified independent variables and can provide near-optimal decision boundaries under different weights. As classification targets, MFIS scores were categorized into different classes. For binary classification, the threshold used was a score of 38, above which a patient was labeled as fatigued. For 3-classes models, the fatigued patients were divided into low and high fatigue with a heuristic threshold of 61. For 4-classes models, the fatigued patients were divided into low, medium, and high fatigue with thresholds of 53 and 68. We compared the models with each other and with zero-rule classifiers.
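The MFIS categorization and the grid-search procedure might look as follows with scikit-learn, shown here for the random forest only. The hyperparameter grid is illustrative (the searched values are not listed in the text), and the assignment of boundary scores follows the ">38" convention.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def categorize_mfis(total):
    """Bin MFIS-total scores using the thresholds described above.

    A score <= 38 is non-fatigued (the '>38' rule); right-inclusive
    handling of the 53/61/68 boundaries is our assumption.
    """
    total = np.asarray(total)
    binary = (total > 38).astype(int)
    three = np.digitize(total, [38, 61], right=True)   # 0 none, 1 low, 2 high
    four = np.digitize(total, [38, 53, 68], right=True)
    return binary, three, four

def fit_classifier(X, y):
    """Grid-search one classifier with 5-fold CV scoring weighted F1,
    mirroring the optimization procedure described in the text."""
    grid = {"n_estimators": [50, 100], "max_depth": [3, None]}
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          grid, cv=5, scoring="f1_weighted")
    return search.fit(X, y).best_estimator_
```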
Regression models were evaluated for the scores of all neuropsychological tests: raw, scaled, and computerized. The algorithms used were (i) linear, (ii) ridge, (iii) lasso, and (iv) elastic net regression. The best parameters for each algorithm were found with a grid search, also trained in a 5-fold cross-validation scoring the R2 metric, from which we extracted the best estimator. The study was complemented with deep learning techniques by applying two different architectures of artificial neural networks (ANN) to the data, as detailed in Supplementary Table S2. ANN 1 was trained with a batch size of 10 samples and 30 epochs, while ANN 2 used a batch size of 64 samples and 100 epochs. After the first batch of results, a principal component analysis (PCA) was performed on the features with a view to improving the metrics achieved. For this purpose, the normalization step in the preprocessing phase was replaced by a standardization of values to mean = 0 and standard deviation = 1. Soft and hard feature reductions were conducted on each of the available datasets. The soft reduction consisted of selecting the number of principal components that accounted for 90% of the variance, which resulted in keeping a high number of components. The hard reduction involved selecting only the principal components that captured the majority of the variance (around 40–50%), with few principal components selected for each dataset.
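The two PCA-based feature reductions could be sketched as follows with scikit-learn. The exact "hard" variance target (45%, the midpoint of the 40–50% range above) is our assumption; standardization to mean 0 and standard deviation 1 follows the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_reduce(X, mode="soft"):
    """'soft': keep components explaining 90% of the variance;
    'hard': keep components explaining ~45% (assumed target within
    the 40-50% range mentioned in the text)."""
    Xs = StandardScaler().fit_transform(X)  # mean 0, sd 1
    target = 0.90 if mode == "soft" else 0.45
    # A float n_components selects the smallest number of components
    # whose cumulative explained variance reaches the target.
    pca = PCA(n_components=target, svd_solver="full")
    return pca.fit_transform(Xs), pca.explained_variance_ratio_
```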
All models (either classification or regression) were also tested specifically on the MFIS cognitive subscale score. The cut-off scores for the classification models were 18, above which scores were considered to indicate cognitive fatigue; 29 for the low- and high-cognitive fatigue split in the 3-classes models; and 26 and 33 for the low-, medium-, and high-cognitive fatigue split in the 4-classes models.
Classification models were evaluated with the F1-score, a metric commonly used in machine learning analysis, based on precision (the fraction of correctly classified positive subjects among those classified as the positive class) and recall (the fraction of correctly classified positive subjects among the actual positive subjects). To evaluate the regression models, we used the R2 statistic, which measures the proportion of variance in the outcome that is explained by the predictions and takes a maximum value of 1 (optimal prediction). Low or negative values indicate worse models.
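A worked example of these metrics, using toy labels that mimic the class imbalance reported in the Results: a predictor that labels every patient as fatigued attains a deceptively reasonable weighted F1 while never identifying a non-fatigued patient.

```python
from sklearn.metrics import f1_score, precision_score, recall_score, r2_score

# Toy labels with the majority class fatigued (1 = fatigued).
y_true = [1, 1, 1, 0, 1, 0, 1, 1]
# Majority-class predictions: every patient labeled fatigued.
y_pred = [1, 1, 1, 1, 1, 1, 1, 1]

prec = precision_score(y_true, y_pred, zero_division=0)  # 6/8 = 0.75
rec = recall_score(y_true, y_pred, zero_division=0)      # 6/6 = 1.0
# Weighted F1 averages per-class F1, weighted by class support.
f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)

# R2 compares predicted and observed continuous scores; 1 is optimal.
r2 = r2_score([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```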

3. Results

3.1. Sample Description

Ninety-two patients (81.41%) were regarded as having clinically significant fatigue according to the prespecified cut-off point. The mean MFIS-total score was 52.73 ± 16.02. By fatigue domain, mean MFIS-physical was 23.28 ± 8.35, MFIS-cognitive was 25.05 ± 7.15, and MFIS-psychosocial was 4.86 ± 3.50.
MFIS-total presented a correlation of r = 0.903 with MFIS-physical, r = 0.862 with MFIS-cognitive, and r = 0.495 with MFIS-psychosocial. MFIS-physical was correlated with MFIS-cognitive (r = 0.670) and MFIS-psychosocial (r = 0.434). The correlation between MFIS-cognitive and MFIS-psychosocial was r = 0.322. All these correlations were statistically significant at p < 0.001.
The correlation with the Beck Depression Inventory was r = 0.234 (p = 0.013) for MFIS-total and r = 0.250 (p = 0.009) for MFIS-cognitive. The correlation with the Pittsburgh Sleep Quality Index was r = 0.250 (p = 0.009) for MFIS-total and r = 0.214 (p = 0.026) for MFIS-cognitive.

3.2. Correlations between MFIS and Neuropsychological Tests

MFIS-total showed moderate, statistically significant correlations (p < 0.05) with Stroop trial 1 (r = −0.32) and Stroop trial 2 (r = −0.38). Correlations with the MFIS-cognitive score were similar, and only Stroop trial 1 (r = −0.33), Stroop trial 2 (r = −0.37), and Stroop trial 3 (r = −0.35) reached moderate correlations. The other neuropsychological tests showed non-significant or low correlations with MFIS-total and MFIS-cognitive.

3.3. Classification Models

None of the models evaluated for the classification of MFIS scores, except for complement naive Bayes, was able to classify more than 25% of non-fatigued instances as such. The results of the models were compared on the weighted average F1-score (Figure 1).
All binary classification models presented an F1-score of 0.75, although this was due to a high instance class imbalance, with all patients of the test subset classified as fatigued. This means that the metrics were similar to those obtained with a zero-rule classifier, in which all the instances are assigned to the most frequent class with no need for patient information. Only the complement naive Bayes correctly classified 75% of non-fatigued patients, reaching an F1-score of 0.88. However, this algorithm failed to classify more than 25% of non-fatigued instances in the 3- and 4-classes models, making the results of the binary class model less solid. The highest F1-score was achieved by the random forest algorithm for the 3-classes model (F1 = 0.53) and by the complement naive Bayes algorithm (F1 = 0.34) for the 4-classes model. These results were considered too low to establish a quality classification of fatigue levels in patients. However, they improved the F1-scores achieved by the zero-rule algorithms (F1 = 0.36 for the three-classes models and F1 = 0.14 for the four-classes models). Detailed F1-scores are gathered in Table 2. Precision and recall are shown in Supplementary Tables S3 and S4.
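The zero-rule baseline described above corresponds to scikit-learn's DummyClassifier with the most_frequent strategy; a minimal sketch of the comparison on imbalanced labels:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

X = np.zeros((10, 3))              # features are ignored by this baseline
y = np.array([1] * 8 + [0] * 2)    # imbalanced labels, 1 = fatigued

# Zero-rule: always predict the most frequent class, no patient data needed.
zero_rule = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = zero_rule.predict(X)
baseline_f1 = f1_score(y, pred, average="weighted", zero_division=0)
```

Under heavy imbalance this baseline already scores a non-trivial weighted F1, which is why a trained model must beat it, not merely match it, to demonstrate real predictive value.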
Results were similar for the classification of MFIS-cognitive (Figure 2). None of the models was able to classify a single instance as non-fatigued, with the high F1-scores achieved in the binary classification once more explained by the severe class imbalance. The highest F1-score was achieved by both the support vector machine and logistic regression algorithms (F1 = 0.81 for both) in the binary classification, by the K-nearest neighbors algorithm (F1 = 0.58) in the 3-classes models, and by the logistic regression (F1 = 0.34) in the 4-classes models. In this case, these models were similar to or outperformed those obtained by the zero-rule classifiers (F1 = 0.81 for the binary classification, F1 = 0.36 for the three-classes, and F1 = 0.22 for the four-classes models). Detailed F1-scores are summarized in Table 2.

3.4. Regression Models

No regression model achieved acceptable values, whether the MFIS-total score or MFIS-cognitive score was evaluated.
The highest score in the MFIS regression task was achieved by a ridge regression model for scaled test results, with R2 = 0.16, which was considered insufficient. However, metrics for scaled scores were significantly higher than those obtained with raw and computerized scores, as can be seen in Table 3; for this reason, the ANNs were only evaluated for this set of features. After applying the two PCA-based feature reductions, we compared the R2 scores of the models (Figure 3), and the ridge regression trained on the full dataset remained the highest-scoring model. However, PCA reductions improved the results of some of the machine learning and ANN models, especially with the soft reduction. The R2 scores achieved for each feature reduction can be found in Table 4.
The highest-scoring algorithm in the MFIS-cognitive regression task was again the ridge regression for scaled test results (R2 = 0.12). In this MFIS subscale, scaled scores outperformed or matched the raw and computerized scores in all models (Table 3) and were used again for assessment of the ANNs. When comparing the metrics of the PCA reductions against previous results (Figure 4), the best-scoring model was lasso regression with the soft reduction (R2 = 0.19). In general, PCA reductions also helped the performance of the regression task for MFIS-cognitive. Detailed R2 scores for each reduction can be consulted in Table 4.

4. Discussion

In this study, we evaluated a group of patients with PCS with a comprehensive neuropsychological assessment and a standardized scale for fatigue. The MFIS is one of the most commonly used tools for the assessment of fatigue in several conditions and has also been widely applied for the assessment of post-COVID-19 fatigue [36]. Interestingly, we only found moderate correlations between MFIS and Stroop test scores. The correlation was negative, meaning that higher fatigue severity is associated with poorer performance in the Stroop test. The correlation was slightly higher with MFIS-cognitive than MFIS-total score. The Stroop test is a measure of cognitive flexibility, selective attention, inhibition, and information processing speed [37], several of the processes linked to cognitive fatigue [38]. The other tests showed non-significant or low correlations, which suggests that these correlations are not clinically relevant.
We developed several machine learning algorithms in order to predict the presence of fatigue, several levels of fatigue severity, or the fatigue score, based on neuropsychological test scores. Despite using several algorithms with different approaches, the classification metrics obtained were considered low according to the F1-score and R2. In addition, three- and four-classes models (which reflect different degrees of severity of fatigue) performed worse than binary classification (i.e., the presence or absence of clinically significant fatigue). This suggests that there are no substantial cognitive modifications over the different degrees of fatigue. To our knowledge, the relationship between cognitive performance and fatigue in PCS has only been explored in one other study [39]. In this case, the authors conducted a linear regression analysis and, even after including several scales of depression, anxiety, or apathy, the best model obtained an R2 of 0.418 for MFIS-cognitive, and the cognitive test included in the model (digit span backwards) explained a very low percentage of variance (partial correlation; r < −0.2). Overall, these results suggest that fatigue and cognitive dysfunction in PCS may present different pathophysiological mechanisms. Central fatigue has previously been associated with the activity of several brain regions and networks. Specifically, a recent study suggested the involvement of the striatum of the basal ganglia, the dorsolateral prefrontal cortex, the dorsal anterior cingulate, the ventromedial prefrontal cortex, and the anterior insula [40]. Of these regions, the anterior cingulate, ventromedial prefrontal cortex, and anterior insula could have a more prominent role [38,41]. Most attentional/executive tests are more closely linked to the dorsolateral prefrontal cortex than to other regions, which may explain the low correlations with MFIS.
One of the most noteworthy exceptions is the Stroop test, which has also been associated with the anterior cingulate and ventromedial prefrontal cortex in some studies of other disorders [42]. In addition, fatigue (and especially physical fatigue) in PCS may also have other mechanisms, such as immunological dysfunction [43], which could also explain the apparent discordance between subjective fatigue assessment and cognitive performance.
Although further research is probably needed to design and validate novel neuropsychological tasks that fully capture cognitive fatigue more ecologically, the extensive neuropsychological battery and wide variety of tests used in this study raise the fundamental debate about the capability of the cognitive subdomain of MFIS to detect actual cognitive fatigue. In this regard, other questionnaires or electrophysiological biomarkers have been suggested [44]. Reliable tools for the assessment of cognitive fatigue are needed for accurate diagnosis and follow-up and for evaluating the effect of new therapies in clinical trials. Previous studies have also used machine learning to analyze alternative fatigue detection methods based on new technologies; these include biological features extracted from EEG, electro-oculogram, or heart rate, and physical features such as yawning, drowsiness, or slow eye movements [45]. These approaches may be especially useful in the driving and occupational fields to reduce risks and improve workers’ health and well-being [46].
Our study has some limitations. Although our protocol included a wide range of cognitive tests, we cannot exclude the possibility that other cognitive tasks may improve prediction. For instance, some authors have used the Paced Auditory Serial Addition Test as a measure of cognitive fatigue, especially in the field of multiple sclerosis [47,48], although they also observed no correlation with subjective fatigue [9,49,50]. In addition, we used the MFIS as a reference for the assessment of fatigue and cognitive fatigue. Other studies replicating these findings with other fatigue scales may be of interest [4].
In conclusion, our study did not identify reliable neuropsychological predictors of cognitive fatigue as determined by a subjective questionnaire. This may suggest that different pathophysiological mechanisms are associated with each disorder in PCS. Future studies using advanced neuroimaging protocols could be of interest to further disentangle the relationships between fatigue and cognitive function in the context of PCS.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm11133886/s1, Supplementary Table S1: Description of neuropsychological tests and scores; Supplementary Table S2: Architecture of the two ANNs used in the regression task, detailing their layer types (type), number of neurons/fraction of the input units to drop for dense and dropout layers respectively (size) and activation functions, if used (activation); Supplementary Table S3. Weighted average precision of the classification models on predicting MFIS (total score) and MFIS (cognitive score) categorizations. The algorithms evaluated were random forest (RF), K-nearest neighbors (KNN), support vector machine (SVM), Gaussian naive Bayes (GNB), complement naive Bayes (CNB), and logistic regression (LR); Supplementary Table S4. Weighted average recall of the classification models on predicting MFIS (total score) and MFIS (cognitive score) categorizations. The algorithms evaluated were random forest (RF), K-nearest neighbors (KNN), support vector machine (SVM), Gaussian naive Bayes (GNB), complement naive Bayes (CNB), and logistic regression (LR).

Author Contributions

Conceptualization, J.L.A. and J.A.M.-G.; methodology, J.L.A., M.D.-C. and J.A.M.-G.; software, Á.M.-P., J.P. and J.L.A.; formal analysis, C.D.-A., Á.M.-P., J.P., J.L.A. and J.A.M.-G.; investigation, all; resources, J.M.-G., J.L.A. and J.A.M.-G.; data curation, C.D.-A., S.O.-M., C.C., A.D.-Á., M.V.-S., M.J.G., M.D.-C., M.Y., C.P., N.G.-R. and J.A.M.-G.; writing—original draft preparation, Á.M.-P., C.D.-A., J.A.M.-G. and J.L.A.; writing—review and editing, all; supervision, J.L.A. and J.A.M.-G.; project administration, J.A.M.-G.; funding acquisition, J.A.M.-G., J.L.A., J.M.-G. and M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Department of Health of the Community of Madrid, grant number FIBHCSC 2020 COVID-. Jordi A. Matias-Guiu is supported by Instituto de Salud Carlos III, grant number INT20/00079 (co-funded by European Regional Development Fund “A way to make Europe”). Silvia Oliver-Mas is supported by Fundación Para el Conocimiento Madrid+D, grant number G63-Healthstart Plus HSP3.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Hospital Clinico San Carlos (protocol code 20/633-E, date of approval: 19 October 2020) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge all the participants in this study, and specifically the association of patients with long-COVID “Colectivo COVID-19 Persistente Madrid” for their support and participation in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chaudhuri, A.; Behan, P.O. Fatigue in neurological disorders. Lancet 2004, 363, 978–988.
  2. Palotai, M.; Guttmann, C.R. Brain anatomical correlates of fatigue in multiple sclerosis. Mult. Scler. J. 2019, 26, 751–764.
  3. Ren, P.; Anderson, A.J.; McDermott, K.; Baran, T.M.; Lin, F. Cognitive fatigue and cortico-striatal network in old age. Aging 2019, 11, 2312–2326.
  4. Delgado-Álvarez, A.; Matías-Guiu, J.A.; Delgado-Alonso, C.; Cuevas, C.; Palacios-Sarmiento, M.; Vidorreta-Ballesteros, L.; Montero-Escribano, P.; Matías-Guiu, J. Validation of two new scales for the assessment of fatigue in Multiple Sclerosis: F-2-MS and FACIT-F. Mult. Scler. Relat. Disord. 2022, 63, 103826.
  5. Kos, D.; Kerckhofs, E.; Carrea, I.; Verza, R.; Ramos, M.; Jansa, J. Evaluation of the Modified Fatigue Impact Scale in four different European countries. Mult. Scler. J. 2005, 11, 76–80.
  6. Hanken, K.; Eling, P.; Hildebrandt, H. Is there a cognitive signature for MS-related fatigue? Mult. Scler. 2015, 21, 376–381.
  7. Linnhoff, S.; Fiene, M.; Heinze, H.-J.; Zaehle, T. Cognitive Fatigue in Multiple Sclerosis: An Objective Approach to Diagnosis and Treatment by Transcranial Electrical Stimulation. Brain Sci. 2019, 9, 100.
  8. Berard, J.A.; Smith, A.M.; Walker, L. Predictive Models of Cognitive Fatigue in Multiple Sclerosis. Arch. Clin. Neuropsychol. 2019, 34, 31–38.
  9. Mackay, L.; Johnson, A.M.; Moodie, S.T.; Rosehart, H.; Morrow, S.A. Predictors of cognitive fatigue and fatigability in multiple sclerosis. Mult. Scler. Relat. Disord. 2021, 56, 103316.
  10. Mehandru, S.; Merad, M. Pathological sequelae of long-haul COVID. Nat. Immunol. 2022, 23, 194–202.
  11. Vanichkachorn, G.; Newcomb, R.; Cowl, C.T.; Murad, M.H.; Breeher, L.; Miller, S.; Trenary, M.; Neveau, D.; Higgins, S. Post–COVID-19 Syndrome (long haul syndrome): Description of a multidisciplinary clinic at Mayo Clinic and characteristics of the initial patient cohort. Mayo Clin. Proc. 2021, 96, 1782–1791.
  12. Fernández-de-las-Peñas, C.; Martín-Guerrero, J.D.; Cancela-Cilleruelo, I.; Moro-López-Menchero, P.; Pellicer-Valero, O. Exploring the recovery curve for long-term post-COVID dyspnea and fatigue. Eur. J. Intern. Med. 2022, 117, 201–203.
  13. Delgado-Alonso, C.; Valles-Salgado, M.; Delgado-Álvarez, A.; Yus, M.; Gómez-Ruiz, N.; Jorquera, M.; Polidura, C.; Gil, M.J.; Marcos, A.; Matías-Guiu, J.; et al. Cognitive dysfunction associated with COVID-19: A comprehensive neuropsychological study. J. Psychiatr. Res. 2022, 150, 40–46.
  14. García-Sánchez, C.; Calabria, M.; Grunden, N.; Pons, C.; Arroyo, J.A.; Gómez-Anson, B.; Lleó, A.; Alcolea, D.; Belvís, R.; Morollón, N.; et al. Neuropsychological deficits in patients with cognitive complaints after COVID-19. Brain Behav. 2022, 12, e2508.
  15. Douaud, G.; Lee, S.; Alfaro-Almagro, F.; Arthofer, C.; Wang, C.; McCarthy, P.; Lange, F.; Andersson, J.L.R.; Griffanti, L.; Duff, E.; et al. SARS-CoV-2 is associated with changes in brain structure in UK Biobank. Nature 2022, 604, 697–707.
  16. Yus, M.; Matias-Guiu, J.A.; Gil-Martínez, L.; Gómez-Ruiz, N.; Polidura, C.; Jorquera, M.; Delgado-Alonso, C.; Díez-Cirarda, M.; Matías-Guiu, J.; Arrazola, J. Persistent olfactory dysfunction after COVID-19 is associated with reduced perfusion in the frontal lobe. Acta Neurol. Scand. 2022, in press.
  17. Huang, S.; Zhou, Z.; Yang, D.; Zhao, W.; Zeng, M.; Xie, X.; Du, Y.; Jiang, Y.; Zhou, X.; Yang, W.; et al. Persistent white matter changes in recovered COVID-19 patients at the 1-year follow-up. Brain 2022, 145, 1830–1838.
  18. Huang, Y.; Ling, Q.; Manyande, A.; Wu, D.; Xiang, B. Brain imaging changes in patients recovered from COVID-19: A Narrative Review. Front. Neurosci. 2022, 16, 5868.
  19. Ortelli, P.; Ferrazzoli, D.; Sebastianelli, L.; Engl, M.; Romanello, R.; Nardone, R.; Bonini, I.; Koch, G.; Saltuari, L.; Quartarone, A.; et al. Neuropsychological and neurophysiological correlates of fatigue in post-acute patients with neurological manifestations of COVID-19: Insights into a challenging symptom. J. Neurol. Sci. 2021, 420, 117271.
  20. Ortelli, P.; Ferrazzoli, D.; Sebastianelli, L.; Maestri, R.; Dezi, S.; Spampinato, D.; Saltuari, L.; Alibardi, A.; Engl, M.; Kofler, M.; et al. Altered motor cortex physiology and dysexecutive syndrome in patients with fatigue and cognitive difficulties after mild COVID-19. Eur. J. Neurol. 2022, 29, 1652–1662.
  21. Kurki, S.N.; Kantonen, J.; Kaivola, K.; Hokkanen, L.; Mäyränpää, M.I.; Puttonen, H.; Martola, J.; Pöyhönen, M.; Kero, M.; Tuimala, J.; et al. APOE ε4 associates with increased risk of severe COVID-19, cerebral microhaemorrhages and post-COVID mental fatigue: A Finnish biobank, autopsy and clinical study. Acta Neuropathol. Commun. 2021, 9, 199.
  22. Ruz-Caracuel, I.; Pian-Arias, H.; Corral, I.; Carretero-Barrio, I.; Bueno-Sacristán, D.; Pérez-Mies, B.; García-Cosío, M.; Caniego-Casas, T.; Pizarro, D.; García-Narros, M.I.; et al. Neuropathological findings in fatal COVID-19 and their associated neurological clinical manifestations. Pathology 2022.
  23. Maiese, A.; Manetti, A.C.; Bosetti, C.; Del Duca, F.; La Russa, R.; Frati, P.; Di Paolo, M.; Turillazzi, E.; Fineschi, V. SARS-CoV-2 and the brain: A review of the current knowledge on neuropathology in COVID-19. Brain Pathol. 2021, 31, e13013.
  24. Xia, X.; Wang, Y.; Zheng, J. COVID-19 and Alzheimer’s disease: How one crisis worsens the other. Transl. Neurodegener. 2021, 10, 15.
  25. Frontera, J.A.; Simon, N.M. Bridging knowledge gaps in the diagnosis and management of neuropsychiatric sequelae of COVID-19. JAMA Psychiatry 2022, in press.
  26. Becker, R.C. Autonomic dysfunction in SARS-COV-2 infection acute and long-term implications COVID-19 editor’s page series. J. Thromb. Thrombolysis 2021, 52, 692–707.
  27. Paul, B.D.; Lemle, M.D.; Komaroff, A.L.; Snyder, S.H. Redox imbalance links COVID-19 and myalgic encephalomyelitis/chronic fatigue syndrome. Proc. Natl. Acad. Sci. USA 2021, 118, e2024358118.
  28. Gómez-Pinedo, U.; Matias-Guiu, J.; Sanclemente-Alaman, I.; Moreno-Jimenez, L.; Montero-Escribano, P.; Matias-Guiu, J.A. Is the brain a reservoir organ for SARS-CoV2? J. Med. Virol. 2020, 92, 2354–2355.
  29. Zollner, A.; Koch, R.; Jukic, A.; Pfister, A.; Meyer, M.; Rössler, A.; Kimpel, J.; Adolph, T.E.; Tilg, H. Postacute COVID-19 is Characterized by Gut Viral Antigen Persistence in Inflammatory Bowel Diseases. Gastroenterology 2022, in press.
  30. Ramakrishnan, R.K.; Kashour, T.; Hamid, Q.; Halwani, R.; Tleyjeh, I.M. Unraveling the Mystery Surrounding Post-Acute Sequelae of COVID-19. Front. Immunol. 2021, 12, 686029.
  31. World Health Organization. A Clinical Case Definition of Post COVID-19 Condition by a Delphi Consensus; World Health Organization: Geneva, Switzerland, 2021.
  32. Fisk, J.D.; Ritvo, P.G.; Ross, L.; Haase, D.A.; Marrie, T.J.; Schlech, W.F. Measuring the functional impact of fatigue: Initial validation of the Fatigue Impact Scale. Clin. Infect. Dis. 1994, 18 (Suppl. S1), S79–S83.
  33. Larson, R.D. Psychometric Properties of the Modified Fatigue Impact Scale. Int. J. MS Care 2013, 15, 15–20.
  34. Peña-Casanova, J.; Blesa, R.; Aguilar, M.; Gramunt, N.; Gómez-Ansón, B.; Oliva, R.; Molinuevo, J.L.; Robles, A.; Barquero, M.S.; Antúnez, C.; et al. Spanish Multicenter Normative Studies (NEURONORMA Project): Methods and Sample Characteristics. Arch. Clin. Neuropsychol. 2009, 24, 307–319.
  35. Peña-Casanova, J.; Casals-Coll, M.; Quintana, M.; Sánchez-Benavides, G.; Rognoni, T.; Calvo, L.; Palomo, R.; Aranciva, F.; Tamayo, F.; Manero, R. Spanish normative studies in a young adult population (NEURONORMA young adults project): Methods and characteristics of the sample. Neurologia 2012, 27, 253–260.
  36. Aschenbrenner, S.; Kaiser, S.; Pfüller, U.; Roesch-Ely, D.; Weisbrod, M. Testset COGBAT; Schuhfried GmbH: Mödling, Austria, 2012.
  37. Ceban, F.; Ling, S.; Lui, L.M.; Lee, Y.; Gill, H.; Teopiz, K.M.; Rodrigues, N.B.; Subramaniapillai, M.; Di Vincenzo, J.D.; Cao, B.; et al. Fatigue and cognitive impairment in Post-COVID-19 Syndrome: A systematic review and meta-analysis. Brain Behav. Immun. 2021, 101, 93–135.
  38. Peña-Casanova, J.; Quiñones-Úbeda, S.; Gramunt-Fombuena, N.; Quintana, M.; Aguilar, M.; Molinuevo, J.L.; Serradell, M.; Robles, A.; Barquero, M.S.; Payno, M.; et al. Spanish Multicenter Normative Studies (NEURONORMA Project): Norms for the Stroop Color-Word Interference Test and the Tower of London-Drexel. Arch. Clin. Neuropsychol. 2009, 24, 413–429.
  39. Kok, A. Cognitive control, motivation and fatigue: A cognitive neuroscience perspective. Brain Cogn. 2022, 160, 105880.
  40. Calabria, M.; García-Sánchez, C.; Grunden, N.; Pons, C.; Arroyo, J.A.; Gómez-Anson, B.; García, M.d.C.E.; Belvís, R.; Morollón, N.; Igual, J.V.; et al. Post-COVID-19 fatigue: The contribution of cognitive and neuropsychiatric symptoms. J. Neurol. 2022.
  41. Wylie, G.R.; Yao, B.; Genova, H.M.; Chen, M.H.; DeLuca, J. Using functional connectivity changes associated with cognitive fatigue to delineate a fatigue network. Sci. Rep. 2020, 10, 21927.
  42. Pessiglione, M.; Vinckier, F.; Bouret, S.; Daunizeau, J.; Le Bouc, R. Why not try harder? Computational approach to motivation deficits in neuro-psychiatric diseases. Brain 2017, 141, 629–650.
  43. Matias-Guiu, J.A.; Cabrera-Martin, M.N.; Valles-Salgado, M.; Rognoni, T.; Galán, L.; Moreno-Ramos, T.; Carreras, J.L.; Matías-Guiu, J. Inhibition impairment in frontotemporal dementia, amyotrophic lateral sclerosis, and Alzheimer’s disease: Clinical assessment and metabolic correlates. Brain Imaging Behav. 2019, 13, 651–659.
  44. Ganesh, R.; Grach, S.L.; Ghosh, A.K.; Bierle, D.M.; Salonen, B.R.; Collins, N.M.; Joshi, A.Y.; Boeder, N.D.; Anstine, C.V.; Mueller, M.R.; et al. The Female-Predominant Persistent Immune Dysregulation of the Post-COVID Syndrome. Mayo Clin. Proc. 2022, 97, 454–464.
  45. Bafna, T.; Bækgaard, P.; Hansen, J.P. Mental fatigue prediction during eye-typing. PLoS ONE 2021, 16, e0246739.
  46. Hooda, R.; Joshi, V.; Shah, M. A comprehensive review of approaches to detect fatigue using machine learning techniques. Chronic Dis. Transl. Med. 2021, 8, 26–35.
  47. De la Vega, R.; Anabalón, H.; Jara, C.; Villamil-Cabello, E.; Chervellino, M.; Calvo-Rodríguez, A. Effectiveness of mobile technology in managing fatigue: Balert App. Front. Psychol. 2021, 12, 704955.
  48. Matias-Guiu, J.A.; Cortés-Martínez, A.; Montero, P.; Pytel, V.; Moreno-Ramos, T.; Jorquera, M.; Yus, M.; Arrazola, J.; Matías-Guiu, J. Structural MRI correlates of PASAT performance in multiple sclerosis. BMC Neurol. 2018, 18, 214.
  49. Pitteri, M.; Dapor, C.; DeLuca, J.; Chiaravalloti, N.D.; Marastoni, D.; Calabrese, M. Slowing processing speed is associated with cognitive fatigue in newly diagnosed multiple sclerosis patients. J. Int. Neuropsychol. Soc. 2022, in press.
  50. Cortés-Martínez, A.; Matías-Guiu, J.A.; Pytel, V.; Montero, P.; Moreno-Ramos, T.; Matías-Guiu, J. What is the meaning of PASAT rejection in multiple sclerosis? Acta Neurol. Scand. 2019, 139, 559–562.
Figure 1. F1-scores for each Modified Fatigue Impact Scale classification type (binary, 3-classes, and 4-classes) for each model evaluated: random forest (RF), K-nearest neighbors (KNN), support vector machine (SVM), Gaussian naive Bayes (GNB), complement naive Bayes (CNB), and logistic regression (LR).
Figure 2. F1-scores for each Modified Fatigue Impact Scale-cognitive classification type (binary, 3-classes, and 4-classes) on each model evaluated: random forest (RF), K-nearest neighbors (KNN), support vector machine (SVM), Gaussian naive Bayes (GNB), complement naive Bayes (CNB), and logistic regression (LR).
Figure 3. R² scores for each Modified Fatigue Impact Scale regression model (linear, ridge, lasso, elastic net, ANN 1, and ANN 2) for each feature reduction type (no principal component analysis [PCA], hard PCA, soft PCA). The negative section of the vertical axis is not represented to scale with the positive section to improve the visualization of values.
Figure 4. R² scores for each Modified Fatigue Impact Scale–cognitive regression model (linear, ridge, lasso, elastic net, ANN 1, and ANN 2) for each feature reduction type (no principal component analysis [PCA], hard PCA, soft PCA). The negative section of the vertical axis is not represented to scale with the positive section to improve the visualization of values.
Table 1. Main demographic and clinical characteristics.

Age (years), mean ± SD: 50.94 ± 11.90
Sex (women): 73 (64.60%)
Months from acute onset to assessment, mean ± SD: 11.14 ± 4.67
Years of education, mean ± SD: 14.12 ± 3.84
Hypertension: 32 (28.32%)
Diabetes: 15 (13.27%)
Dyslipidemia: 35 (30.97%)
Smokers: 18 (15.93%)
SARS-CoV-2 reinfection: 10 (8.8%)
Hospital admission: 33 (29.20%)
Days of hospitalization, mean ± SD: 19.25 ± 14.12
ICU admission: 10 (8.85%)
Ventilatory support: 11 (9.73%)
Table 2. Weighted average F1-scores of the classification models for predicting Modified Fatigue Impact Scale (MFIS)-total score and MFIS-cognitive score categorizations. The algorithms evaluated were random forest (RF), K-nearest neighbors (KNN), support vector machine (SVM), Gaussian naive Bayes (GNB), complement naive Bayes (CNB), and logistic regression (LR).

Classification type      RF     KNN    SVM    GNB    CNB    LR
MFIS-total score
  Binary                 0.75   0.75   0.75   0.75   0.88   0.75
  Three-classes          0.53   0.47   0.37   0.48   0.55   0.51
  Four-classes           0.23   0.20   0.26   0.24   0.34   0.22
MFIS-cognitive score
  Binary                 0.79   0.74   0.81   0.74   0.63   0.81
  Three-classes          0.53   0.58   0.36   0.51   0.38   0.50
  Four-classes           0.18   0.25   0.31   0.25   0.27   0.34
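The classifier comparison above can be sketched with scikit-learn. This is a minimal illustration on synthetic placeholder data, not the authors' pipeline: the feature matrix stands in for the neuropsychological test scores, the labels for binarized MFIS categories, and all hyperparameters are library defaults.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB, ComplementNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score
from sklearn.preprocessing import MinMaxScaler

# Placeholder data: 113 "patients" with 20 "test scores" and a binary
# fatigued/non-fatigued label (stand-ins, not the study dataset).
X, y = make_classification(n_samples=113, n_features=20, random_state=0)
X = MinMaxScaler().fit_transform(X)  # ComplementNB requires non-negative inputs

models = {
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "GNB": GaussianNB(),
    "CNB": ComplementNB(),
    "LR": LogisticRegression(max_iter=1000),
}

scores = {}
for name, model in models.items():
    # Out-of-fold predictions for every sample, then one pooled F1
    pred = cross_val_predict(model, X, y, cv=5)
    scores[name] = f1_score(y, pred, average="weighted")  # weighted-average F1
```

The 3-class and 4-class variants only change how the MFIS score is discretized into `y`; `average="weighted"` then weights each class's F1 by its support, as in Table 2.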
Table 3. R² scores of the regression models for predicting Modified Fatigue Impact Scale (MFIS) score for each subset of neuropsychological test results.

Test scores         Linear    Ridge     Lasso     Elastic Net
MFIS-total score
  Raw               −0.857    0.005     −0.149    −0.052
  Scaled            −0.018    0.161     0.087     0.085
  Computerized      −0.940    −0.490    −0.208    −0.237
MFIS-cognitive score
  Raw               −0.383    0.104     −0.100    −0.020
  Scaled            −0.132    0.121     −0.100    0.073
  Computerized      −0.683    −0.185    −0.062    −0.014
Table 4. R² scores of the regression models for predicting Modified Fatigue Impact Scale (MFIS) score for each feature reduction type.

Feature reduction    Linear    Ridge    Lasso    Elastic Net    ANN 1     ANN 2
MFIS-total score
  None               −0.018    0.161    0.087    0.085          −6.577    −2.345
  Hard PCA           0.038     0.038    0.011    0.013          −4.036    −1.728
  Soft PCA           0.058     0.072    0.124    0.126          −1.120    −0.669
MFIS-cognitive score
  None               −0.132    0.121    −0.100   0.073          −6.716    −6.385
  Hard PCA           0.119     0.119    0.079    0.078          −4.183    −3.481
  Soft PCA           0.075     0.091    0.197    0.173          −1.156    −0.552
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

