4.1 | Adherence Type and Completion Rates
Our study supports the growing literature showing that older adults, including those with CI, are capable of completing high-frequency, mobile-based cognitive assessments. We also observed that the stringency with which adherence is defined may influence completion rates in our sample. This is relevant to the emerging practice of self-administered assessment, in which individuals have the latitude to choose when to complete assessments at home, or in any quiet environment with internet access, rather than at a scheduled time in a clinical setting. In the present study, the progressive discrepancy among the overall rates for subsegment adherence, segment adherence, and cumulative adherence indicates that many individuals completed the total number of prescribed assessments but did not complete them by the specific deadlines. In other words, as the adherence criterion was relaxed from specific subsegment dates to broader segment dates or the cumulative study duration, participants were shown to have completed assessments at a consistent rate over the span of one year. This discrepancy between adherence types provides an initial perspective on how self-administered completion rates may vary with frequency: individuals were less adherent to the specific study schedule as segment durations lengthened and assessment frequency decreased. Moreover, the contrast between the overall subsegment adherence rate and the rates for specific subsegments indicates that different participants were non-adherent across different segments, suggesting the phenomenon was not isolated to select individuals.
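The relationship among the three adherence types can be sketched computationally. The windows, assessment counts, and dates below are hypothetical illustrations, not the study's actual schedule; the sketch only shows how relaxing the criterion from subsegment to segment to cumulative windows raises the computed rate for the same set of completions.

```python
from datetime import date

# Hypothetical one-year study window (illustrative, not the study's actual dates).
STUDY_START, STUDY_END = date(2023, 1, 1), date(2023, 12, 31)

def adherence_rate(completions, windows):
    """Fraction of prescribed assessments completed within their window.

    completions: one entry per prescribed assessment (a date, or None if
                 never completed), aligned with the list of windows.
    windows:     list of (start, end) date pairs defining the deadline window.
    """
    done = sum(
        1 for c, (start, end) in zip(completions, windows)
        if c is not None and start <= c <= end
    )
    return done / len(windows)

# Example: the second assessment is completed late relative to its narrow
# subsegment window, but still inside the broader segment and cumulative windows.
subsegment = [(date(2023, 1, 1), date(2023, 1, 7)),
              (date(2023, 1, 8), date(2023, 1, 14))]
segment    = [(date(2023, 1, 1), date(2023, 1, 31))] * 2
cumulative = [(STUDY_START, STUDY_END)] * 2
completed  = [date(2023, 1, 3), date(2023, 1, 20)]

# Relaxing the criterion raises the computed rate for identical behavior.
rates = [adherence_rate(completed, w) for w in (subsegment, segment, cumulative)]
```

Under this toy schedule the same two completions yield subsegment, segment, and cumulative rates of 0.5, 1.0, and 1.0, mirroring the progressive discrepancy described above.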
This study supports the feasibility of high-frequency assessments among older adults, including those with CI, which was previously reported in studies with shorter assessment periods. Among the emerging literature, Nicosia et al. (2023) asked participants to complete up to four brief mobile cognitive assessments per day across three seven-day periods spaced six months apart, found no difference based on CI status, and observed an overall adherence rate of 80.42%. Cerino et al. (2021) included up to six daily assessments for 16 days and found only a slight difference in adherence based on CI status: the mean completion rate for cognitively unimpaired adults was 85.20%, while the rate for adults with MCI was 78.10%. Ours is the first study to span a complete year of continuous assessments and to compare completion rates across different assessment frequencies and adherence types. Defining specific adherence types not only provides insight into how assessment frequency can influence completion rates, but also offers a framework for considering how completion patterns of self-administered assessments could provide diagnostic utility.
4.2 | Enabling Process-Based Detection of Cognitive Impairment
The traditional approach to detecting CI relies on identifying intra-individual variability or dispersion (IIV-D) across cognitive domains42–45. This approach compresses item-level responses into subtest scores that are then converted into standard scores to determine whether a significant discrepancy (i.e., dispersion) is observed across cognitive domains. No information regarding the strategies or processes used to answer specific questions is accessible with this method, and no patterns in responses can be identified. The Boston Process Approach (BPA) was pioneered to address this shortcoming by emphasizing process-based scoring (e.g., characterizing cognitive error types to differentiate disease pathology that would otherwise be indistinguishable using traditional NP summary scores)46. Modern adaptations of a process-based approach have demonstrated the utility of coupling granular digital data with advanced analytics to uncover novel indices of cognitive functioning24,46–51. For example, learning effects observed during repeated cognitive assessments are typically interpreted as confounds to valid test interpretation. Yet recent process-based evidence suggests that the absence of a learning effect after repeated assessments is associated with amyloid beta positivity in cognitively unimpaired adults at risk for cognitive decline24,51. The granularity afforded by digital cognitive data makes it possible to consider whether fluctuations across or within assessments, or intra-individual variability in inconsistency (IIV-I)52, that would traditionally be considered “noise” might instead be a meaningful signal34,53. Despite decades of research demonstrating the diagnostic association between inconsistency in cognitive performance (IIV-I) and CI in controlled research settings using laboratory computers31,54–60, the transition to unsupervised mobile devices for remote data collection remains in its infancy61.
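As a concrete illustration of the traditional IIV-D approach described above, the following sketch computes dispersion as the intra-individual standard deviation across domain standard scores. The domain names, raw scores, and normative values are entirely hypothetical; the point is that item-level information has already been discarded before dispersion is computed.

```python
import statistics

# Hypothetical normative means/SDs and one individual's subtest raw scores.
# By this stage, item-level responses have been compressed into subtest
# scores: no response strategies or error types survive.
norms = {"memory": (50.0, 10.0), "attention": (50.0, 10.0),
         "language": (50.0, 10.0), "visuospatial": (50.0, 10.0)}
raw   = {"memory": 32.0, "attention": 55.0,
         "language": 48.0, "visuospatial": 60.0}

# Convert each subtest score to a standard (z) score.
z = {domain: (raw[domain] - m) / sd for domain, (m, sd) in norms.items()}

# Dispersion (IIV-D): intra-individual SD across domain standard scores.
# A large value flags a significant discrepancy across cognitive domains.
dispersion = statistics.stdev(z.values())
```

A marked relative weakness (here, the hypothetical memory score nearly two SDs below the norm) inflates dispersion, which is the discrepancy signal this approach relies on.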
The DANA Brain Vital application utilized in this study exemplifies the possibilities afforded by digital data collection. For example, whereas a traditional reaction time test (and many computerized reaction time tests today) provides only a single mean reaction time value, the DANA SRT subtest provides 30 intra-trial values from which a single mean value is derived. This granularity not only enables the total amount of IIV-I to be measured, but also allows the patterning of that variability to be accounted for62.
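A minimal sketch of why trial-level data matters for IIV-I (this is illustrative only, not DANA's actual scoring): two hypothetical 30-trial reaction time series share an identical mean and are therefore indistinguishable under a mean-only summary, yet diverge sharply once trial-to-trial inconsistency is computed.

```python
import statistics

# Two hypothetical 30-trial reaction time series (ms) with the same mean:
# one tightly clustered around 300 ms, one with wide trial-to-trial swings.
steady   = [300 + (5 if i % 2 else -5) for i in range(30)]
variable = [300 + (90 if i % 2 else -90) for i in range(30)]

def iiv_summary(rts):
    """Mean plus two common inconsistency (IIV-I) indices: intra-individual
    SD and coefficient of variation (SD scaled by the mean)."""
    mean = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return {"mean_ms": mean, "sd_ms": sd, "cv": sd / mean}

# Identical means, very different inconsistency: only the trial-level
# values reveal the difference a single mean score would hide.
a, b = iiv_summary(steady), iiv_summary(variable)
```

Richer patterning metrics (e.g., trends or autocorrelation across the 30 trials) build on the same trial-level data, which is exactly what a single summary score forecloses.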
Even among the limited process-based approaches currently utilized in the research literature, all associate some metric of cognitive performance with meaningful clinical or neurological outcomes. Within the context of high-frequency cognitive assessments to detect cognitive decline, it may be assumed that the only utility in monitoring adherence is to ensure that enough assessments are completed to detect meaningful changes in objective cognitive performance. However, considering adherence patterns as a separate variable relevant to the onset of CI could represent an expansion of the BPA to aspects of behavior beyond traditional cognitive performance.
4.3 | Accelerated Progress Forward
Prior to COVID-19 necessitating the transition to remote cognitive assessment, some reluctance to embrace technological assessment approaches persisted among neuropsychologists49,63. The term Hybrid Neuropsychology has since been introduced as a model for modernizing the field by integrating technology and data science and by engaging with innovators in other fields64. This transition is largely contingent on the ability to collect both cognitive and lifestyle data remotely and repeatedly, which may enable populations with limited access to resources due to sociodemographic and/or geographic factors to participate in research and may reduce reliance on population-based norms that can be biased by educational and cultural context65,66. In doing so, the traditional approach of considering how between-person variables (i.e., age, disease status) impact within-person processes (i.e., cognitive functioning)67 can evolve into acknowledging the myriad within-person factors (i.e., daily activities, stress, sleep, etc.) that are increasingly understood to influence cognitive functioning20,68,69. Collectively, digital data enables both the redefinition of existing constructs and the identification of new measures. Patterns of performance, adherence, and incidental data (i.e., misunderstanding instructions, completing assessments on the last day of a schedule) may all provide insights into cognitive functioning, but large samples and advanced analytics are required to do so.
4.4 | Limitations
This study provides insight into future opportunities and challenges associated with remote data collection to assess cognitive functioning, many of which could not be addressed in the present study. This study was limited by its small sample size, which constrained the interpretation of any statistical comparisons related to feasibility and precluded comparisons of cognitive performance based on impairment status or other established clinical markers. Furthermore, the generalizability of the results to more diverse populations is restricted because the sample consisted of participants from a clinical research center who were mostly White and well-educated. However, the relatively homogeneous composition of the study sample is likely a reflection of the parent study rather than a function of the study design per se. Conceptually, the remote study design utilized here should enable recruitment of a more representative sample, as has been demonstrated in other literature comparing demographic diversity between in-person and remote research studies70. Future efforts should target a larger and more diverse sample, assess both mean scores and variability in cognitive performance relative to assessment frequency, and collect data on potential within-person influences on cognitive function, such as sleep, diet, stress, and physical activity.