
Learning effect of computerized cognitive tests in older adults

Abstract

Objective

To evaluate the learning effect of computerized cognitive tests in community-dwelling elderly individuals.

Methods

Cross-sectional study with 20 older adults (10 women and 10 men), with a mean age of 77.5 (±4.28) years. The volunteers performed two series of computerized cognitive tests in sequence and their results were compared. The tests applied were: Trail Making A and B, Spatial Recognition, Go/No-Go, Pattern Recognition, Memory Span, and Reverse Memory Span.

Results

Based on the comparison of results, a learning effect was observed only in the Trail Making A test (p=0.019). The other tests showed no significant performance improvement. The learning effect did not correlate with age (p=0.337) or education (p=0.362), nor did it differ between genders (p=0.465).

Conclusion

When repeated immediately, the computerized cognitive tests revealed no change in the performance of elderly individuals, with the exception of the Trail Making A test, demonstrating high clinical applicability even at short intervals.

Elderly; Neuropsychological tests; Learning; Diagnosis, computer assisted




INTRODUCTION

With the growth of the elderly population, the incidence of cognitive decline also increases, resulting in diseases with a great impact on public health. The development of therapeutic techniques and the early detection of cognitive decline are very important for maintaining individuals' quality of life for as long as possible.

Technological development has facilitated the adoption of rapid and efficient measures for both diagnosis and treatment, and computing has enabled this technology to be applied to cognitive testing.

Computerized cognitive tests were introduced during the 1970s and gained popularity as the use of computers grew. During the 1980s, several studies were carried out on the advantages and disadvantages of evaluating cognition with these tests. Currently, studies focus on the development of cognitive batteries capable of evaluating cognitive functions and on confirming the efficacy of existing tests.(1)

In order to be used in clinical practice or in research, a cognitive evaluation tool should be validated; that is, it should be capable of assessing the desired qualities and show inter-rater and test-retest reliability, maintaining stability when applied by different interviewers and to the same individual at different time points.(2)

Among the advantages of using computerized tests are the capacity to evaluate multiple cognitive functions, greater global consistency and sensitivity, standardization of evaluations, precise recording of response speed, a more accessible cost, the possibility of issuing an automatic report, and less need for professional training for test application; some test batteries are even self-administered.(3,4) Nevertheless, this form of evaluation does not analyze the behavior of the person assessed, including his/her reactions and verbalizations, which differs from traditional neuropsychological assessment.(5)

Wild et al.(1) point to an inherent limitation of computerized cognitive tests, especially when used with the elderly population: the lack of psychometric data, such as reliability and validity, in comparison with traditional paper-and-pencil measurements. Several researchers have sought to verify the equivalence between traditional and computerized tests. Studies by Collerton et al.(6) and Wagner and Trentini(7) observed equivalence between the two methods, whereas the studies by Feldstein et al.(8) and Steinmetz et al.(9) showed differences in results, suggesting that computer skills might favor better performance of the person assessed. The study by McDonald(10) showed an even greater difference in the elderly population.

The repetition of cognitive tests, which is very common in neuropsychological clinical practice, also shows variation in the results, suggesting a learning effect when the results of serial applications are compared.(11,12)

The learning (or practice) effect is defined as an improvement in the volunteer's test performance without any intervention or condition that could justify it. Various reasons have been discussed to explain the practice-induced gains in scores, such as reduced anxiety, increased familiarity with the testing environment, and procedural learning.(13) Studies that do not consider the learning effect on test repetition may lead to wrong conclusions about the benefits of interventions and may even mask the presence of cognitive decline, primarily in the elderly population.(14)

In a meta-analysis of practice effects in neuropsychological tests, Calamia et al.(15) found few studies in the literature comparing the performance of elderly individuals.

OBJECTIVE

To confirm the applicability of computerized tests in community-dwelling elderly individuals, to verify the possibility of repeating them immediately with no modification in performance, and to evaluate the learning effect of these tests in this population.

METHODS

The sample of 20 aged individuals was selected by convenience, between June and October 2012, from the Outpatient Clinic of the Department of Geriatrics, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (HCFMUSP).

The inclusion criteria were a score on the Mini-Mental State Examination(16) within normality for the schooling level;(17) a score on the 15-item Geriatric Depression Scale ≤5 points;(18) being literate; presenting with sensory functions (visual and auditory) that enable performance of the tests; and agreeing to participate in the study by signing the Informed Consent Form.

The exclusion criteria were the use of five or more medications; habitual ingestion of alcoholic beverage; and presenting with decompensated or symptomatic systemic disease.

The first 20 individuals selected responded to a socioeconomic and clinical evaluation protocol, and then were submitted twice to a battery of computerized cognitive tests.

The first evaluation (T1) had the objective of presenting the tests, which were reapplied (T2) in the same sequence after the end of the first battery.

The time used for each battery was approximately 30 minutes.

This study was approved by the Ethics Committee for Analysis of Research Projects (CAPPesq) of the HCFMUSP, under number 149,925, on November 21st, 2012, and was part of the thematic project Human biometeorology: analysis of the effects of environmental variables (meteorological, thermal comfort, and air pollution) and of climate changes in the geriatric population of the city of São Paulo, financed by the State of São Paulo Research Foundation (FAPESP), process number 2010/10189-5.

Cognitive evaluation

A series of computerized tests developed at the Universities of Stanford, San Francisco, and McGill was applied.(19) The series was composed of simple and quick tests performed on a touch screen.

The tests selected were:

  • Trail Making A (TMA): it evaluates attention based on the time taken to perform the task. It is composed of circles numbered from 1 to 25, randomly distributed, which should be touched in increasing numerical order. When a circle is touched in the correct order, its color changes, indicating that the patient may proceed with the task. If the option touched is wrong, an “x” appears over the number, allowing the choice of another option;

  • Trail Making B (TMB): similar to the previous test. It is composed of circles containing 13 numbers and 12 letters, randomly distributed, which should be touched alternating numbers and letters in increasing order (1, A, 2, B, 3, C, and so on);

  • Spatial Recognition: it evaluates spatial recognition based on the number of correct answers. It consists of visualizing, for a set time, five squares of the same color and size in certain positions on the screen. The volunteer should memorize the position of the squares. Next, two squares appear at the same time, one in a new position and the other in the original position, and the volunteer should touch the one in the original position;

  • Go/No-Go: it evaluates reaction time and consists of the presentation of the figure of a fruit. The volunteer should press the space bar as quickly as possible every time he/she sees the figure of this fruit, while figures of other fruits are randomly presented. Reaction time and number of errors are evaluated;

  • Pattern Recognition: it evaluates memorization of details and orientation of figures by means of the number of correct answers. The test comprises a sequential presentation of 12 images, whose details and positions on the screen should be memorized by the volunteer. Next, in pairs, a new figure and an original figure are presented, and the volunteer should touch the original one;

  • Memory Span: it evaluates spatial memory by means of the presentation of ten cards of the same color on the screen. The cards change color individually, following a determined sequence. The volunteer should then repeat the order of change of the cards, touching them on the screen. If two sequences are reproduced correctly, one unit is added to the number of cards that change color. The test evaluates the number of correct sequences (a sketch of this progression rule is shown after this list);

  • Reverse Memory Span: similar to the previous test, but the cards should be touched in inverse order of their color change.
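The progression rule described for the Memory Span test can be summarized algorithmically. The sketch below is a minimal, hypothetical reconstruction in Python: the function name, trial count, and starting sequence length are assumptions for illustration only, not details of the actual test software.

```python
# Illustrative sketch (not the original test software) of the Memory Span
# progression rule: after two correctly reproduced sequences, one card is
# added to the sequence that changes color.
import random


def run_memory_span(respond, n_cards=10, start_len=2, n_trials=20):
    """`respond` is a callable that receives the target sequence and
    returns the volunteer's reproduction of it."""
    length = start_len
    correct_sequences = 0
    consecutive_correct = 0
    for _ in range(n_trials):
        target = random.sample(range(n_cards), min(length, n_cards))
        answer = respond(target)
        if answer == target:
            correct_sequences += 1
            consecutive_correct += 1
            if consecutive_correct == 2:   # two correct -> longer sequence
                length += 1
                consecutive_correct = 0
        else:
            consecutive_correct = 0
    return correct_sequences               # score reported by the test


if __name__ == "__main__":
    # A "perfect" volunteer who always repeats the sequence correctly.
    print(run_memory_span(lambda seq: list(seq)))
```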

The results were presented as means, standard deviations, and proportions. The Friedman and Wilcoxon tests were used, considering that the data were paired, to compare the two collection times for all variables analyzed. To relate the results to the sociodemographic data, Spearman's correlation and Student's t test were used, adopting a significance level of p<0.05.
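As a purely illustrative sketch of the paired T1 versus T2 comparison described above, the following Python/SciPy code (a toolset assumed here; the article does not state which software was used) shows the corresponding calls with invented placeholder scores rather than the study data.

```python
# Minimal sketch of the paired T1 vs. T2 comparison; all score values
# are invented placeholders, not the study data.
import numpy as np
from scipy import stats

# Hypothetical completion times (seconds) for one test at T1 and T2.
t1 = np.array([48.2, 55.0, 61.3, 44.9, 52.7, 58.4])
t2 = np.array([41.5, 50.1, 60.8, 40.2, 49.9, 57.0])

# Wilcoxon signed-rank test for the paired comparison.
w_stat, w_p = stats.wilcoxon(t1, t2)
print(f"Wilcoxon: statistic={w_stat:.1f}, p={w_p:.3f}")

# The Friedman test requires three or more related samples; a third,
# hypothetical application (T3) is added here only to show the call.
t3 = np.array([40.0, 49.5, 59.9, 39.8, 48.7, 56.5])
f_stat, f_p = stats.friedmanchisquare(t1, t2, t3)
print(f"Friedman: statistic={f_stat:.1f}, p={f_p:.3f}")
```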

RESULTS

The clinical and sociodemographic profile of the 20 elderly individuals (10 women and 10 men) is detailed in table 1. We highlight that the mean age was 69.7 (±4.85) years for women and 74.7 (±3.46) years for men (p=0.172), while the schooling level was 8.5 (±4.27) years for women and 7.0 (±4.37) years for men (p=0.160).

Table 1
Sociodemographic characteristics of the sample studied (n=20)
SD: standard deviation; BMI: body mass index; MMSE: Mini-Mental State Examination; GDS: Geriatric Depression Scale.

In comparing the cognitive function tests performed in sequence, a significant difference was noted only in the TMA test (Table 2). This result observed in the TMA test did not correlate with age (p=0.337) or schooling (p=0.362) according to Spearman's test, even when the learning percentages ((T2-T1)/T1) were compared. Student's t test also showed no significant difference between the learning percentages of men and women (p=0.465).
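For clarity, the learning percentage ((T2-T1)/T1) and its relation to the sociodemographic variables could be computed as in the sketch below; the ages, scores, and group labels are hypothetical placeholders, not the study data.

```python
# Sketch of the learning-percentage metric and the correlation/t-test
# analyses reported above; all values are hypothetical placeholders.
import numpy as np
from scipy import stats

t1 = np.array([48.2, 55.0, 61.3, 44.9, 52.7, 58.4])   # first application
t2 = np.array([41.5, 50.1, 60.8, 40.2, 49.9, 57.0])   # immediate retest
age = np.array([72, 75, 78, 70, 81, 76])
sex = np.array([0, 1, 0, 1, 0, 1])                     # 0 = women, 1 = men

learning_pct = (t2 - t1) / t1 * 100                    # percentage of learning

# Spearman's correlation between learning percentage and age.
rho, p_age = stats.spearmanr(learning_pct, age)

# Student's t test comparing the learning percentage between sexes.
t_stat, p_sex = stats.ttest_ind(learning_pct[sex == 0], learning_pct[sex == 1])

print(f"Spearman rho={rho:.2f} (p={p_age:.3f}); t test p={p_sex:.3f}")
```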

Table 2
Comparison of cognitive function test results at T1 and T2


DISCUSSION

Our data demonstrate a significant improvement in TMA results between the two evaluations. Recent studies have shown that the improvement in performance brought about by the practice effect may remain for one to six weeks(20) and is no longer significant one to seven years after the initial evaluation.(21)

Few studies, however, have evaluated the learning effect on computerized cognitive tests. Raymond et al.(22) applied a battery of computerized tests (MicroCog) twice, with an interval of two weeks, and found significant learning effects. The improvement in performance was attributed to increased confidence in the use of a computer.

A study carried out by Beglinger et al.(23) with healthy adults who used drugs to improve cognitive function showed better computerized test scores over six repetitions, even in individuals who did not undergo drug intervention, with a high level of learning in TMA, especially in the third and fourth repetitions. The same gain in performance was not observed in TMB results, nor in the present study.

It is believed that, in this study, the fact that the tests were computerized might have increased the degree of anxiety and the fear of being unable to conclude the test, regardless of socioeconomic characteristics. The fact that the population evaluated comprised exclusively elderly individuals may also have contributed to increased anxiety, since this age group has historically used computers less frequently.(24) In this study, the use of computers was reported by only a small portion (20%) of the sample. It is important to point out that the TMA was the first test to be performed, a fact that could have compromised the results of the first evaluation, given the significant difference in the comparison between the two performances. Lezak et al.(25) stated that, after the first test, the strategies developed to master the task can facilitate performance in the other evaluations from their first application onward, reducing the percentage of progression when the results of both assessments are compared. This fact contributed to the absence of a significant learning effect in the other tests applied, showing that, once handling of the computer had been introduced, the volunteers felt greater ease in performing the tests.

Some authors have reported that, when the learning effect is compared between young and old individuals, ageing makes performance across repeated evaluations more similar, but the factors that lead the elderly population to show a smaller effect are still not clear.(26) A smaller capacity for memorizing and encoding relevant information in the initial tests may explain this finding.

Irrespective of the time interval between evaluations, and even of the volunteers' age, some individual factors, such as motivation at the initial evaluation, satisfactory clinical condition, a high intelligence quotient, and a high schooling level, may favor the learning effect.(27,28)

A study carried out with TMA and TMB in their traditional versions showed no difference among individuals of different age groups.(29) It is believed that the tests used here, whether due to their low complexity or to the provision of appropriate instructions, made the performance of elderly individuals easier, regardless of factors such as age and schooling level.

Duff et al.(30) found no association between sociodemographic data (gender, schooling, and age) and the intensity of the learning effect on traditional cognitive tests in 268 adults, which is in agreement with the results of this study.

A large part of the studies aiming to verify the learning effect in the elderly was actually carried out in adults with a mean age of approximately 50 years, even when designed to evaluate the impact of age on this phenomenon.

Although most studies are performed with tests in their traditional form (paper and pencil), it is believed that introducing computerized tests in this age group is important, as these individuals will increasingly use this type of equipment.

The immediate learning effect is rarely evaluated; nevertheless, we assume that this evaluation model allowed us to detect the absence of this phenomenon in this case series. We believe that the immediate repetition of most of the tests used was not capable of producing a gain in performance, although this may not be the case at later repetitions, a point that can be verified by new studies or by an increase in the number of cases.

CONCLUSION

Computerized cognitive tests can be performed by elderly individuals regardless of their prior experience with this technology, and they can be repeated immediately with no interference in performance.

The Trail Making A test, the least complex of all tests applied, demonstrated a learning effect. In the sample studied, there was no difference in the other computerized tests applied (Trail Making B, Spatial Recognition, Go/No-Go, Pattern Recognition, Memory Span, and Reverse Memory Span), demonstrating high clinical applicability, even at short intervals.

ACKNOWLEDGEMENTS

To the State of São Paulo Research Foundation (FAPESP), process number 2010/10189-5, for funding this project.

REFERENCES

  • 1
    Wild K, Howieson D, Webbe F, Seelye A, Kaye J. Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement. 2008;4(6):428-37. Review.
  • 2
    Applegate WB, Blass JP, Williams TF. Instruments for the functional assessment of older patients. N Engl J Med. 1990;322(17):1207-14. Review.
  • 3
    Kane RL, Kay GG. Computerized assessment in neuropsychology: a review of tests and test batteries. Neuropsychol Rev. 1992;3(1):1-117. Review.
  • 4
    Schatz P, Browndyke J. Applications of computer-based neuropsychological assessment. J Head Trauma Rehabil. 2002;17(5):395-410. Review.
  • 5
    Cernich AN, Brennan DM, Barker LM, Bleiberg J. Sources of error in computerized neuropsychological assessment. Arch Clin Neuropsychol. 2007;22 Suppl 1:S39-48.
  • 6
    Collerton J, Collerton D, Arai Y, Barrass K, Eccles M, Jagger C, McKeith I, Saxby BK, Kirkwood T; Newcastle 85+ Study Core Team. A comparison of computerized and pencil-and-paper tasks in assessing cognitive function in community-dwelling older people in the Newcastle 85+ Pilot Study. J Am Geriatr Soc. 2007;55(10):1630-5.
  • 7
    Wagner GP, Trentini CM. Assessing executive functions in older adults: a comparison between the manual and the computer-based versions of the Wisconsin Card Sorting Test. Psychology & Neuroscience. 2009;2(2):195-8.
  • 8
    Feldstein SN, Keller FR, Portman RE, Durham RL, Klebe KJ, Davis HP. A comparison of computerized and standard versions of the Wisconsin Card Sorting Test. Clin Neuropsychol. 1999;13(3):303-13.
  • 9
    Steinmetz JP, Brunner M, Loarer E, Houssemand C. Incomplete psychometric equivalence of scores obtained on the manual and the computer version of the Wisconsin Card Sorting Test? Psychol Assess. 2010;22(1):199-202.
  • 10
    McDonald AS. The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Comput Educ. 2002;39(3):299-312.
  • 11
    Duff K, Westervelt HJ, McCaffrey RJ, Haase R. Practice effects, test–retest stability, and dual baseline assessments with the California Verbal Learning Test in an HIV sample. Arch Clin Neuropsychol. 2001;16(5):461-76.
  • 12
    Beglinger LJ, Gaydos B, Tangphao-Daniels O, Duff K, Kareken DA, Crawford J, et al. Practice effects and the use of alternate forms in serial neuropsychological testing. Arch Clin Neuropsychol. 2005;20(4):517-29.
  • 13
    Hausknecht JP, Halpert JA, Di Paolo NT, Moriarty Gerrard MO. Retesting in selection: a meta-analysis of coaching and practice effects for tests of cognitive ability. J Appl Psychol. 2007;92(2):373-85.
  • 14
    Salthouse TA, Toth J, Daniels K, Parks C, Pak R, Wolbrette M, et al. Effects of aging on efficiency of task switching in a variant of the trail making test. Neuropsychology. 2000;14(1):102-11.
  • 15
    Calamia M, Markon K, Tranel D. Scoring higher the second time around: meta-analyses of practice effects in neuropsychological assessment. Clin Neuropsychol. 2012;26(4):543-70.
  • 16
    Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189-98.
  • 17
    Brucki SM, Nitrini R, Caramelli P, Bertolucci PH, Okamoto IH. Sugestões para o uso do mini-exame do estado mental no Brasil. Arq Neuropsiquiatr. 2003;61(3-B):777-81.
  • 18
    Sheikh JI, Yesavage JA. Geriatric Depression Scale (GDS): recent evidence and development of a shorter version. Clin Gerontol. 1986;5(1/2):165-73.
  • 19
    Sternberg DA, Ballard K, Hardy JL, Katz B, Doraiswamy PM, Scanlon M. The largest human cognitive performance dataset reveals insights into the effects of lifestyle factors and aging. Front Human Neurosci. 2013;7:292.
  • 20
    Bates ME, Voelbel GT, Buckman JF, Labouvie EW, Barry D. Short-term neuropsychological recovery in clients with substance use disorders. Alcohol Clin Exp Res. 2005;29(3):367-77.
  • 21
    Salthouse TA, Schroeder DH, Ferrer E. Estimating retest effects in longitudinal assessments of cognitive functioning in adults between 18 and 60 years of age. Dev Psychol. 2004;40(5):813-22.
  • 22
    Raymond PD, Hinton-Bayre AD, Radel M, Ray MJ, Marsh NA. Test-retest norms and reliable change indices for the MicroCog Battery in a healthy community population over 50 years of age. Clin Neuropsychol. 2006;20(2):261-70.
  • 23
    Beglinger LJ, Gaydos BL, Kareken DA, Tangphao-Daniels O, Siemers ER, Mohs R. Neuropsychological test performance in healthy volunteers before and after donepezil administration. J Psychopharmacol. 2004;18(1):102-8.
  • 24
    Cutler SJ, Hendricks J, Guyer A. Age differences in home computer availability and use. J Gerontol B Psychol Sci Soc Sci. 2003;58(5):S271-80.
  • 25
    Lezak MD, Howieson D, Bigler E, Tranel D. Neuropsychological assessment. 5th ed. New York: Oxford University Press; 2012.
  • 26
    Temkin NR, Heaton RK, Grant I, Dikmen SS. Detecting significant change in neuropsychological test performance: a comparison of four models. J Int Neuropsychol Soc. 1999;5(4):357-69.
  • 27
    Rapport LJ, Brines DB, Axelrod BN, Theisen ME. Full scale IQ as mediator of practice effects: the rich get richer. Clin Neuropsychol. 1997;11(4):375-80.
  • 28
    Salthouse TA. Effects of age on time-dependent cognitive change. Psychol Sci. 2011;22(5):682-8.
  • 29
    Salthouse TA, Toth J, Daniels K, Parks C, Pak R, Wolbrette M, et al. Effects of aging on efficiency of task switching in a variant of the trail making test. Neuropsychology. 2000;14(1):102-11.
  • 30
    Duff K, Callister C, Dennett K, Tometich D. Practice effects: a unique cognitive variable. Clin Neuropsychol. 2012;26(7):1117-27.

Publication Dates

  • Publication in this collection
    Apr-Jun 2014

History

  • Received
    21 Aug 2013
  • Accepted
    10 Dec 2013