Ethical issues in the use of computerized assessment

https://doi.org/10.1016/j.chb.2003.10.006

Abstract

Computer applications in psychological test administration have significant ethical implications for clinicians, for clients/responders, and for the construction and administration of computerized tests. A lack of awareness of computer-related issues may undermine clinicians' ability to perform computerized psychological assessments ethically. Graduate training in computerized testing is limited, and clinicians should be exposed to ethical concerns, potential judgment errors, and possible pitfalls in evaluating computer-generated reports. Recommendations for clinical research and practice are offered. Computerization clearly presents psychologists conducting clinical assessments with a series of dilemmas that will continue well into the future. Increased awareness of the relevant issues will improve the chances that these ethical dilemmas are navigated successfully.

Section snippets

Competence

An underlying assumption of professional practice is that practitioners work within the boundaries of their particular areas of competence. As this relates to psychological assessment, clinicians are expected to be knowledgeable about, and skilled in, the administration and interpretation of the assessment techniques, tools, and measures they utilize (APA, 2002). Assessment competence thus includes familiarity with the research support and psychometric strengths and limitations of the

Computer-generated interpretive reports

The potential for clinicians to misuse computer test interpretations has been well documented (e.g., Maddux & Johnson, 1998; Matarazzo, 1986). Maddux and Johnson noted that it is currently inappropriate to use the majority of interpretation software because these programs “encourage the use of assessment instruments by personnel who are not fully competent in their use, and they apply a simplistic paradigm (IF-THEN) to the solution of complex human assessment problems” (p. 99). This potential
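
To make the critique concrete, the following is a minimal sketch of the kind of IF-THEN rule such interpretation programs apply. The scale names, cut-off scores, and interpretive statements are hypothetical, invented for illustration only and not drawn from any actual interpretation package.

    # Hypothetical illustration of the simplistic IF-THEN interpretive paradigm
    # criticized above. All scale names, cut-offs, and statements are invented.

    def interpret(t_scores: dict[str, float]) -> list[str]:
        """Return canned interpretive statements triggered by fixed cut-offs."""
        statements = []
        if t_scores.get("Depression", 0) >= 65:   # IF the scale is elevated ...
            statements.append("Respondent reports significant depressive symptoms.")
        if t_scores.get("Anxiety", 0) >= 65:      # ... THEN print a fixed statement.
            statements.append("Respondent reports significant anxious arousal.")
        # One rule per scale; no integration of context, base rates,
        # response style, or the overall pattern of scores.
        return statements

    print(interpret({"Depression": 72.0, "Anxiety": 58.0}))
    # ['Respondent reports significant depressive symptoms.']

The sketch shows why the paradigm is called simplistic: each statement is triggered by a single score in isolation, which is exactly the kind of output a non-competent user may be tempted to accept at face value.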

Accommodating the client/responder

In addition to being aware of issues relating to computerized test interpretation, clinicians need to be aware of the array of potentially confounding responder variables that may differentially influence the process and outcome of a computerized testing session. Responders bring various predilections into these sessions and may have experiences during testing that negatively influence reliability and validity. Two specific areas that can

Equivalence of scores across modalities

Computerized testing improves the reliability, standardization, and objectivity of test administration by administering items the same way each time (Butcher, 1987; Dignon, 1996; Kobak et al., 1996; Maddux & Johnson, 1998; Sturges, 1998). Computers also decrease scoring errors and increase processing capabilities, such as the ability to transform scores arithmetically or statistically (Butcher et al., 1985; Dignon, 1996; Kobak et al., 1993; Maddux & Johnson, 1998; Sturges, 1998). Computerized
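
As an example of the kind of arithmetic transformation a scoring program can apply automatically and without transcription error, the sketch below converts raw scores to linear T-scores (mean 50, SD 10). The normative mean and standard deviation are hypothetical values, not norms for any real instrument.

    # Linear T-score transformation: T = 50 + 10 * (raw - mean) / SD.
    # The normative values below are hypothetical, for illustration only.

    NORM_MEAN = 12.0   # hypothetical normative raw-score mean
    NORM_SD = 4.0      # hypothetical normative raw-score standard deviation

    def to_t_score(raw: float) -> float:
        """Convert a raw score to a T-score under the hypothetical norms."""
        return 50 + 10 * (raw - NORM_MEAN) / NORM_SD

    print(to_t_score(18))  # 65.0: 1.5 SD above the hypothetical normative mean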

Confidentiality

Finally, with increased use of computer technology in clinical practice, clinicians may unknowingly violate ethical standards relating to confidentiality, as would be the case if access to computer test results were not protected (Maddux & Johnson, 1998; McMinn et al., 1999). Security of computer-generated materials is a responsibility of users of computer-based reports (Butcher, 2003). Storing confidential client information on the hard drive or on a network may compromise confidentiality in
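
One possible safeguard, among many, is to encrypt computer-generated reports at rest so that a copied file or an exposed network share does not reveal client information. The sketch below assumes the third-party Python package "cryptography" is installed; the file names are hypothetical, and key management (storing the key separately from the data) remains the clinician's responsibility.

    # Encrypt a stored report so it is unreadable without the key.
    # Assumes: pip install cryptography. File names are hypothetical.

    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store this key separately from the data
    fernet = Fernet(key)

    report = Path("client_report.txt").read_bytes()
    Path("client_report.enc").write_bytes(fernet.encrypt(report))
    Path("client_report.txt").unlink()   # remove the plaintext copy

    # Later, with access to the key:
    # plaintext = fernet.decrypt(Path("client_report.enc").read_bytes())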

Recommendations for practice

According to AERA Standard 5.5, the clinician needs to ensure that responders are knowledgeable about the computer equipment and the response format, and that they are able to make adequate responses using the necessary equipment. Clinicians may be tempted to leave responders alone with the computer for extended periods of time, which may detract from the reliability and validity of the testing session as well as limit valuable observational data regarding the respondent's approach to test-taking

Recommendations for research

As computerized assessment becomes more prevalent in clinical practice, there is a parallel need for continued research into its limitations and benefits. Examples of possible areas for further research include improving the quality of assessment software, examining the accuracy of mechanical versus clinical models of decision-making, and continuing to examine effects of responder characteristics on computerized assessment outcome.

Concluding remarks

Computerized psychological testing is a promising area for the clinician and the researcher, but it is also one that continues to present a number of ethical challenges. The focus of this paper was to increase clinicians' awareness of the ethical issues they might face as they incorporate technology into their practice. It is hoped that the recommendations outlined above will encourage clinicians to consider seriously the effects of the strengths and limitations of computerized assessment on the quality of

Acknowledgments

The authors thank Jessica Kaster, M.S., for her helpful editorial comments on a previous draft of this manuscript.

References (53)

  • Standards for educational and psychological testing (1999)
  • Ethical principles of psychologists and code of conduct (2002)
  • ASPPB code of conduct (2001)
  • J.N. Butcher. The use of computers in psychological assessment: an overview of practices and issues
  • J.N. Butcher. How to use computer-based reports
  • J.N. Butcher. Computerized psychological assessment
  • J.N. Butcher et al. Current developments and future directions in computerized personality assessment. Journal of Consulting and Clinical Psychology (1985)
  • J.N. Butcher et al. Validity and utility of computer-based test interpretation. Psychological Assessment (2000)
  • H.P. Erdman et al. Computer-assisted assessment with couples and families. Family Therapy (1986)
  • H.P. Erdman et al. Suicide risk prediction by computer interview: a prospective study. Journal of Clinical Psychiatry (1987)
  • H.N. Garb. Studying the clinician: judgment research and psychological assessment (1998)
  • H.N. Garb. Computers will become increasingly important for psychological assessment: not that there's anything wrong with that! Psychological Assessment (2000)
  • H.N. Garb. Introduction to the special section on the use of computers for making judgments and decisions. Psychological Assessment (2000)
  • D.G. Gardner et al. The measurement of computer attitudes: an empirical comparison of available scales. Journal of Educational Computing Research (1993)
  • W.M. Grove et al. Clinical versus mechanical prediction: a meta-analysis. Psychological Assessment (2000)
  • W.C. King et al. A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: a test of measurement equivalence. Journal of Applied Psychology (1995)