
Getting the most from paired-user testing

Published: 01 July 1995

References

  1. C. E. O'Malley, S. W. Draper, and M. S. Riley, "Constructive Interaction: A Method for Studying Human-Computer-Human Interaction." Proceedings of IFIP INTERACT '84: Human-Computer Interaction, pp. 269–274.


      Reviews

      Joseph L. Podolsky

      The author correctly says that software usability testing is “part art, part science” because we want “to know what the users are thinking.” Testers often record behavioral data as a surrogate for that thinking, but such quantitative data may be not only insufficient but misleading. Wildman prefers an approach first proposed in 1984: paired testing. In this process, two subjects work together during the usability test and are encouraged to talk to each other. Their conversation is recorded along with quantitative behavioral data. The conversation can be structured by having the subjects answer specific questions, but informal comments are encouraged and are often the most revealing parts of the test.

      To facilitate paired testing of software, the author and his colleagues at Bellcore devised a set of tools: a computer-based scenario script, a menu-mapping exercise, and a coordinated logger. The uses of these tools, especially the scenario script, are described briefly. The tools have been used in four studies, and the testers have been pleased with and often surprised by the results. For example, the paper describes how quantitative data showed that users were not able to match functions to menu selections. Listening to the tone of the conversations, however, helped the designers understand that the “users' reasoning processes rarely reflected [those of] the designers.” Interestingly, the designers recognized that the testing tools themselves consisted of software that needed usability testing. To minimize confusion, the tools were not used on themselves; instead, a less formal paired-testing procedure was used.

      This paper does not contain the details that most specialists in usability testing would demand. But the majority of us treat usability testing as one more step in our already crowded and overdue projects, and this paper gives us some useful hints about better ways of getting good results.
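      The coordinated logger mentioned above is described only at a high level in the review. As a rough illustration of the underlying idea, and not Bellcore's actual tool, the sketch below shows one way to keep quantitative behavioral events and the pair's conversation on a single shared timeline; the class name, method names, and CSV layout are all hypothetical.

```python
import csv
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogEntry:
    elapsed_s: float   # seconds since the session started
    kind: str          # "behavior" (quantitative event) or "comment" (conversation note)
    detail: str


@dataclass
class PairedSessionLogger:
    """Keeps behavioral events and conversation notes on one shared timeline."""
    entries: List[LogEntry] = field(default_factory=list)
    start: float = field(default_factory=time.monotonic)

    def behavior(self, detail: str) -> None:
        # Quantitative event, e.g. a menu selection or task completion.
        self.entries.append(LogEntry(time.monotonic() - self.start, "behavior", detail))

    def comment(self, detail: str) -> None:
        # Note on what the paired users said to each other at this moment.
        self.entries.append(LogEntry(time.monotonic() - self.start, "comment", detail))

    def save(self, path: str) -> None:
        # Write the combined timeline so behavior and talk can be read side by side.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["elapsed_s", "kind", "detail"])
            for entry in self.entries:
                writer.writerow([f"{entry.elapsed_s:.1f}", entry.kind, entry.detail])


# Example use during one scenario step (entirely illustrative):
log = PairedSessionLogger()
log.behavior("Task 3: opened the Billing menu")
log.comment("P1 to P2: 'I thought that would be under Accounts'")
log.save("session_log.csv")
```

      Keeping both kinds of entries in one ordered log is what lets an analyst line up a puzzling behavioral event with whatever the pair was saying at that moment.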


      • Published in

        Interactions, Volume 2, Issue 3
        July 1995
        69 pages
        ISSN:1072-5520
        EISSN:1558-3449
        DOI:10.1145/208666

        Copyright © 1995 ACM

        Publisher

        Association for Computing Machinery

        New York, NY, United States

