Examining the benefits and challenges of using audience response systems: A review of the literature
Introduction
An Audience Response System (ARS) allows an entire class to respond to multiple-choice questions displayed on a screen. After students click in their responses using remote devices, the results are instantly collected, summarized, and presented to the class in visual format, usually a histogram. Responses are always anonymous to peers, but the teacher can associate ARS devices with individual students for testing purposes. With feedback from the class, an instructor can orchestrate peer or classroom discussion about the concepts being covered. ARSs have been used to improve student interaction, engagement, and attention (e.g., Draper & Brown, 2004; Hinde & Hunt, 2006), increase attendance (e.g., Bullock et al., 2002), stimulate peer and class discussion (e.g., Pelton & Pelton, 2006), provide feedback for both students and instructors in order to improve instruction (e.g., Caldwell, 2007), and improve learning performance (e.g., El-Rady, 2006).
The purpose of this review is to provide a current, comprehensive synthesis of research on ARSs from 2000 to 2007 in order to guide educators and future researchers. Previous research reviews (Caldwell, 2007; Fies & Marshall, 2006; Judson & Sawada, 2002; Simpson & Oliver, 2007) are somewhat dated and/or limited in coverage and scope. Key topics encompassed in the current review include the history of ARSs, labeling and terminology, previous literature reviews, benefits and challenges when using ARSs, and suggestions for further investigation.
When ARSs were first introduced at Stanford University in 1966, they were expensive, did not function well, and were difficult to use (Abrahamson, 2006; Judson & Sawada, 2002; Judson & Sawada, 2006). In 1985, a much less expensive prototype, known as Classtalk I, was tested and generally well received by students and teachers at Christopher Newport University. Although ARSs became commercially available from 1992 to 1999 (Abrahamson, 2006; Beatty, 2004), the cost was still prohibitive for widespread distribution. In 1999, a new generation of more affordable, infrared ARSs became available. Extensive use of ARSs began in 2003 (Abrahamson, 2006; Judson & Sawada, 2002; Judson & Sawada, 2006), and today numerous secondary schools, colleges, and universities use this tool (Abrahamson, 2006).
A comprehensive review of the literature reveals no fewer than 26 different labels for ARSs (see Kay (2008a) for a complete list). The most commonly used terms include: audience response system (n = 17 papers), personal response system (n = 7 papers), electronic voting system (n = 5 papers), and student response system (n = 4 papers). One key issue with inconsistent labeling is the difficulty it poses in locating and staying current with the latest research. For example, four relatively recent reviews on ARSs (Caldwell, 2007; Fies & Marshall, 2006; Judson & Sawada, 2002; Simpson & Oliver, 2007) referenced 16–25 studies per review, yet this quantity represents only one quarter to one third of the peer-reviewed articles available on ARSs from 2000 to 2007.
Four literature reviews have been completed on ARSs (Caldwell, 2007; Fies & Marshall, 2006; Judson & Sawada, 2002; Simpson & Oliver, 2007). Judson and Sawada (2002) provided a summary of ARS use up until 1998, but their review included only eight peer-reviewed references. Because prevalent use of ARSs began after 2003, Judson and Sawada’s (2002) analysis is dated. Fies and Marshall (2006) examined methods used to assess ARSs; however, their review included only 16 peer-reviewed studies, only two of which were published after 2004. Therefore, some of their conclusions are questionable. For example, they claimed that few studies reported the use of ARSs for formative assessment, yet since 2004, 16 new studies employing formative assessment have been completed. A more recent review by Simpson and Oliver (2007) analyzed more than 40 papers. However, only 17 of the articles cited were from peer-reviewed journals, with the majority of the results based on five key references. In addition, the impact of ARSs on learning was not examined in detail.
The most current and comprehensive review was conducted by Caldwell (2007), who analyzed 25 peer-reviewed articles, many of which were published after 2000. Caldwell’s analysis focused on identifying the primary users of ARSs, articulating the rationale for using ARSs, exploring questioning strategies used with ARSs, and identifying best practices. Nevertheless, few details were offered with respect to the impact of ARSs on student learning.
In summary, a more comprehensive review of the ARS literature is needed in order to present a current and representative summary of the benefits and challenges experienced when using this new technology.
Overview
Several measures were taken to address some of the shortcomings of previous ARS research reviews. First, a comprehensive search of peer-reviewed journals, but not conference papers or reports, was completed based on the 26 labels for ARSs (Kay, 2008a). This approach uncovered a total of 67 papers and chapters. Given that previous literature reviews included no more than 25 peer-reviewed papers, one can be reasonably assured that the current review of ARSs is comprehensive.
Of the 67 studies …
Overall attitudes
According to Judson and Sawada (2002), prior to 1992, student attitudes toward ARSs were very positive, although much of the evidence presented was based on informal student feedback. However, more recent studies have offered considerable quantitative and qualitative evidence indicating that students are positive about the use of ARSs in higher education (Caldwell, 2007; Durbin & Durbin, 2006; Fies & Marshall, 2006; Hu et al., 2006; Simpson & Oliver, 2007). In the literature review …
Methodology for investigating ARSs
A number of authors have argued that there are several key problems with current research on ARSs, including a lack of systematic research; a bias toward anecdotal, qualitative data; an excessive focus on attitudes as opposed to learning and cognitive processes; and samples derived from limited educational settings. Each of these limitations will be discussed.
References (68)
- et al. The influence of an audience response system on knowledge retention: An application to resident education. American Journal of Obstetrics and Gynecology (2005).
- A brief history of networked classrooms: Effects, cases, pedagogy, and implications.
- et al. Infusing active learning into the large-enrolment biology class: Seven strategies, from the simple to complex. Cell Biology Education (2005).
- Reflections on the use of ARS with small groups.
- Beatty, I. (2004). Transforming student learning with classroom communication systems. EDUCAUSE Research Bulletin, …
- et al. Designing effective questions for classroom response system teaching. American Journal of Physics (2006).
- Bergtrom, G. (2006). Clicker sets as learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2, …
- Eight years of asking questions.
- Near real-time assessment of student learning and understanding in biology courses. BioScience (2004).
- et al. Enhancing the student–instructor interaction frequency. The Physics Teacher (2002).
- Using wireless keypads in lecture classes. The Physics Teacher.
- The trial of an audience response system to facilitate problem-based learning in legal education.
- Clickers in the large classroom: Current research and best-practice tips. Life Sciences Education.
- Investigating the effects of group response systems on student satisfaction, learning, and engagement in accounting education. Issues in Accounting Education.
- Classroom voting in mathematics. Mathematics Teacher.
- Peer instruction: Ten years of experience and results. American Journal of Physics.
- Practical lessons from four years of using an ARS in every lecture of a large class.
- Using a personal response system for promoting student interaction. Teaching Mathematics and Its Applications.
- Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning.
- Electronically enhanced classroom interaction. Australian Journal of Educational Technology.
- Assessing-to-learn: Formative assessment in physics instruction. The Physics Teacher.
- Anonymous polling in an engineering tutorial environment: A case study.
- Peer instruction: Results from a range of classrooms. The Physics Teacher.
- Classroom response systems: A review of the literature. Journal of Science Education and Technology.
- Factors affecting educational innovation with in-class electronic response systems. Australasian Journal of Educational Technology.
- Real-time analysis of student comprehension: An assessment of electronic student response technology in an introductory earth science course. Journal of Geoscience Education.
- Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics.
- Manna from heaven or clickers from hell. Journal of College Science Teaching.
- Using the personal response system to enhance student learning: Some evidence from teaching economics.
- ARS evolution: Reflections and recommendations.
- Wireless interactive teaching by using keypad-based ARS.