Abstract
Education, industry, and the military rely on relevant and effective performance assessment to make key decisions. Qualification, promotion, advancement, hiring, firing, and training decisions are just some of the functions that depend on sound performance assessment. This chapter explores issues, tools, and techniques for ensuring that performance assessment meets the goals of decision makers. Important considerations include the task environment in which assessment takes place, the validity of the measures selected, the diversity and scope of the measures, and often the local political environment in which assessments are made. Unfortunately, in education especially, assessment policy is a matter of intense political debate, and that debate is often sustained by misinformation. For example, traditional paper-and-pencil assessment techniques have come under fire from critics who question whether they retain the relevance they once had. Meanwhile, simulation-based training technologies in industry and the military have placed greater emphasis on performance assessment systems that demonstrate real-world work relevance. The chapter examines these topics.
References
*American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anderson, J. R., Reder, L. M., & Simon, H. A. (2000). Applications and misapplications of cognitive psychology to mathematics education. Texas Educational Review, 6.
*Andrews, D., Nullmeyer, R., Good, J., & Fitzgerald, P. (2008). Measurement of learning processes in pilot simulation. In E. Baker, J. Dickieson, W. Wulfeck, & H. O’Neil (Eds.), Assessment of problem solving using simulations (pp. 273–288). New York: Lawrence Erlbaum Associates.
*Baker, E. L., Dickieson, J. L., Wulfeck, W. H., & O’Neil, H. F. (Eds.). (2008). Assessment of problem solving using simulations. New York: Lawrence Erlbaum Associates.
Baird, H. (1997). Performance assessment for science teachers. Salt Lake City, UT: Utah State Office of Education.
*Bell, H. H., Andrews, D. H., & Wulfeck, W. H., II. (2010). Behavioral task analysis. In K. H. Silber & W. R. Foshay (Eds.), Handbook of improving performance in the workplace, vol. 1: Instructional design and training delivery (pp. 184–226). San Francisco, CA: Pfeiffer/International Society for Performance Improvement.
Bloom, B. S. (1956). Taxonomy of educational objectives. Boston: Allyn and Bacon.
Byrne, A. J., & Greaves, J. D. (2001). Assessment instruments used during anesthetic simulation: Review of published studies. British Journal of Anaesthesia, 86(3), 445–450.
Department of Education. (2010). Overview information; Race to the top fund assessment program; Notice inviting applications for new awards for fiscal year (FY) 2010. Federal Register, 75(68), April 9, 2010, p. 18171.
Diaper, G. (1990). The Hawthorne effect: A fresh examination. Educational Studies, 16, 261–267.
Ellis, J. A., & Wulfeck, W. H. (1982). Handbook for testing in Navy schools. Special Report 83-2, San Diego, CA: Navy Personnel Research and Development Center. DTIC Accession Number: ADA122479.
Glaser, R. (1963). Instructional technology and the measurement of learning outcomes: Some questions. American Psychologist, 18, 519–521.
Glaser, R., & Klaus, D. J. (1962). Proficiency measurement: Assessing human performances. In R. M. Gagné (Ed.), Psychological principles in systems development (pp. 419–474). New York: Holt, Rinehart, & Winston.
Hays, R. T., & Singer, M. J. (1989). Simulation fidelity in training system design: Bridging the gap between reality and training. New York: Springer-Verlag.
Joint Committee on Standards for Educational Evaluation. (1988). The personnel evaluation standards: How to assess systems for evaluating educators. Newbury Park, CA: Sage Publications.
Joint Committee on Standards for Educational Evaluation. (2003). The student evaluation standards: How to improve evaluations of students. Newbury Park, CA: Corwin Press.
Lane, N. E. (1986). Issues in performance measurement for military aviation with applications to air combat maneuvering. Technical Report: NTSC TR-86-008. Naval Training Systems Center. DTIC Accession Number: ADA172986.
Lesgold, A. (2008). Assessment to steer the course of learning. In E. Baker, J. Dickieson, W. Wulfeck, & H. O’Neil (Eds.), Assessment of problem solving using simulations (pp. 19–36). New York: Lawrence Erlbaum Associates.
Madaus, G. F., & O’Dwyer, L. M. (1999). Short history of performance assessment: Lessons learned. Phi Delta Kappan, 80(9), 688–695.
Meister, D. (1999). Measurement in aviation systems. In D. J. Garland, J. A. Wise, & V. D. Hopkin (Eds.), Handbook of aviation human factors (pp. 34–49). Mahwah, NJ: Lawrence Erlbaum Associates.
Merrill, M. D. (1994). Instructional design theory. Englewood Cliffs, NJ: Educational Technology Publications.
*Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Shivers, C. H. (1998). Halos, horns and Hawthorne: Potential flaws in the evaluation process. Professional Safety, 43(3), 38–41.
Smith, P. C., & Kendall, L. M. (1963). Retranslation of expectations: An approach to the construction of unambiguous anchors to rating scales. Journal of Applied Psychology, 47, 149–155.
Staal, M. A. (2004). Stress, cognition, and human performance: A literature review and conceptual framework. NASA Tech. Rep. NASA/TM-2004-212824. Moffett Field, CA: Ames Research Center. Retrieved from http://human-factors.arc.nasa.gov/flightcognition/Publications/IH_054_Staal.pdf
Stevens, A., & Collins, A. (1977). The goal structure of a Socratic tutor. Proceedings of Association for Computing Machinery National Conference. Seattle, Washington.
Swezey, R. W., & Andrews, D. H. (2001). Readings in training and simulation: A 30-year perspective. Santa Monica, CA: Human Factors and Ergonomics Society.
U.S. Congress, Office of Technology Assessment. (1992). Testing in American schools: Asking the right questions, OTA-SET-519. Washington, DC: U.S. Government Printing Office.
Wallace, S. R. (1965). Criteria for what? American Psychologist, 20, 411–417.
Webb, N. M., Shavelson, R. J., & Haertel, E. H. (2006). Reliability coefficients and generalizability theory. In C. R. Rao (Ed.), Handbook of statistics (Volume on Psychometrics, Vol. 26, pp. 81–124). Amsterdam, The Netherlands: Elsevier.
Wiggins, G. P. (1993). Assessing student performance. San Francisco, CA: Jossey-Bass Publishers.
Wulfeck, W. H., & Wetzel-Smith, S. K. (2008). Use of visualization to improve high-stakes problem solving. In E. L. Baker, J. Dickieson, W. H. Wulfeck, & H. F. O’Neil (Eds.), Assessment of problem solving using simulations (pp. 223–238). New York: Lawrence Erlbaum Associates.
Wulfeck, W. H., & Wetzel-Smith, S. K. (2010). Training incredibly complex tasks. In P. E. O’Connor & J. V. Cohn (Eds.), Human performance enhancement in high risk environments. Santa Barbara, CA: ABC-CLIO.
Wulfeck, W. H., Wetzel-Smith, S. K., & Dickieson, J. L. (2004). Interactive multisensory analysis training. In NATO RTO Human Factors and Medicine Panel (Eds.), Symposium on advanced technologies for military training. Neuilly-sur-Seine Cedex, France: North Atlantic Treaty Organization Research and Technology Agency.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage.
Acknowledgments
Portions of this work were supported by the US Navy. The views and opinions expressed herein are those of the authors, and are not to be construed as official or as representing the Department of Defense, or the Department of the Navy. In addition, we thank two anonymous reviewers for their helpful comments that improved this manuscript.
Copyright information
© 2014 Springer Science+Business Media New York
About this chapter
Cite this chapter
Andrews, D.H., Wulfeck, W.H. (2014). Performance Assessment: Something Old, Something New. In: Spector, J., Merrill, M., Elen, J., Bishop, M. (eds) Handbook of Research on Educational Communications and Technology. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3185-5_24
DOI: https://doi.org/10.1007/978-1-4614-3185-5_24
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-3184-8
Online ISBN: 978-1-4614-3185-5
eBook Packages: Humanities, Social Sciences and Law; Education (R0)