Extending LMS to Support IRT-Based Assessment Test Calibration

  • Conference paper
Technology Enhanced Learning. Quality of Teaching and Educational Reform (TECH-EDUCATION 2010)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 73)

Abstract

Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve assessment quality and deliver reliable results of examinee performance. This paper introduces a methodological and architectural framework that embeds a CAA tool in a Learning Management System (LMS) to assist test developers in refining the items that make up assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices produced by the IRT analysis. By applying these rules, the LMS can detect items with various discrepancies and flag them for content review. Repeated execution of this procedure can improve the overall efficiency of the testing process.
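To make the flagging procedure concrete, here is a minimal Python sketch of what validity rules over IRT item indices could look like. It is not taken from the paper: the `Item` structure, the rule names, and the thresholds are illustrative assumptions. Items are presumed already calibrated under the standard three-parameter logistic (3PL) IRT model, in which the probability of a correct response at ability level θ is c + (1 − c) / (1 + e^{−a(θ − b)}).

```python
import math
from dataclasses import dataclass


@dataclass
class Item:
    """Calibrated 3PL item parameters: discrimination a, difficulty b, guessing c."""
    item_id: str
    a: float
    b: float
    c: float


def p_correct(item: Item, theta: float) -> float:
    """3PL model: probability that an examinee of ability theta answers correctly."""
    return item.c + (1.0 - item.c) / (1.0 + math.exp(-item.a * (theta - item.b)))


# Hypothetical validity rules over the IRT indices; in the framework the paper
# describes, the test developers would choose the indices and thresholds.
RULES = {
    "low discrimination": lambda it: it.a < 0.5,
    "extreme difficulty": lambda it: abs(it.b) > 3.0,
    "high guessing":      lambda it: it.c > 0.35,
}


def flag_items(items):
    """Return (item_id, violated rules) pairs for items needing content review."""
    flagged = []
    for it in items:
        reasons = [name for name, rule in RULES.items() if rule(it)]
        if reasons:
            flagged.append((it.item_id, reasons))
    return flagged


if __name__ == "__main__":
    pool = [
        Item("Q1", a=1.2, b=0.4, c=0.20),   # well-behaved item
        Item("Q2", a=0.3, b=3.5, c=0.40),   # violates all three rules
    ]
    print(f"P(correct | theta=0) for Q1: {p_correct(pool[0], 0.0):.3f}")
    for item_id, reasons in flag_items(pool):
        print(f"{item_id} flagged: {', '.join(reasons)}")
```

Running such a check after each calibration pass and routing flagged items back for revision mirrors the iterative review loop the abstract describes.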

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fotaris, P., Mastoras, T., Mavridis, I., Manitsaris, A. (2010). Extending LMS to Support IRT-Based Assessment Test Calibration. In: Lytras, M.D., et al. Technology Enhanced Learning. Quality of Teaching and Educational Reform. TECH-EDUCATION 2010. Communications in Computer and Information Science, vol 73. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13166-0_75

  • DOI: https://doi.org/10.1007/978-3-642-13166-0_75

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13165-3

  • Online ISBN: 978-3-642-13166-0

  • eBook Packages: Computer Science, Computer Science (R0)
