Abstract
Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are increasingly being adopted in academic environments to improve assessment quality and deliver reliable measures of examinee performance. This paper introduces a methodological and architectural framework that embeds a CAA tool in a Learning Management System (LMS) to assist test developers in refining the items that make up assessment tests. An Item Response Theory (IRT)-based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules over the statistical indices produced by the IRT analysis; by applying these rules, the LMS detects items with discrepancies and flags them for content review. Repeating this procedure can improve the overall efficiency of the testing process.
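To make the screening step concrete, the sketch below assumes items calibrated under the three-parameter logistic (3PL) model, P_i(θ) = c_i + (1 − c_i) / (1 + exp(−a_i(θ − b_i))), and checks each item's estimated parameters against developer-defined validity rules. The rule labels, threshold values, and helper names are illustrative assumptions, not the paper's actual rule syntax or interface.

```python
# Minimal sketch of rule-based item screening over IRT indices.
# Thresholds and rule names below are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Item:
    """Calibrated 3PL estimates for one assessment item."""
    item_id: str
    a: float  # discrimination
    b: float  # difficulty
    c: float  # pseudo-guessing (lower asymptote)
    flags: list = field(default_factory=list)

# Validity rules: (label, predicate that is True when the item is acceptable).
VALIDITY_RULES = [
    ("low discrimination", lambda it: it.a >= 0.5),
    ("extreme difficulty", lambda it: -3.0 <= it.b <= 3.0),
    ("excessive guessing", lambda it: it.c <= 0.35),
]

def screen_items(items):
    """Return items violating at least one rule, with the reasons attached."""
    flagged = []
    for it in items:
        it.flags = [label for label, ok in VALIDITY_RULES if not ok(it)]
        if it.flags:
            flagged.append(it)
    return flagged

if __name__ == "__main__":
    pool = [
        Item("Q1", a=1.2, b=0.4, c=0.20),   # passes every rule
        Item("Q2", a=0.3, b=2.1, c=0.15),   # weak discriminator
        Item("Q3", a=0.9, b=-3.8, c=0.45),  # too easy and guessable
    ]
    for it in screen_items(pool):
        print(f"{it.item_id}: review content ({', '.join(it.flags)})")
```

Running the sketch flags Q2 and Q3 with their violated rules, mirroring the paper's loop of calibrate, screen, and revise flagged items before the next administration.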
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Fotaris, P., Mastoras, T., Mavridis, I., Manitsaris, A. (2010). Extending LMS to Support IRT-Based Assessment Test Calibration. In: Lytras, M.D., et al. (eds.) Technology Enhanced Learning. Quality of Teaching and Educational Reform. TECH-EDUCATION 2010. Communications in Computer and Information Science, vol. 73. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13166-0_75
Print ISBN: 978-3-642-13165-3
Online ISBN: 978-3-642-13166-0