Abstract
This study examines the prevalence, contexts, and demographic correlates of monotonic response patterns (MRPs) in online student evaluations. Results of two-level hierarchical generalized linear models show evidence of careless monotonic responding on a survey administered to students enrolled in a university-level foreign language course in the Republic of Korea. All else being equal, freshmen and students in classes with fewer survey participants were more likely to produce monotonic response patterns in course evaluations. Possible factors at work in generating MRPs are identified and discussed. The severity of the MRP problem in online ratings underscores the need for administrators to weigh possible validity threats in student evaluations before using them to inform instructional and administrative decisions. It is also important to design course evaluation surveys so as to minimize careless responding and to identify means of eliciting more thoughtful responses from college students.
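To make the construct concrete, the sketch below flags monotonic response patterns in Likert-scale ratings. This is an illustration only, with hypothetical data and a hypothetical `is_monotonic` helper; it is not the authors' procedure, which models the likelihood of MRPs with two-level hierarchical generalized linear models rather than simple rule-based flagging.

```python
# Illustration (not the paper's method): flag respondents whose item
# ratings never change direction -- e.g. straight-lining (all identical
# values) or steadily rising/falling values across items.

def is_monotonic(responses):
    """Return True if the rating sequence is non-decreasing or
    non-increasing (constant sequences count as both)."""
    pairs = list(zip(responses, responses[1:]))
    nondecreasing = all(a <= b for a, b in pairs)
    nonincreasing = all(a >= b for a, b in pairs)
    return nondecreasing or nonincreasing

# Hypothetical ratings on a 5-point scale across ten evaluation items.
students = {
    "s1": [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],  # straight-lining -> MRP
    "s2": [1, 2, 2, 3, 3, 4, 4, 5, 5, 5],  # monotone increase -> MRP
    "s3": [4, 2, 5, 3, 4, 1, 5, 2, 3, 4],  # varied -> not an MRP
}
flagged = [sid for sid, r in students.items() if is_monotonic(r)]
print(flagged)  # -> ['s1', 's2']
```

A rule like this identifies candidate careless responses; relating their prevalence to student- and class-level covariates (e.g., class standing, number of survey participants) is what the hierarchical models in the study accomplish.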
Acknowledgements
This research was partly funded by Honam University in 2015. The first author would like to express her thanks to the editor and anonymous reviewers of Higher Education and Dr. Donald Bellomy of the Institute of Humanities and Social Studies at Honam University for their constructive feedback.
Appendix
About this article
Cite this article
Park, H. S., & Cheong, Y. F. (2018). Correlates of monotonic response patterns in online ratings of a university course. Higher Education, 76, 101–113. https://doi.org/10.1007/s10734-017-0199-9