Designing a rubric for feedback on code quality in programming courses

ABSTRACT
We investigate how to create a rubric for giving students feedback on code quality in introductory programming courses. Based on an existing model of code quality and a set of preliminary design rules, we constructed a rubric and put it through several design iterations. Each iteration focused on different aspects of the rubric, and solutions to various programming assignments were used to evaluate it. The rubric appears to be complete for the assignments it was tested on. We articulate additional design aspects that can be used when drafting new feedback rubrics for programming courses.