Research article
DOI: 10.1145/2999541.2999555

Designing a rubric for feedback on code quality in programming courses

Published: 24 November 2016

ABSTRACT

We investigate how to create a rubric that can be used to give feedback on code quality to students in introductory programming courses. Based on an existing model of code quality and a set of preliminary design rules, we constructed a rubric and put it through several design iterations. Each iteration focused on different aspects of the rubric, and solutions to various programming assignments were used to evaluate it. The rubric appears to be complete for the assignments it was tested on. We articulate additional design aspects that can be used when drafting new feedback rubrics for programming courses.
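The paper treats a rubric as a structured feedback instrument: a set of criteria, each with ordered performance-level descriptors. As a purely illustrative sketch in Python (the criteria, descriptors, and function names below are hypothetical and are not taken from the paper's rubric), such an instrument can be represented as data and rendered into feedback text:

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        """One rubric criterion; levels are ordered from weakest to strongest."""
        name: str
        levels: list[str]

    # Hypothetical criteria loosely inspired by common code-quality topics;
    # NOT the rubric developed in the paper.
    RUBRIC = [
        Criterion("Names", [
            "Names are meaningless or misleading.",
            "Most names describe their role, but some are cryptic.",
            "All names clearly convey purpose and scope.",
        ]),
        Criterion("Decomposition", [
            "All logic sits in one long block of code.",
            "Some logic is split into functions, but units mix concerns.",
            "Each function does one thing and is reused where appropriate.",
        ]),
    ]

    def feedback(scores: dict[str, int]) -> str:
        """Render the selected level descriptor for each criterion as feedback."""
        lines = []
        for criterion in RUBRIC:
            level = scores[criterion.name]
            lines.append(f"{criterion.name}: {criterion.levels[level]}")
        return "\n".join(lines)

    print(feedback({"Names": 1, "Decomposition": 2}))

Representing the rubric as data rather than prose makes it easy to revise descriptors between design iterations, which is the kind of iterative refinement the paper describes.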


Published in

Koli Calling '16: Proceedings of the 16th Koli Calling International Conference on Computing Education Research
November 2016, 189 pages
ISBN: 9781450347709
DOI: 10.1145/2999541

          Copyright © 2016 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Koli Calling '16 paper acceptance rate: 21 of 57 submissions (37%). Overall acceptance rate: 80 of 182 submissions (44%).
