DOI: 10.1145/3582580.3582600

Semi-Automatic Short-Answer Grading Tools for Thai Language using Natural Language Processing

Published: 05 May 2023

ABSTRACT

The past decade has witnessed enormous advances in online educational resources, one noteworthy development being automatic learning platforms. The introduction of this technology has raised questions about its effectiveness in helping educators improve student engagement and evaluate the achievement of learning outcomes. While open-ended questions are valuable for assessing learners' outcomes, they can considerably increase educators' workload in large classes. We have experimented with a semi-automatic method to help grade short open-ended questions answered in the Thai language. Our method employed Keyword Matching and unsupervised document grouping, and fixed types of questions were tested using different algorithms. Keyword Matching was found to be effective for a relatively fixed, yet open-ended, set of answers; for non-fixed types of answers, Document Clustering proved more suitable. In generating the grading tools, we adopted three methods: Keyword Matching; Sentence Vector Similarity Ranking; and Document Clustering with TF-IDF and K-Means. The algorithms were found to be useful for online learning and for grading specific content-based answers, which in turn may guide educators who wish to elicit information in order to provide feedback.
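The abstract names Sentence Vector Similarity Ranking as one of the three grading methods. The paper's actual implementation is not given here, but the underlying idea can be illustrated with a minimal, stdlib-only sketch: represent each answer as a TF-IDF vector and rank student answers by cosine similarity to a reference answer. Note that Thai script has no spaces between words, so a real pipeline would first need a Thai word segmenter (e.g. from a library such as PyThaiNLP); in this hypothetical example, English toy answers stand in as pre-segmented token lists.

```python
# Illustrative sketch (not the authors' code): rank student answers by
# TF-IDF cosine similarity to a reference answer. Toy data is assumed
# to be already word-segmented, as Thai text would have to be.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Reference answer first, then three student answers (pre-segmented).
docs = [
    ["photosynthesis", "converts", "light", "into", "chemical", "energy"],
    ["light", "is", "converted", "into", "chemical", "energy"],
    ["plants", "store", "chemical", "energy"],
    ["the", "mitochondria", "is", "the", "powerhouse"],
]
vecs = tfidf_vectors(docs)
ref, answers = vecs[0], vecs[1:]
scores = [cosine(ref, v) for v in answers]
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
```

In a semi-automatic workflow such as the one the abstract describes, a ranking like this would not assign grades directly; it would order answers by closeness to the reference so an educator can review and grade the borderline cases efficiently.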


Published in

ICETM '22: Proceedings of the 2022 5th International Conference on Education Technology Management
December 2022, 415 pages
ISBN: 9781450398015
DOI: 10.1145/3582580

Copyright © 2022 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article; refereed limited
