G-Asks: An Intelligent Automatic Question Generation System for Academic Writing Support

Authors

  • Ming Liu School of Electrical and Information Engineering, University of Sydney
  • Rafael A. Calvo School of Electrical and Information Engineering, University of Sydney
  • Vasile Rus Department of Computer Science, University of Memphis

DOI:

https://doi.org/10.5087/dad.2012.205

Abstract

Many electronic feedback systems have been proposed for writing support. However, most of these systems aim only at supporting writing to communicate rather than writing to learn, as in the case of literature review writing. Trigger questions are a potential form of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, involving 24 human supervisors and 33 research students, in an Engineering Research Method course at The University of Sydney, and compared questions generated by G-Asks with human-generated questions. The results indicate that G-Asks can generate questions as useful as those of human supervisors (`useful' being one of five question quality measures), while significantly outperforming Human Peer and Generic Questions on most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors' questions, and discussed how the human supervisors generate such questions from the source text.

Published

2012-03-16

Issue

Section

Articles