
Markov Decision Process for MOOC Users Behavioral Inference

  • Conference paper
Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 11475))

Abstract

Studies on massive open online course (MOOC) users discuss the existence of typical profiles and their impact on students' learning process. However, defining these typical behaviors, as well as classifying users accordingly, is a difficult task. In this paper we propose two methods to model MOOC users' behavior from their log data. We cast their behavior in a Markov decision process (MDP) framework, associate each user's intentions with the MDP reward, and argue that this allows us to classify users.
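The framing described above can be sketched in a few lines: estimate an MDP transition matrix from log sequences, posit candidate reward functions encoding user intentions, and label each user with the intention whose reward best explains their trajectory. Everything below (state names, reward values, user logs) is a hypothetical stand-in for illustration, not the paper's actual model:

```python
import numpy as np

# Hypothetical MOOC event types extracted from log data (illustrative only).
STATES = ["video", "quiz", "forum", "idle"]
S = {s: i for i, s in enumerate(STATES)}

def transition_matrix(trajectories):
    """Estimate an MDP transition matrix from observed log sequences."""
    counts = np.ones((len(STATES), len(STATES)))  # Laplace smoothing
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[S[a], S[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def discounted_return(traj, reward, gamma=0.9):
    """Discounted reward accumulated along one user's trajectory."""
    return sum(gamma**t * reward[S[s]] for t, s in enumerate(traj))

# Candidate reward vectors encoding hypothetical user intentions.
REWARDS = {
    "completer": np.array([1.0, 2.0, 0.5, -1.0]),  # values videos and quizzes
    "browser":   np.array([0.5, -0.5, 0.2, 1.0]),  # tolerates idling, avoids quizzes
}

def classify(traj):
    """Assign the intention whose reward best explains the trajectory."""
    return max(REWARDS, key=lambda k: discounted_return(traj, REWARDS[k]))

logs = {
    "alice": ["video", "quiz", "video", "quiz"],
    "bob":   ["video", "idle", "idle", "forum"],
}
labels = {user: classify(traj) for user, traj in logs.items()}
# labels → {"alice": "completer", "bob": "browser"}
```

A full inverse-reinforcement-learning treatment would instead infer the reward functions from the trajectories rather than positing them, but the classification principle (intention = reward that best explains observed behavior) is the same.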

Supported by ANR-15-IDFN-0003-04.



Author information

Corresponding author

Correspondence to Firas Jarboui.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Jarboui, F. et al. (2019). Markov Decision Process for MOOC Users Behavioral Inference. In: Calise, M., Delgado Kloos, C., Reich, J., Ruipérez-Valiente, J., Wirsing, M. (eds) Digital Education: At the MOOC Crossroads Where the Interests of Academia and Business Converge. EMOOCs 2019. Lecture Notes in Computer Science, vol 11475. Springer, Cham. https://doi.org/10.1007/978-3-030-19875-6_9

  • DOI: https://doi.org/10.1007/978-3-030-19875-6_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-19874-9

  • Online ISBN: 978-3-030-19875-6

  • eBook Packages: Computer Science, Computer Science (R0)
