Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality

Tonya B. Amankwatia
ISBN13: 9781522504665|ISBN10: 1522504664|EISBN13: 9781522504672
DOI: 10.4018/978-1-5225-0466-5.ch006
Cite Chapter

MLA

Amankwatia, Tonya B. "Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality." Revolutionizing Modern Education through Meaningful E-Learning Implementation, edited by Badrul H. Khan, IGI Global, 2016, pp. 109-130. https://doi.org/10.4018/978-1-5225-0466-5.ch006

APA

Amankwatia, T. B. (2016). Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality. In B. Khan (Ed.), Revolutionizing Modern Education through Meaningful E-Learning Implementation (pp. 109-130). IGI Global. https://doi.org/10.4018/978-1-5225-0466-5.ch006

Chicago

Amankwatia, Tonya B. "Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality." In Revolutionizing Modern Education through Meaningful E-Learning Implementation, edited by Badrul H. Khan, 109-130. Hershey, PA: IGI Global, 2016. https://doi.org/10.4018/978-1-5225-0466-5.ch006


Abstract

Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly to improve quality. Are there new opportunities to expand user and stakeholder input, or to involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e-learning quality, especially in Massive Open Online Courses (MOOCs). After reviewing selected evaluation paradigms, models, and methods, this chapter proposes a possible role for crowdsourced input. It examines crowd definition, affordances, and problems as the basis of a taxonomical framework with possible applications for e-learning. The goal is to provide a reference for advancing the discussion and examination of crowdsourced input.
