Towards automated assessment of participation and value creation in networked learning

Authors

  • Christopher Deneen, Centre for Change and Complexity in Learning, Education Futures, University of South Australia
  • Maarten De Laat, Centre for Change and Complexity in Learning, Education Futures, University of South Australia, https://orcid.org/0000-0003-2243-2667
  • Alrike Claassen, Centre for Change and Complexity in Learning, Education Futures, University of South Australia
  • Andrew Zamecnik, Centre for Change and Complexity in Learning, Education Futures, University of South Australia

Keywords:

MOOCs, AI, LLMs, Value Creation, Qualitative Analysis

Abstract

This paper investigates the integration of artificial intelligence (AI) in Massive Open Online Courses (MOOCs) to improve student engagement and learning outcomes. MOOCs have transformed the landscape of online education, offering wide accessibility but often at the cost of high dropout rates and a limited ability to measure student engagement. Their expansive user base has made it difficult to move beyond conventional metrics such as course completion and quiz scores, which are insufficient for understanding the multifaceted nature of student engagement. The study addresses these challenges through a novel method: applying a large language model (LLM) to develop a richer and more meaningful understanding of student engagement and experiences in MOOCs.
The research is grounded in the networked learning (NL) research tradition, using the Community of Inquiry framework to explore social, cognitive, and teaching presence in online educational environments and to automate the assessment of student learning participation. The study focuses on social engagement and interaction, which are crucial for perceived value creation. Theoretical soundness, data targeting and quality, interpretation benchmarks, and robust data sampling and analysis strategies constitute the foundational pillars of the proposed predictive system. This framework guides the study's approach to addressing common MOOC issues, such as learner isolation and limited interaction, which are critical factors in students' perceived value and learning.
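The abstract does not specify how posts are coded in practice; the following is a minimal, hypothetical Python sketch of how a single discussion post might be labelled with one of the Community of Inquiry presences using an LLM. The model name, prompt wording, and use of the openai client are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: asking an LLM to assign a Community of Inquiry code
# to one discussion post. Model choice and prompt wording are assumptions.
from openai import OpenAI

COI_CODES = ["social presence", "cognitive presence", "teaching presence"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def code_post(post_text: str) -> str:
    """Label one post with a single CoI presence (illustrative only)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not the one used in the study
        messages=[
            {"role": "system",
             "content": ("You are a qualitative coder. Assign exactly one of "
                         f"these codes to the post: {', '.join(COI_CODES)}. "
                         "Reply with the code only.")},
            {"role": "user", "content": post_text},
        ],
        temperature=0,  # keep the coding output as consistent as possible
    )
    return response.choices[0].message.content.strip().lower()


# Example usage:
# print(code_post("Great point, Maria - I had not thought about it that way!"))
```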
The methodology involves a dual approach of human and LLM coding, testing the ability of LLMs to replicate the accuracy of human coding. The study was conducted in a smaller, non-massive open online course to test system effectiveness before applying it to larger MOOCs. Analysis indicates that an initial, relatively simple LLM-based model can match human coding with approximately 60% reliability, a significant step towards automating qualitative analysis in online education, with potential for improved results using more sophisticated LLM modelling.
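To make the reported reliability figure concrete, here is a minimal sketch of how LLM-assigned codes might be benchmarked against human codes. Percent agreement and Cohen's kappa are standard choices for this kind of comparison; the exact metric behind the 60% figure, and the example labels below, are assumptions for illustration.

```python
# Minimal sketch: comparing LLM-assigned codes against human codes.
# The label data below is invented purely to illustrate the calculation.
from sklearn.metrics import cohen_kappa_score

human_codes = ["social", "cognitive", "social", "teaching", "cognitive"]
llm_codes   = ["social", "cognitive", "teaching", "teaching", "social"]

# Simple percent agreement: share of posts where both coders gave the same code.
agreement = sum(h == l for h, l in zip(human_codes, llm_codes)) / len(human_codes)

# Cohen's kappa corrects percent agreement for chance agreement.
kappa = cohen_kappa_score(human_codes, llm_codes)

print(f"Percent agreement: {agreement:.0%}")  # 60% for this toy example
print(f"Cohen's kappa:     {kappa:.2f}")
```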
Key findings show that AI can effectively measure and enhance student engagement in MOOCs. The use of LLMs in coding qualitative data offers a reliable method for understanding student participation, which is vital for improving educational outcomes in online courses. The research highlights the potential of AI in transforming online education by providing deeper insights into student engagement and learning processes.
The study also provides insights into the interplay between AI-measured engagement and learners' perceptions of value, advancing the understanding of networked learning participation and its impact on educational outcomes. Implications extend to curriculum development, pedagogical strategies, and the broader field of online education. Future work includes expanding this methodology to more extensive MOOC environments and incorporating qualitative interviews to enrich the analysis.

Published

06-05-2024

How to Cite

Deneen, C., De Laat, M., Claassen, A., & Zamecnik, A. (2024). Towards automated assessment of participation and value creation in networked learning. Networked Learning Conference, 14(1). Retrieved from https://journals.aau.dk/index.php/nlc/article/view/8160