Towards automated assessment of participation and value creation in networked learning
DOI: https://doi.org/10.54337/nlc.v14i1.8160
Keywords: MOOCs, AI, LLMs, Value Creation, Qualitative Analysis
Abstract
This paper investigates the integration of artificial intelligence (AI) in Massive Open Online Courses (MOOCs) to improve student engagement and learning outcomes. MOOCs have transformed the landscape of online education, offering wide accessibility but at the price of high dropout rates and limited ability to measure student engagement. Given MOOCs' expansive user base, assessment has so far been restricted largely to conventional metrics such as course completion and quiz scores, which are insufficient for understanding the multifaceted nature of student engagement. The study addresses these challenges through a novel method of applying a large language model (LLM) to develop a more complex and meaningful understanding of student engagement and experiences in MOOCs.
The research is grounded in the networked learning (NL) research tradition by using the Community of Inquiry framework to explore social, cognitive, and teaching presence in online educational environments and automate the assessment of student learning participation. The study focuses on social engagement and interaction, crucial for perceived value creation. Theoretical soundness, data targeting and quality, interpretation benchmarks, and robust data sampling and analysis strategies constitute the foundational pillars of the proposed predictive system. This framework guides the study's approach to address common MOOC issues, such as learner isolation and limited interaction, which are critical factors in students' perceived value and learning.
The methodology involves a dual approach of human and LLM coding, testing whether LLMs can replicate the accuracy of human coding. The study was conducted in a small-scale (non-massive) open online course to test the system's effectiveness before applying it to larger MOOCs. Analysis indicates that an initial, relatively simple LLM model can match human coding with 60% reliability, representing a significant step forward in automating qualitative analysis in online education, with potential for enhanced results using more complex LLM modelling.
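The dual-coding comparison described above amounts to measuring inter-rater agreement between a human coder and an LLM. As an illustrative sketch only (not the authors' implementation), the minimal version of such a check computes raw percent agreement alongside a chance-corrected statistic such as Cohen's kappa; the presence labels below are hypothetical Community of Inquiry codes:

```python
# Illustrative sketch, not the study's actual pipeline: comparing human and
# LLM qualitative codes with simple inter-rater agreement measures.
from collections import Counter

def percent_agreement(human, llm):
    """Share of items where the two coders assigned the same code."""
    matches = sum(h == m for h, m in zip(human, llm))
    return matches / len(human)

def cohens_kappa(human, llm):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(human)
    po = percent_agreement(human, llm)  # observed agreement
    h_counts, m_counts = Counter(human), Counter(llm)
    labels = set(human) | set(llm)
    # expected agreement if both coders assigned labels at their marginal rates
    pe = sum(h_counts[l] * m_counts[l] for l in labels) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical codes for five forum posts (social/cognitive/teaching presence)
human = ["social", "cognitive", "social", "teaching", "cognitive"]
llm   = ["social", "cognitive", "teaching", "teaching", "social"]

print(percent_agreement(human, llm))        # 0.6
print(round(cohens_kappa(human, llm), 3))   # 0.412
```

Percent agreement alone overstates reliability when some codes dominate, which is why chance-corrected measures are standard practice in qualitative coding comparisons.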
Key findings show that AI can effectively measure and enhance student engagement in MOOCs. The use of LLMs in coding qualitative data offers a reliable method for understanding student participation, which is vital for improving educational outcomes in online courses. The research highlights the potential of AI in transforming online education by providing deeper insights into student engagement and learning processes.
The study also provides insights into the interplay of AI-measured engagement and learners' value perception, advancing the understanding of networked learning participation and its impact on educational outcomes. Implications extend to curriculum development, pedagogical strategies, and the broader field of online education. Future work includes expanding this methodology to more extensive MOOC environments and incorporating qualitative interviews to enrich the analysis.
License
Copyright (c) 2024 Christopher Deneen, Maarten De Laat, Alrike Claassen, Andrew Zamecnik
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
CC BY-NC-ND
This license enables reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator. CC BY-NC-ND includes the following elements:
BY: credit must be given to the creator.
NC: Only noncommercial uses of the work are permitted.
ND: No derivatives or adaptations of the work are permitted.