International Journal of Applied Information Systems
Foundation of Computer Science (FCS), NY, USA
Volume 12 - Issue 41
Published: September 2023
Authors: God’Swill Theophilus, Christopher Ifeanyi Eke
DOI: 10.5120/ijais2023451951
God’Swill Theophilus, Christopher Ifeanyi Eke. Machine Learning-based E-learners’ Engagement Level Prediction using Benchmark Datasets. International Journal of Applied Information Systems. 12, 41 (September 2023), 23-32. DOI=10.5120/ijais2023451951
@article{ 10.5120/ijais2023451951,
author = { God’Swill Theophilus and Christopher Ifeanyi Eke },
title = { Machine Learning-based E-learners’ Engagement Level Prediction using Benchmark Datasets },
journal = { International Journal of Applied Information Systems },
year = { 2023 },
volume = { 12 },
number = { 41 },
pages = { 23-32 },
doi = { 10.5120/ijais2023451951 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2023
%A God’Swill Theophilus
%A Christopher Ifeanyi Eke
%T Machine Learning-based E-learners’ Engagement Level Prediction using Benchmark Datasets
%J International Journal of Applied Information Systems
%V 12
%N 41
%P 23-32
%R 10.5120/ijais2023451951
%I Foundation of Computer Science (FCS), NY, USA
The wide adoption of e-learning, especially during and after the pandemic, has raised concerns about learners’ motivation and involvement. Recognizing e-learners’ engagement levels has become critical since there is little to no physical interaction. In this paper, a benchmark dataset was utilized to predict learners’ engagement levels in a blended e-learning system. The Information Gain feature ranker was leveraged to ascertain the significance of the features. This paper presents a comparative study of several machine learning algorithms, including Decision Tree, Naïve Bayes, Random Forest, Logistic Regression, Stochastic Gradient Descent, LogitBoost, Sequential Minimal Optimization, Voted Perceptron, and AdaptiveBoost. Each model was assessed using 10-fold cross-validation, and model performance was measured before and after feature selection. The predictive results show that Sequential Minimal Optimization outperformed the other models, attaining an accuracy of 90% with precision, recall, and F-measure values of 0.895, 0.897, and 0.895, respectively.
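A minimal sketch of the kind of pipeline the abstract describes (not the authors' code) is shown below, assuming a scikit-learn workflow: features are ranked by Information Gain (mutual information), and a subset of the compared classifiers is evaluated with 10-fold cross-validation. The dataset file, the "engagement_level" target column, and the number of retained features (k=10) are hypothetical placeholders, and SVC stands in for Sequential Minimal Optimization, which is an SVM training algorithm.

import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical benchmark dataset with an "engagement_level" target column.
data = pd.read_csv("engagement_dataset.csv")
X = data.drop(columns=["engagement_level"])
y = data["engagement_level"]

# Subset of the classifiers compared in the paper; SVC approximates SMO-trained SVM.
models = {
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Stochastic Gradient Descent": SGDClassifier(),
    "SMO (SVC)": SVC(),
    "AdaBoost": AdaBoostClassifier(),
}

scoring = ["accuracy", "precision_weighted", "recall_weighted", "f1_weighted"]

for name, model in models.items():
    # Scale features, keep the 10 highest-ranked by mutual information, then classify.
    pipe = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=10),
        model,
    )
    scores = cross_validate(pipe, X, y, cv=10, scoring=scoring)
    print(
        f"{name}: acc={scores['test_accuracy'].mean():.3f} "
        f"prec={scores['test_precision_weighted'].mean():.3f} "
        f"rec={scores['test_recall_weighted'].mean():.3f} "
        f"f1={scores['test_f1_weighted'].mean():.3f}"
    )

Running the evaluation with and without the SelectKBest step reproduces the before/after feature-selection comparison described in the abstract.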