doi: 10.56294/dm2023164
ORIGINAL
Enhancing Academic Outcomes through an Adaptive Learning Framework Utilizing a Novel Machine Learning-Based Performance Prediction Method
Mejora de los resultados académicos a través de un marco de aprendizaje adaptativo utilizando un novedoso método de predicción del rendimiento basado en el aprendizaje automático
Aymane Ezzaim1*, Aziz Dahbi1*, Abdelfatteh Haidine1*, Abdelhak Aqqal1*
1Laboratory of Information Technologies. National School of Applied Sciences, Chouaib Doukkali University. El Jadida, Morocco.
Cite as: Ezzaim A, Dahbi A, Haidine A, Aqqal A. Enhancing Academic Outcomes through an Adaptive Learning Framework Utilizing a Novel Machine Learning-Based Performance Prediction Method. Data and Metadata. 2023; 2:164. https://doi.org/10.56294/dm2023164
Submitted: 04-08-2023 Revised: 29-08-2023 Accepted: 02-10-2023 Published: 11-12-2023
Editor: Javier Gonzalez-Argote
Note: Paper presented at the International Conference on Artificial Intelligence and Smart Environments (ICAISE’2023).
ABSTRACT
Introduction: Educational landscapes have been transformed by technological advancements, enabling adaptive and flexible learning through AI-based, decision-oriented adaptive learning systems. The growing importance of these solutions is underscored by the pivotal role of the learner model, which represents the core of the teaching-learning dynamic. This model, encompassing qualities, knowledge, abilities, behaviors, preferences, and individual distinctions, plays a crucial role in customizing the learning experience. It influences decisions related to learning materials, teaching strategies, and presentation styles.
Objective: This study addresses the need for applying AI-driven adaptive learning in education by implementing a novel method that uses self-esteem (SE), emotional intelligence (EQ), and demographic data to predict student performance and adjust the learning process.
Methods: Our study involved collecting and processing data, constructing a predictive machine learning model, implementing it as an online solution, and conducting an experimental study with 146 high school students in computer science and French as a foreign language. The aim was to tailor the teaching-learning process to the learners' performance.
Results: Significant correlations were observed between self-esteem, emotional intelligence, demographic data, and final grades. The predictive model demonstrated a 90 % accuracy rate. In the experimental group, the results indicated higher scores, with an average of 15,78/20 compared to the control group's 12,53/20 in computer science. Similarly, in French as a foreign language, the experimental group achieved an average of 13,78/20, surpassing the control group's 10,47/20.
Conclusion: The achieved results motivate the creation of a multifactorial AI-driven adaptive learning platform. Recognizing the need for improvement, we aim to refine the predicted performance score through the incorporation of a diagnostic evaluation, ensuring an optimal grouping of learners.
Keywords: Adaptive Learning; Artificial Intelligence; Machine Learning; Education; Performance.
RESUMEN
Introducción: Los paisajes educativos han sido transformados por los avances tecnológicos, permitiendo un aprendizaje adaptativo y flexible a través de sistemas de aprendizaje adaptativo basados en IA y orientados a la toma de decisiones. La creciente importancia de estas soluciones se ve subrayada por el papel fundamental del modelo de alumno, que representa el núcleo de la dinámica de enseñanza-aprendizaje. Este modelo, que abarca cualidades, conocimientos, habilidades, comportamientos, preferencias y distinciones únicas, juega un papel crucial en la personalización de la experiencia de aprendizaje. Influye en las decisiones relacionadas con los materiales de aprendizaje, las estrategias de enseñanza y los estilos de presentación.
Objetivo: Este estudio satisface la necesidad de aplicar el aprendizaje adaptativo impulsado por IA en la educación, implementando un método novedoso que utiliza la autoestima, la inteligencia emocional y datos demográficos para predecir el desempeño de los estudiantes y ajustar el proceso de aprendizaje.
Métodos: Nuestro estudio implicó la recopilación y el procesamiento de datos, la construcción de un modelo predictivo de aprendizaje automático, su implementación como un servicio web y la realización de un estudio experimental con 102 estudiantes de secundaria en informática y francés como lengua extranjera. El objetivo era adaptar el proceso de enseñanza-aprendizaje al desempeño de los alumnos.
Resultados: Se observaron correlaciones significativas entre autoestima, inteligencia emocional, datos demográficos y calificaciones finales. El modelo predictivo mostró una tasa de precisión del 90 % y el experimento reveló que el grupo experimental logró puntuaciones más altas, con un promedio de 16,75/20 en comparación con el 13/20 del grupo de control.
Conclusión: Los resultados obtenidos motivan la creación de una plataforma de aprendizaje adaptativo multifactorial impulsada por IA. Reconociendo la necesidad de mejorar, nuestro objetivo es refinar el puntaje de desempeño previsto mediante la incorporación de una evaluación de diagnóstico, asegurando una agrupación óptima de los estudiantes.
Palabras clave: Aprendizaje Adaptativo; Inteligencia Artificial; Aprendizaje Automático; Educación; Rendimiento.
INTRODUCTION
The adaptation in computers has been evident since the advent of artificial intelligence (AI), with scalable systems providing personalized interactions for individual users.(1) In education, adaptivity involves crafting a student model that reflects beliefs, preferences, and needs. This model guides the system in delivering tailored learning materials, sequences, feedback, tutoring, and interface, adapting to individual characteristics.(2)
Our study explores the development of AI-based adaptation in education. In essence, adaptive learning embodies an educational methodology that individualizes each student's learning experience, encompassing various variables such as cognitive, affective, and demographic backgrounds.(3)
In this context, our study aims to tailor the difficulty of educational content to each learner, using performance predictions based on their SE, EQ, and demographic data. Performance is a key facet of learner characteristics, closely linked to the learning process. Throughout this process, the learner develops knowledge across the cognitive, emotional, and psychomotor domains of learning.(4,5,6) This emphasizes how crucial it is to investigate the intricate relationships between SE, EQ, and learner performance. Examining how these components interact highlights the dynamics that affect students' academic progress and the learning process overall.
Drawing on current research, multiple studies have investigated the connection between EQ and cognitive abilities, as well as the correlation between SE and academic achievement.(7,8,9) For example, according to Ford and Tamir, individuals with high EQ can attain their objectives through social flexibility.(10) In another context, Schutte and her colleagues make a substantial contribution by showing that individuals with higher EQ perform better on cognitive tasks.(11) In a distinct investigation featuring 321 students and utilizing the Coopersmith SE Inventory (CSEI), a positive correlation emerged between SE and the overall outcomes of the second-semester test.(12)
Furthermore, we stress that the choice of demographic factors is linked to their impact on SE. As asserted by Salsali and Silverstone, SE can be influenced by various psychological and demographic factors, including age, gender, income, and occupation.(13) In this regard, and to improve the accuracy of predicting learner performance, our dataset includes variables such as age, gender, parental financial situation, parental marital status, and place of residence.
Our research presents a new AI-based adaptive learning approach. First, we built a machine learning model capable of predicting learners' current performance by integrating new psychological variables related to the learner profile (EQ, SE, and demographic information). The solution is then put into practice in an educational setting to proactively identify student gaps and modify instructional content based on each student's performance level.
This article encompasses our research methodology, including data collection, data processing, model building, and deployment. We delve deeper into the implementation of the experiment and its progress. In the “Results” section, we describe the predictive effectiveness of EQ and SE, compare various machine learning techniques, and present the students' results as a comparison between the two groups (control and experimental). Our results are analyzed and interpreted in the “Discussion” section, which also considers ramifications and offers suggestions for predictive modeling in education. The conclusion highlights the limitations of the research and presents future perspectives.
METHODS
This experimental study targeted high school students, specifically those enrolled in the scientific common core. The data collection process involved compiling a dataset from 100 students at Oulad Zerrad High School, under the regional direction of Kelaa des Sraghna, Marrakech, Morocco. This dataset was used to train our predictive model. For the experimental phase, 73 students were recruited from Moulay Alhassan High School, under the prefecture of Aïn Chock, Casablanca, Morocco. These students belonged to the scientific common core level and were divided into two groups, A and B, with a balanced representation (A = 38, B = 35).
In the computer science course, group A served as the experimental group, while group B functioned as the control group. Conversely, in the French as a foreign language course, the experimental and control groups were interchanged. This strategic allocation aims to minimize potential biases and ensure that any observed differences in performance between the groups can be attributed to the educational interventions rather than inherent disparities in the groups themselves.
Data collection
In the pursuit of primary data, a meticulously designed questionnaire was formulated to align with the research prerequisites. The questionnaire comprises closed-ended questions that systematically address pertinent parameters such as gender, age, parental financial and marital status, and the residential status of the student (either residing in a boarding school or off campus). In parallel, official records from the Oulad Zerrad High School administration were used to extract the final first-semester grades of each participating student. Subsequently, we used the Bar-On scale and the Rosenberg scale as two well-established and reliable instruments for measuring EQ and SE, respectively. While the Rosenberg scale is a proven tool for measuring levels of SE, the Bar-On scale is renowned for its thorough assessment of emotional and social components. The Bar-On scale comprises 35 items rated on a 5-point scale, with total scores ranging from 0 to 175 and higher scores indicating greater EQ.(14) The Rosenberg SE Scale is a 10-item Likert scale (total scores from 10 to 40) in which higher values correspond to lower SE.(15)
Data processing
Regarding the data processing phase for our machine learning model, strict protocols were implemented to ensure robust analysis. The collected data were subjected to systematic coding, which converted the qualitative data into numerical forms (See Table 1). This process is essential for machine learning algorithms because they perform best when given numerical inputs.
Table 1. Systematic coding of categorical data
| Gender | Code | Parental financial situation | Code | Parental marital situation | Code | Residential status | Code |
|---|---|---|---|---|---|---|---|
| Female | 0 | Lower class | 0 | Married | 0 | Resident | 0 |
| Male | 1 | Lower-middle class | 1 | Divorced | 1 | Boarder | 1 |
|  |  | Middle class | 2 | Adopted | 2 |  |  |
|  |  | Upper-middle class | 3 | Widow | 3 |  |  |
|  |  | High class | 4 |  |  |  |  |
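As an illustration, the coding scheme in Table 1 could be applied with pandas roughly as follows. This is a minimal sketch: the file name (students.csv) and column names (gender, parental_financial_situation, etc.) are hypothetical placeholders for the actual questionnaire export.

```python
import pandas as pd

# Hypothetical column names; the real questionnaire export may differ.
coding = {
    "gender": {"Female": 0, "Male": 1},
    "parental_financial_situation": {
        "Lower class": 0, "Lower-middle class": 1, "Middle class": 2,
        "Upper-middle class": 3, "High class": 4,
    },
    "parental_marital_situation": {
        "Married": 0, "Divorced": 1, "Adopted": 2, "Widow": 3,
    },
    "residential_status": {"Resident": 0, "Boarder": 1},
}

df = pd.read_csv("students.csv")          # assumed raw data file
for column, mapping in coding.items():
    df[column] = df[column].map(mapping)  # qualitative label -> numeric code
```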
After the data was coded, correlation analysis was performed to find patterns among the variables. This included evaluating the statistical relationships between academic achievement, demographic information, SE, and EQ.
Using a sample of 100 high school students (40 % male and 60 % female), the correlation heatmap reveals an interesting finding (see figure 1). The "final grade" and the "Rosenberg SE Scale score" show a strong negative correlation of -0,92. Because higher Rosenberg scores correspond to lower SE, this indicates that higher SE is linked to better final grades; in other words, SE emerges as a potentially significant indicator of student success. Furthermore, there is a moderate positive association between the "final grade" and EQ (0,47), indicating that as EQ levels rise, final grades tend to improve.
Notably, academic results were consistently below 13/20 for the 14 students whose parents were divorced; 43 % of them scored below 10/20, compared with only 9 % of students whose parents were married. This trend suggests that parental marital status, particularly divorce, has a marked influence on children's educational outcomes (see figure 3).
This phase allowed us to refine the predictive variables of learner performance, focusing on age, SE, EQ, parental financial status, parental marital status, and residence.
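The correlation analysis behind figure 1 can be reproduced along these lines. This is a sketch assuming the coded data are already loaded in a pandas DataFrame df; the column names (rosenberg_se, eq_score, final_grade, etc.) are illustrative.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Pearson correlations between the coded predictors and the final grade.
columns = ["age", "gender", "parental_financial_situation",
           "parental_marital_situation", "residential_status",
           "rosenberg_se", "eq_score", "final_grade"]
corr = df[columns].corr()

sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation matrix heatmap")
plt.tight_layout()
plt.show()
```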
Figure 1. Correlation matrix heatmap.
Figure 3. Disparity in academic performance based on parental marital status.
Prediction Model Building
After processing, the data was ready to be used by a machine learning algorithm, which paved the way for building our performance prediction model. For this, we used the functionalities of the Scikit-Learn library. Our dataset was split: 30 % for testing and 70 % for training. The primary goal was predicting the final grade, a pivotal performance indicator, using other factors as predictive features.
The choice of machine learning algorithm was based on comparing the accuracy of four algorithms that span complexity, simplicity, sequential learning, and adaptability to nonlinear interactions, namely:
● Random Forest Regressor (RFR).(16,17)
● Linear Regression (LR).(18,19)
● Gradient Boosting Regressor (GBR).(20,21)
● Support Vector Regression (SVR).(22,23)
This diverse set of algorithms aimed to leverage their unique strengths for a comprehensive and precise predictive modeling approach.
In order to choose the best performing model, we calculated metrics adapted to regression tasks: the mean squared error (MSE), the root mean squared error (RMSE), and the R-squared (R2) score. These metrics provide an in-depth assessment of the models' predictive performance, as shown in table 2.
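A minimal sketch of this comparison with scikit-learn is given below, assuming the coded features are stored in X and the final grade in y; hyperparameters are left at their defaults, so the exact scores will differ from those reported in table 2.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# 70 % of the data for training and 30 % for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "GBR": GradientBoostingRegressor(random_state=42),
    "RFR": RandomForestRegressor(random_state=42),
    "LR": LinearRegression(),
    "SVR": SVR(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: MSE={mse:.2f}, RMSE={np.sqrt(mse):.2f}, R2={r2_score(y_test, pred):.2f}")
```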
Table 2. Summary of Evaluation Metrics for the Regression Models' Performance
| Algorithm | MSE | RMSE | R2 score |
|---|---|---|---|
| Gradient Boosting Regressor (GBR) | 0,51 | 0,72 | 0,9 |
| Random Forest Regressor (RFR) | 0,62 | 0,79 | 0,88 |
| Linear Regression (LR) | 0,64 | 0,8 | 0,88 |
| Support Vector Regression (SVR) | 3,3 | 1,81 | 0,38 |
The table above summarizes the key models' performances. The MSE (0,51) for GBR is the lowest, indicating the smallest average squared deviations. Its RMSE of 0,72 indicates that, on average, predictions differ from the actual grades by 0,72 points. The R2 score of 0,9, explaining 90 % of the variance in the target variable, confirms the model's robustness. With comparable outcomes, RFR and LR each account for 88 % of the variance. With an RMSE of 1,81 and an R2 score of 0,38, SVR, on the other hand, performs less well, suggesting a poorer fit and reduced explanatory power. The SVR model's suboptimal performance may be due to factors such as our limited dataset (100 records), potentially impeding its ability to grasp intricate data patterns. Hyperparameter tuning, which is crucial for SVR, involves optimizing parameters such as the kernel function and regularization. We addressed this by tuning the hyperparameters with the following parameter grid:
● C: [0.1, 1, 10, 100].
● Epsilon: [0.01, 0.1, 0.2, 0.5].
● Kernel: ['linear', 'rbf', 'poly'].
Results indicate the best parameters:
● C: 0.1.
● Epsilon: 0.5.
● Kernel: 'linear'.
After tuning, the evaluation metrics for SVR on the test set include a mean squared error of 0,75, a root mean squared error of 0,87, and an R-squared score of 0,86, indicating satisfactory model performance. Overall, according to these measures, GBR remains the best performer among the models studied, with the lowest MSE and RMSE and the highest R2 score.
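This tuning step could be carried out with scikit-learn's GridSearchCV roughly as follows, reusing the training split from the previous sketch; the cross-validation settings and scoring metric are assumptions, as the study does not specify them.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    "C": [0.1, 1, 10, 100],
    "epsilon": [0.01, 0.1, 0.2, 0.5],
    "kernel": ["linear", "rbf", "poly"],
}

# Assumed settings: 5-fold cross-validation, negative MSE as the selection criterion.
search = GridSearchCV(SVR(), param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)  # e.g. {'C': 0.1, 'epsilon': 0.5, 'kernel': 'linear'}
best_svr = search.best_estimator_               # refit on the full training set
```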
The experiment's execution
Our study aims to evaluate how our solution affects student performance in the accredited computer science and French as a foreign language programs in Moroccan high schools. A quantitative research methodology is used to accomplish this, enabling quantifiable data collection and analysis. The selected experimental strategy facilitates descriptive, relational, and statistical analyses.
The experiment involved two groups (A and B) in a high school setting. One group received traditional teaching, while for the other we collected SE, EQ, and demographic data. Using our machine learning model (GBR), we predicted their performance. The experimental group was then divided into three subgroups based on predicted performance scores (PPS): low performance (PPS < 10), middle performance (10 ≤ PPS < 15), and high performance (PPS ≥ 15). Each subgroup received course content adapted in terms of difficulty, activities, exercises, and support, following the Teaching at the Right Level (TaRL) approach. This approach is a student-centric instructional methodology that tailors teaching to the specific learning needs of individual students.(24)
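The subgroup assignment rule described above can be expressed directly in code; the function below is an illustrative sketch of the stated thresholds, not part of the deployed system.

```python
def assign_subgroup(pps: float) -> str:
    """Map a predicted performance score (out of 20) to a TaRL subgroup."""
    if pps < 10:
        return "low performance"
    elif pps < 15:
        return "middle performance"
    return "high performance"

# Example: predicted scores for three hypothetical students.
for pps in [8.4, 12.1, 16.7]:
    print(pps, "->", assign_subgroup(pps))
```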
In the context of our experiment, the TaRL Approach was implemented by categorizing the experimental group into subgroups based on predicted performance scores. This segmentation allowed for a more personalized and adaptive teaching strategy, with each subgroup receiving course content tailored to their specific proficiency levels. The approach aims to ensure that each student receives the necessary support and challenges aligned with their learning abilities. In order to optimize class time, the flipped classroom strategy was implemented. Theoretical course material was delivered online through Moodle learning management system (LMS) prior to the start of the session, allowing in-class time for practical activities, presentations, and discussions. By maximizing the utilization of in-person training, this pedagogical approach increases student engagement, promotes active learning, and creates more dynamic and applicable learning opportunities.(25)
RESULTS
Figure 4. Distribution of Students Across Predicted Performance Subgroups.
Following the application of our machine learning model to predict learner performance, the resulting distribution is as follows. In Group A, designated as the experimental group in the computer science course, the segmentation comprises 12 students (32 %) in the low-performance category, 18 (47 %) in the medium-performance category, and 8 (21 %) in the high-performance category. In Group B, acting as the experimental group in the French as a foreign language course, the distribution comprises 15 students (43 %) in the low-performance category, 14 (40 %) in the medium-performance category, and 6 (17 %) in the high-performance category.
Table 3. Number of Students in Groups A and B with Diagnostic Assessment Scores Outside the Range of Their Respective Subgroups
| Subgroups | Group A | Group B |
|---|---|---|
| High performance | 1 | 2 |
| Middle performance | 4 | 3 |
| Low performance | 3 | 2 |
Based on the diagnostic evaluations for Groups A and B, the overall performance prediction accuracy is 79,45 %, with precision rates of 78,94 % for Group A and 80 % for Group B. These results support the model's efficacy, with a margin of roughly 10 % that is discussed in the following section. Students whose performance predictions fall outside their subgroup ranges are identified as follows: high-performance predictions with diagnostic scores below 15, medium-performance predictions with diagnostic scores below 10 or above 15, and low-performance predictions with diagnostic scores above 10.
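For illustration, the agreement rate between predicted and diagnostic subgroups could be computed as sketched below, reusing the assign_subgroup function shown earlier; the score lists would come from the experiment and are not reproduced here.

```python
def subgroup_agreement(predicted_scores, diagnostic_scores):
    """Share of students whose diagnostic score falls in the same subgroup as their PPS."""
    matches = sum(assign_subgroup(p) == assign_subgroup(d)
                  for p, d in zip(predicted_scores, diagnostic_scores))
    return matches / len(predicted_scores)

# Group A: 8 of 38 students outside their predicted range -> 30/38 ≈ 78,94 %
# Group B: 7 of 35 students outside their predicted range -> 28/35 = 80 %
```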
Figure 5. Comparison of Average PPS, Diagnostic Assessment Results, and Summative Assessment Results in Computer Science and French as a Foreign Language for Experimental and Control Groups
The depicted graph illustrates a notable enhancement in performance, evident in both the computer science and French as a foreign language courses. In the computer science course, Group A, representing the experimental group, exhibited an overall average performance prediction score of 12,67. Concurrently, the average diagnostic evaluation score was 11,99, while the overall average summative evaluation score reached 15,78. This signifies an upward trajectory from predictive and diagnostic to summative outcomes. A comparative analysis with the control group, Group B, taught conventionally, reveals a lower overall average summative evaluation score of 12,53.
Similarly, in the French as a foreign language course, Group B (experimental) indicates an ascending trend from predictive and diagnostic to summative assessments. In contrast, the control group (Group A) taught traditionally achieved a lower general average summative evaluation score of 10,47.
DISCUSSION
The study's comprehensive approach, from meticulous data processing to the practical application of machine learning models, has yielded valuable insights into enhancing student performance. The strong negative correlation between the final grade and the Rosenberg SE score highlights SE as a potential determinant of academic success. This aligns with existing literature emphasizing the influence of socio-emotional factors on learning outcomes.
The machine learning phase, employing algorithms like RFR, LR, GBR, and SVR, revealed GBR as the optimal performer. Its superior accuracy and predictive capabilities suggest its suitability for student performance prediction.
The experimental phase, featuring the TaRL and flipped classroom methodologies, demonstrated the efficacy of a tailored, adaptive teaching strategy. The distribution of predicted performance scores and the subsequent precision rates for Groups A and B underscored the model's ability to categorize students effectively. The students whose performance predictions deviate from their subgroup's range underscore the importance of refining the classification process. To enhance classification accuracy, we propose incorporating a diagnostic assessment before the classification stage, using the formula defined below.
Let:
● D be the diagnostic assessment score,
● P be the predicted performance score,
● C be the combined classification score.
The combined classification score (C) is calculated as follows:
C = 0,2 × D + 0,8 × P
By utilizing the diagnostic assessment score for 20 % of the classification and combining it with the predicted performance score for the remaining 80 %, we can leverage the strengths of both approaches. This hybrid classification method acknowledges the unique characteristics of each subject matter and addresses exceptional cases more effectively. The diagnostic assessment serves as a valuable supplement to the predictive model, providing a more nuanced and accurate basis for student classification. This approach aligns with the understanding that a holistic evaluation, combining predictive modeling and diagnostic insights, can significantly improve the precision of student performance categorization.
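In code, the proposed hybrid classification could be expressed as follows; the weights follow the formula above, and the example values are hypothetical.

```python
def combined_score(diagnostic: float, predicted: float) -> float:
    """Weighted combination: 20 % diagnostic assessment, 80 % predicted performance."""
    return 0.2 * diagnostic + 0.8 * predicted

# Example: a student predicted at 14.5/20 who scores 16/20 on the diagnostic
# assessment obtains C = 0.2 * 16 + 0.8 * 14.5 = 14.8, which still falls in the
# middle-performance range (10 <= C < 15).
print(combined_score(16, 14.5))  # 14.8
```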
Analyzing the performance outcomes in computer science and French courses, the experimental groups exhibited marked improvement. GBR's strong average performance prediction scores, coupled with rising diagnostic and summative evaluation scores, validate the model's potential to enhance student learning. Comparisons with control groups reinforced the positive impact, suggesting that predictive modeling could revolutionize educational practices by providing personalized, data-driven insights.
In summary, our study, combining data processing and machine learning, reveals an important role of EQ, SE and demographic data in academic success. The experimental phase, with adaptive teaching methods, shows marked improvements and highlights the transformative potential of predictive modeling in personalized education.
CONCLUSION
In conclusion, our research demonstrates the revolutionary possibilities of using machine learning models with socio-emotional elements in teaching. The ability to accurately forecast student performance and the success that follows in customizing instructional approaches highlight the potential benefits of a data-driven, individualized education model. The research advances our knowledge of academic results while also providing educators with useful takeaways that will facilitate the development of more flexible and efficient teaching and learning strategies.
Our perspective involves leveraging technology to automate the entire educational workflow, from data collection to the delivery of learning materials. By integrating our predictive model into a Learning Management System (LMS) as a plugin, we aim to seamlessly analyze student data and classify them into specific performance groups. This automation not only streamlines the classification process but also enables the dynamic adaptation of learning content based on individual needs. Implementing this automated system in an online learning platform enhances scalability, accessibility, and personalization, providing a more efficient and tailored educational experience for each student. This transformative approach aligns with the evolving landscape of educational technology and educational reforms in our country, Morocco, contributing to the advancement of data-driven and adaptive learning environments.
REFERENCES
1. Rasheed F, Wahid A. Learning style detection in E-learning systems using machine learning techniques. Expert Syst Appl 2021;174:114774. https://doi.org/10.1016/j.eswa.2021.114774.
2. Chou C-Y, Lai KR, Chao P-Y, Lan CH, Chen T-H. Negotiation based adaptive learning sequences: Combining adaptivity and adaptability. Comput Educ 2015;88:215‑26. https://doi.org/10.1016/j.compedu.2015.05.007.
3. Ezzaim A, Dahbi A, Haidine A, Aqqal A. AI-Based Adaptive Learning: A Systematic Mapping of the Literature. JUCS - J Univers Comput Sci 2023;29:1161‑97. https://doi.org/10.3897/jucs.90528.
4. Ariffin MM, Oxley A, Sulaiman S. Evaluating game-based learning effectiveness in higher education. Procedia-Soc Behav Sci 2014;123:20‑7.
5. Sotomayor TM, Proctor MD. Assessing combat medic knowledge and transfer effects resulting from alternative training treatments. J Def Model Simul 2009;6:121‑34.
6. Li N, Marsh V, Rienties B. Modelling and managing learner satisfaction: Use of learner feedback to enhance blended and online learning experience. Decis Sci J Innov Educ 2016;14:216‑42.
7. Day AL, Carroll SA. Using an ability-based measure of emotional intelligence to predict individual performance, group performance, and group citizenship behaviours. Personal Individ Differ 2004;36:1443‑58.
8. Jordan PJ, Troth AC. Managing emotions during team problem solving: Emotional intelligence and conflict resolution. Hum Perform 2004;17:195‑218.
9. Fernández-Berrocal P, Extremera N, Lopes PN, Ruiz-Aranda D. When to cooperate and when to compete: Emotional intelligence in interpersonal decision-making. J Res Personal 2014;49:21‑4.
10. Ford BQ, Tamir M. When getting angry is smart: emotional preferences and emotional intelligence. Emotion 2012;12:685.
11. Schutte NS, Schuettpelz E, Malouff JM. Emotional intelligence and task performance. Imagin Cogn Personal 2001;20:347‑54.
12. Vishalakshi KK, Yeshodhara K. Relationship between self-esteem and academic achievement of secondary school students. Education 2012;1:83‑4.
13. Auza-Santiváñez JC, Díaz JAC, Cruz OAV, Robles-Nina SM, Escalante CS, Huanca BA. Bibliometric Analysis of the Worldwide Scholarly Output on Artificial Intelligence in Scopus. Gamification and Augmented Reality 2023;1:11–11. https://doi.org/10.56294/gr202311.
14. Castillo JIR. Augmented reality in surgery: improving precision and reducing risk. Gamification and Augmented Reality 2023;1:15–15. https://doi.org/10.56294/gr202315.
15. Castillo-Gonzalez W, Lepez CO, Bonardi MC. Augmented reality and environmental education: strategy for greater awareness. Gamification and Augmented Reality 2023;1:10–10. https://doi.org/10.56294/gr202310.
16. Aveiro-Róbalo TR, Pérez-Del-Vallín V. Gamification for well-being: applications for health and fitness. Gamification and Augmented Reality 2023;1:16–16. https://doi.org/10.56294/gr202316.
17. Salsali M, Silverstone PH. Low self-esteem and psychiatric patients: Part II – The relationship between self-esteem and demographic factors and psychosocial stressors in psychiatric patients. Ann Gen Hosp Psychiatry 2003;2:3. https://doi.org/10.1186/1475-2832-2-3.
18. Bar-On R. The Bar-On model of emotional-social intelligence (ESI) 1. Psicothema 2006:13‑25.
19. Rosenberg M. Rosenberg self-esteem scale. J Relig Health 1965.
20. Ali J, Khan R, Ahmad N, Maqsood I. Random forests and decision trees. Int J Comput Sci Issues IJCSI 2012;9:272.
21. Patil S, Patil A, Handikherkar V, Desai S, Phalle VM, Kazi FS. Remaining Useful Life (RUL) Prediction of Rolling Element Bearing Using Random Forest and Gradient Boosting Technique, American Society of Mechanical Engineers Digital Collection; 2019. https://doi.org/10.1115/IMECE2018-87623.
22. Kumari K, Yadav S. Linear regression analysis study. J Prim Care Spec 2018;4:33‑6.
23. Su X, Yan X, Tsai C-L. Linear regression. Wiley Interdiscip Rev Comput Stat 2012;4:275‑94.
24. Anchaleechamaikorn T, Lamjiak T, Thongpe T, Thiralertpanit L, Polvichai J. Predict Condominium Prices in Bangkok Based on Ensemble Learning Algorithm with various factors. 2023 Int. Tech. Conf. CircuitsSystems Comput. Commun. ITC-CSCC, IEEE; 2023, p. 1‑4.
25. Prettenhofer P, Louppe G. Gradient boosted regression trees in scikit-learn. PyData 2014, 2014.
26. Xia Y, Liu Y, Chen Z. Support Vector Regression for prediction of stock trend. 2013 6th Int. Conf. Inf. Manag. Innov. Manag. Ind. Eng., vol. 2, IEEE; 2013, p. 123‑6.
27. Zhang F, O’Donnell LJ. Support vector regression. Mach. Learn., Elsevier; 2020, p. 123‑40.
28. Jazuli L. Teaching at the Right Level (TaRL) through the All Smart Children Approach (SAC) improves students' literature ability. PROGRES Pendidik 2022;3:156‑65.
29. Gilboy MB, Heinerichs S, Pazzaglia G. Enhancing student engagement using the flipped classroom. J Nutr Educ Behav 2015;47:109‑14.
FINANCING
None.
CONFLICT OF INTEREST
The authors declare that there are no conflicts of interest.
AUTHORSHIP CONTRIBUTION
Conceptualization: Aymane Ezzaim, Aziz Dahbi
Data curation: Aymane Ezzaim
Formal analysis: Aziz Dahbi, Abdelfatteh Haidine
Research: Aymane Ezzaim
Methodology: Aymane Ezzaim, Aziz Dahbi, Abdelfatteh Haidine
Supervision: Aziz Dahbi, Abdelhak Aqqal
Validation: Aziz Dahbi
Original drafting and editing: Aymane Ezzaim, Aziz Dahbi
Writing - proofreading and editing: Aymane Ezzaim, Aziz Dahbi, Abdelhak Aqqal