Intelligent Data-Driven Task Offloading Framework for Internet of Vehicles Using Edge Computing and Reinforcement Learning
DOI:
https://doi.org/10.56294/dm2025521

Keywords:
Internet of Vehicles (IoV), Mobile Edge Computing (MEC), Task Offloading, Deep Reinforcement Learning (DRL), Particle Swarm Optimization (PSO)

Abstract
Introduction: The Internet of Vehicles (IoV) emerged from advances in automotive networking and communication to support latency-sensitive, real-time applications such as autonomous driving and emergency management. Because cloud servers are located far from the point of operation, traditional cloud computing introduced substantial processing delays. Mobile Edge Computing (MEC) addressed this challenge by enabling localized data processing, reducing latency and improving resource utilization.
Methods: This study proposed an Efficient Mobile Edge Computing-based Internet of Vehicles Task Offloading Framework (EMEC-IoVTOF). The framework integrated deep reinforcement learning (DRL) to optimize task offloading decisions, focusing on minimizing latency and energy consumption while respecting bandwidth and computational constraints. Offloading costs were calculated through mathematical modeling and further optimized with Particle Swarm Optimization (PSO). An adaptive inertia weight mechanism was implemented to avoid premature convergence to local optima and to improve task allocation decisions.
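The abstract does not specify the exact cost model or PSO settings, so the following is only a minimal illustrative sketch: PSO with a linearly decreasing (adaptive) inertia weight searching over per-task offloading fractions that minimize an assumed weighted latency-and-energy cost. All task sizes, CPU frequencies, transmission parameters, weights, and the inertia schedule are hypothetical placeholders, not values taken from the paper.

```python
# Hedged sketch: PSO with an adaptive (linearly decreasing) inertia weight
# minimizing an assumed weighted latency + energy offloading cost.
# All parameters below are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

N_TASKS   = 8                                   # tasks queued at a vehicle (assumed)
DATA_BITS = rng.uniform(1e6, 5e6, N_TASKS)      # task input size in bits (assumed)
CYCLES    = rng.uniform(1e8, 5e8, N_TASKS)      # CPU cycles per task (assumed)
F_LOCAL, F_EDGE = 1e9, 8e9                      # CPU frequency (Hz): vehicle, MEC server
RATE      = 2e7                                 # uplink rate in bit/s (assumed)
P_TX, K_LOC = 0.5, 1e-27                        # tx power (W), local energy coefficient
W_LAT, W_ENG = 0.6, 0.4                         # latency/energy weights (assumed)

def offloading_cost(x):
    """Weighted latency + energy cost for offloading fractions x in [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    t_local = (1 - x) * CYCLES / F_LOCAL                  # local execution time
    t_edge  = x * DATA_BITS / RATE + x * CYCLES / F_EDGE  # upload + edge execution time
    latency = np.maximum(t_local, t_edge).sum()           # local and edge run in parallel
    energy  = (K_LOC * (1 - x) * CYCLES * F_LOCAL**2      # local computing energy
               + P_TX * x * DATA_BITS / RATE).sum()       # transmission energy
    return W_LAT * latency + W_ENG * energy

def pso(cost, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))
    vel = rng.uniform(-0.1, 0.1, (n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # adaptive inertia weight schedule
        r1 = rng.uniform(size=pos.shape)
        r2 = rng.uniform(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_cost = pso(offloading_cost, N_TASKS)
print("offloading fractions:", np.round(best_x, 2), "cost:", round(best_cost, 4))
```

In this sketch, larger inertia early in the run favors exploration of offloading decisions, while the decaying weight shifts the swarm toward exploitation, which is the usual rationale for an adaptive inertia mechanism; in the proposed framework a DRL agent would additionally refine these decisions online as network conditions change.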
Result: The proposed framework proved effective in reducing latency and optimizing energy consumption, thereby improving overall system performance. Together, DRL and MEC enable scalable task distribution and ensure robust performance in dynamic vehicular environments. Integration with PSO further strengthens the decision-making process, making the system highly adaptable to changing task demands and network conditions.
Discussion: The findings highlighted the potential of EMEC-IoVTOF to address key challenges in IoV systems, including latency, energy efficiency, and bandwidth utilization. Future research could explore real-world deployment and adaptation to more complex vehicular scenarios to further validate its scalability and reliability.
License
Copyright (c) 2025 Anber Abraheem Shlash Mohammad, Sulieman Ibraheem Shelash Al-Hawary, Ayman Hindieh , Asokan Vasudevan, Hussam Mohd Al-Shorman, Ahmad Samed Al-Adwan, Muhammad Turki Alshurideh, Imad Ali (Author)
This work is licensed under a Creative Commons Attribution 4.0 International License. Unless otherwise stated, associated published material is distributed under the same licence.