Resource optimization for UAV-assisted mobile edge computing system based on deep reinforcement learning
Abstract:
Computational efficiency is an important consideration in mobile edge computing (MEC) systems, yet it has rarely been studied for UAV-assisted MEC. In this paper, we maximize the computation efficiency of the MEC network by jointly optimizing offloading decisions, the UAV flight trajectory, and the allocation of each user's charging and offloading time. Deep reinforcement learning is used to optimize the resources of a UAV-assisted MEC system in a complex urban environment: users' computation-intensive tasks are offloaded to the UAV-mounted MEC server, relieving the task overload of the overall system. We design a framework algorithm that quickly adapts task offloading decisions and resource allocation to the changing wireless channel conditions of complex urban environments. Offloading decisions mapping the state space to the action space are generated through deep reinforcement learning, and each user's charging time and offloading time are then allocated to maximize the weighted sum computation rate. Finally, a radio map is used to optimize the UAV trajectory and further improve the overall weighted sum computation rate of the system. Simulation results show that the proposed DRL+TO framework algorithm significantly improves the weighted sum computation rate of the whole MEC system while saving time, demonstrating that the proposed MEC resource optimization scheme is feasible and outperforms the other benchmark schemes.
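The abstract describes a learned mapping from the observed wireless state to binary offloading decisions, selected to maximize the weighted sum computation rate. The following is a minimal sketch of that idea, not the paper's algorithm: a small policy network maps channel gains to relaxed per-user offloading decisions, a few quantized binary candidates are scored with a toy weighted-sum-rate model, and the policy is trained toward the best candidate. All sizes, the channel model, the rate model, and constants (N_USERS, BANDWIDTH, NOISE_POWER, the local computation rate, the time split tau) are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

N_USERS = 5          # assumed number of ground users
BANDWIDTH = 1e6      # Hz (assumed)
NOISE_POWER = 1e-10  # W (assumed)

# Policy network: normalized channel gains (state) -> relaxed offloading decisions in (0, 1)
policy = nn.Sequential(
    nn.Linear(N_USERS, 64), nn.ReLU(),
    nn.Linear(64, N_USERS), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def weighted_sum_rate(h, x, weights, tau=0.5):
    """Toy weighted sum computation rate: offloading users (x=1) get a Shannon-type
    rate over the (1 - tau) offloading fraction of the slot; local-computing users
    get a fixed assumed local rate."""
    offload_rate = (1.0 - tau) * BANDWIDTH * np.log2(1.0 + h / NOISE_POWER)
    local_rate = 1e5  # bits/s, assumed local computation rate
    return float(np.sum(weights * np.where(x == 1, offload_rate, local_rate)))

def candidate_actions(probs, k=3):
    """Quantize the relaxed output: threshold at 0.5, then flip the entries
    closest to 0.5 to generate k candidate binary offloading decisions."""
    base = (probs > 0.5).astype(int)
    cands = [base]
    order = np.argsort(np.abs(probs - 0.5))
    for i in order[: k - 1]:
        flipped = base.copy()
        flipped[i] = 1 - flipped[i]
        cands.append(flipped)
    return cands

weights = np.ones(N_USERS)
for step in range(2000):
    h = np.random.exponential(1e-6, size=N_USERS)         # assumed fading channel gains
    state = torch.tensor(h / h.max(), dtype=torch.float32)
    probs = policy(state).detach().numpy()
    # choose the candidate binary decision with the best weighted sum computation rate
    best = max(candidate_actions(probs), key=lambda x: weighted_sum_rate(h, x, weights))
    # train the policy to reproduce the best decision found for this state
    optimizer.zero_grad()
    loss = loss_fn(policy(state), torch.tensor(best, dtype=torch.float32))
    loss.backward()
    optimizer.step()
```

In the paper's full framework, this decision step would be combined with the charging/offloading time allocation and the radio-map-aided UAV trajectory optimization mentioned above; the sketch only illustrates the state-to-action decision component.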
Keywords: Mobile edge computing (MEC); Deep reinforcement learning; Resource allocation; Trajectory optimization; Computation efficiency
Indexed by ScienceDirect and other databases.