In distributed computing environments, computation offloading is a vital strategy for maximizing the performance and energy efficiency of mobile devices. Distributed deep learning-based offloading (DDLO) and deep reinforcement learning for online computation offloading (DROO) are two popular methods for solving the computation offloading problem.
In DDLO, the workload is partitioned into smaller pieces during offloading and distributed across the available systems or devices. In DROO, an agent is trained to make near-optimal offloading decisions based on the resources at hand, the network environment, and the application's performance requirements.
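A DROO-style decision step can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the tiny feedforward network, the order-preserving quantizer producing k binary candidates, and the utility function are all hypothetical placeholders standing in for the learned policy and the true system reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_decision(h, W1, W2):
    # Tiny feedforward net: channel gains -> relaxed offloading
    # probabilities in (0, 1). Weights here are random placeholders;
    # in DROO they would be trained from replayed experience.
    z = np.tanh(h @ W1)
    return 1.0 / (1.0 + np.exp(-(z @ W2)))

def quantize(p, k):
    # Generate k candidate binary offloading decisions: threshold at 0.5,
    # then flip the entries whose probabilities are closest to 0.5.
    base = (p > 0.5).astype(int)
    cands = [base]
    order = np.argsort(np.abs(p - 0.5))
    for idx in order[: k - 1]:
        c = base.copy()
        c[idx] = 1 - c[idx]
        cands.append(c)
    return cands

def utility(x, h):
    # Hypothetical reward: an offloaded task (x=1) earns throughput
    # proportional to its channel gain; a local task earns a fixed 0.3.
    return float(np.sum(np.where(x == 1, h, 0.3)))

n = 5                                  # number of mobile devices
W1 = rng.normal(size=(n, 8))
W2 = rng.normal(size=(8, n))
h = rng.uniform(0.1, 1.0, size=n)      # wireless channel gains

p = relaxed_decision(h, W1, W2)        # relaxed (continuous) decision
best = max(quantize(p, k=3), key=lambda x: utility(x, h))
print(best)                            # best binary offloading decision
```

Evaluating only k quantized candidates per time slot, rather than all 2^n binary combinations, is what lets this style of agent adapt online to a changing wireless environment.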
We present a comparison of both approaches, emphasizing their benefits and drawbacks and the situations in which one approach is more suitable than the other. Our findings indicate that deep reinforcement learning adapts better to environmental changes, while distributed deep learning-based offloading is more efficient in its use of computational resources.
Figure 1: Comparison of gain ratios of the DDLO and the DROO algorithms
Figure 2: Comparison of training loss of the DDLO and the DROO algorithms
S. K. Mishra, H. K. Challa, K. S. Kotha and D. P. Yarramreddy, "Task Offloading Technique Selection In Mobile Edge Computing," 2024 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), Bhubaneswar, India, 2024, pp. 1-6, doi: 10.1109/ASSIC60049.2024.10507901.
keywords: {Performance evaluation;Multi-access edge computing;Distributed databases;Deep reinforcement learning;Mobile handsets;Energy efficiency;Computational efficiency;Mobile edge computing;computation offloading;deep reinforcement learning},
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10507901&isnumber=10507806