TY - GEN
T1 - TPTO
T2 - 29th IEEE International Conference on Parallel and Distributed Systems, ICPADS 2023
AU - Gholipour, Niloofar
AU - De Assuncao, Marcos Dias
AU - Agarwal, Pranav
AU - Gascon-Samson, Julien
AU - Buyya, Rajkumar
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Emerging applications in healthcare, autonomous vehicles, and wearable assistance require interactive, low-latency data analysis services. Unfortunately, cloud-centric architectures cannot meet the low-latency demands of these applications, as user devices are often distant from cloud data centers. Edge computing aims to reduce latency by enabling processing tasks to be offloaded to resources located at the network's edge. However, determining which tasks to offload to edge servers to reduce the latency of application requests is not trivial, especially when the tasks have dependencies. This paper proposes a Deep Reinforcement Learning (DRL) approach called TPTO, which leverages Transformer networks and Proximal Policy Optimization (PPO) to offload dependent tasks of IoT applications in edge computing. We consider users with various preferences, whose devices can offload computation to an edge server via wireless channels. Performance evaluation results demonstrate that, for fat application graphs, TPTO is more effective than state-of-the-art methods such as Greedy, HEFT, and MRLCO, reducing latency by 30.24%, 29.61%, and 12.41%, respectively. In addition, TPTO trains approximately 2.5 times faster than an existing DRL approach.
AB - Emerging applications in healthcare, autonomous vehicles, and wearable assistance require interactive, low-latency data analysis services. Unfortunately, cloud-centric architectures cannot meet the low-latency demands of these applications, as user devices are often distant from cloud data centers. Edge computing aims to reduce latency by enabling processing tasks to be offloaded to resources located at the network's edge. However, determining which tasks to offload to edge servers to reduce the latency of application requests is not trivial, especially when the tasks have dependencies. This paper proposes a Deep Reinforcement Learning (DRL) approach called TPTO, which leverages Transformer networks and Proximal Policy Optimization (PPO) to offload dependent tasks of IoT applications in edge computing. We consider users with various preferences, whose devices can offload computation to an edge server via wireless channels. Performance evaluation results demonstrate that, for fat application graphs, TPTO is more effective than state-of-the-art methods such as Greedy, HEFT, and MRLCO, reducing latency by 30.24%, 29.61%, and 12.41%, respectively. In addition, TPTO trains approximately 2.5 times faster than an existing DRL approach.
KW - Edge computing
KW - Transformers
KW - reinforcement learning
KW - task offloading
UR - https://www.scopus.com/pages/publications/85190254399
U2 - 10.1109/ICPADS60453.2023.00164
DO - 10.1109/ICPADS60453.2023.00164
M3 - Contribution to conference proceedings
AN - SCOPUS:85190254399
T3 - Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS
SP - 1115
EP - 1122
BT - Proceedings - 2023 IEEE 29th International Conference on Parallel and Distributed Systems, ICPADS 2023
PB - IEEE Computer Society
Y2 - 17 December 2023 through 21 December 2023
ER -