
Optimal Resource Management in Fog-Cloud Environments via A2C Reinforcement Learning: Dynamic Task Scheduling and Task Result Caching
AUT Journal of Electrical Engineering
Articles in Press, Accepted Manuscript, Available Online from 30 Shahrivar 1404 (21 September 2025); Full Text (1.58 MB)
Article Type: Research Article
DOI: 10.22060/eej.2025.24181.5657
Authors
Mohammad Hassan Nataj Solhdar; Mohamad Mehdi Esnaashari*
Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
Abstract
To manage tasks effectively in fog-cloud environments, this paper proposes a framework built on a two-agent architecture. In this framework, a task scheduling agent is responsible for selecting the computing execution node and allocating resources, while a separate agent manages the caching of results. In each decision cycle, the resource manager first checks whether a valid, fresh result already exists in the cache; if so, the cached result is returned immediately. Otherwise, the scheduling agent evaluates current conditions, such as network load, the nodes' computational capacity, and user proximity, and assigns the task to the most appropriate node. After task execution completes, an independent storage agent stores the results, potentially on a node distinct from the execution node. Through extensive simulations and comparisons with advanced methods (e.g., A3C-R2N2, DDQN, LR-MMT, and LRR-MMT), we demonstrate significant improvements in response latency, computational efficiency, and inter-node communication management. The proposed framework decouples execution scheduling from result storage through two distinct agents while implementing history-based caching that tracks both task request frequencies and result recency. This design enables effective adaptation to variable workloads and dynamic network conditions. The two-agent architecture and history-based caching are the core innovations that optimize resource utilization and enhance system responsiveness. The resulting decoupled, history-based strategy delivers scalable, low-latency performance and provides a robust solution for real-time service delivery in fog-cloud environments.
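
To make the decision cycle above concrete, the following Python sketch walks through the flow the abstract describes: a history-based cache lookup, a scheduling decision over candidate fog/cloud nodes, and a separate storage-node choice for the result. All names (Node, Task, HistoryCache, SchedulingAgent, CachingAgent, handle_request) are hypothetical, and the learned A2C policies are replaced by simple heuristics, so this is an illustrative sketch rather than the authors' implementation.

```python
# Illustrative sketch of the two-agent decision cycle (hypothetical names;
# the A2C scheduling and caching policies are stubbed with heuristics).
import time
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    capacity: float       # available compute capacity (arbitrary units)
    network_load: float   # current load on the node's links
    user_distance: float  # proximity of the requesting user to this node


@dataclass
class Task:
    key: str      # identifies the request, used for cache lookups
    demand: float # compute demand


class HistoryCache:
    """History-based result cache tracking request frequency and recency."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self.entries = {}    # key -> (result, stored_at, storage node name)
        self.frequency = {}  # key -> number of requests observed

    def lookup(self, key: str):
        self.frequency[key] = self.frequency.get(key, 0) + 1
        entry = self.entries.get(key)
        if entry and time.time() - entry[1] < self.ttl:  # valid, fresh result
            return entry[0]
        return None

    def store(self, key: str, result, node_name: str):
        self.entries[key] = (result, time.time(), node_name)


class SchedulingAgent:
    """Stand-in for the A2C task-scheduling policy."""

    def select_node(self, task: Task, nodes: list[Node]) -> Node:
        # Prefer lightly loaded, nearby nodes with enough capacity.
        feasible = [n for n in nodes if n.capacity >= task.demand] or nodes
        return min(feasible, key=lambda n: n.network_load + n.user_distance)


class CachingAgent:
    """Stand-in for the A2C caching policy that picks the storage node."""

    def select_storage_node(self, task: Task, nodes: list[Node],
                            cache: HistoryCache) -> Node:
        # A frequently requested result is worth keeping close to users;
        # otherwise prefer the least-loaded node (may differ from executor).
        if cache.frequency.get(task.key, 0) > 1:
            return min(nodes, key=lambda n: n.user_distance)
        return min(nodes, key=lambda n: n.network_load)


def handle_request(task: Task, nodes: list[Node], cache: HistoryCache,
                   scheduler: SchedulingAgent, cacher: CachingAgent):
    cached = cache.lookup(task.key)
    if cached is not None:  # fresh result already cached: return immediately
        return cached
    exec_node = scheduler.select_node(task, nodes)
    result = f"result of {task.key} on {exec_node.name}"  # simulated execution
    store_node = cacher.select_storage_node(task, nodes, cache)
    cache.store(task.key, result, store_node.name)
    return result


if __name__ == "__main__":
    nodes = [Node("fog-1", 4.0, 0.2, 0.1), Node("fog-2", 2.0, 0.6, 0.3),
             Node("cloud", 16.0, 0.4, 0.9)]
    cache, sched, cach = HistoryCache(), SchedulingAgent(), CachingAgent()
    t = Task(key="detect-object-42", demand=1.5)
    print(handle_request(t, nodes, cache, sched, cach))  # miss -> executes
    print(handle_request(t, nodes, cache, sched, cach))  # hit  -> cached result
```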
Keywords
Task Scheduling; Result Caching; Reinforcement Learning; Fog-Cloud Environment; Advantage Actor-Critic (A2C); Resource Management