An Intelligent Resource Allocation Framework for Cloud Data Centers using DRL

Authors

  • Sonu Thapa
  • Harish Dutt Sharma
  • Mukesh Kumar

Keywords

Cloud Computing, Resource Optimization, Deep Reinforcement Learning

Abstract

Cloud data centers experience highly dynamic workloads that require efficient and adaptive
resource management. Traditional allocation methods based on heuristics or rule-based
scheduling often fail to handle fluctuating workloads, leading to inefficient resource utilization,
higher energy consumption, and potential service level agreement (SLA) violations. This
paper proposes an intelligent resource allocation framework for cloud data centers using
deep reinforcement learning (DRL). The framework models the resource allocation problem
as a Markov Decision Process in which a DRL agent observes system states, including CPU
utilization, memory usage, and task queue characteristics, to determine optimal allocation
actions. By continuously learning from the cloud environment, the agent adapts resource
allocation policies to improve system efficiency. Experimental evaluation in a simulated
cloud environment demonstrates that the proposed DRL-based approach enhances resource
utilization and reduces SLA violations compared with conventional scheduling methods.
The results highlight the potential of deep reinforcement learning for scalable and intelligent
resource management in modern cloud data centers.
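The Markov Decision Process framing described in the abstract (states from CPU utilization, memory, and queue characteristics; actions that place work on servers; rewards tied to utilization and SLA compliance) can be illustrated with a toy sketch. The environment dynamics, state features, capacities, and reward shaping below are illustrative assumptions for exposition only, not the paper's actual simulator or reward function.

```python
import random

class CloudEnv:
    """Toy MDP for cloud task placement (illustrative only; the paper's
    simulator, state space, and reward are not specified in the abstract)."""

    def __init__(self, n_servers=3, capacity=10.0, seed=0):
        self.n_servers = n_servers
        self.capacity = capacity          # assumed per-server CPU capacity
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.load = [0.0] * self.n_servers                     # CPU load per server
        self.queue = [self.rng.uniform(1.0, 4.0) for _ in range(20)]  # task demands
        return self._state()

    def _state(self):
        # State: per-server utilization ratios plus queue length,
        # a stand-in for the CPU/memory/queue features in the abstract.
        return tuple(l / self.capacity for l in self.load) + (len(self.queue),)

    def step(self, action):
        """Assign the next queued task to server `action` (0..n_servers-1)."""
        demand = self.queue.pop(0)
        self.load[action] += demand
        # Assumed reward: penalize load imbalance (utilization proxy)
        # and overload beyond capacity (an SLA-violation proxy).
        overload = max(0.0, self.load[action] - self.capacity)
        imbalance = max(self.load) - min(self.load)
        reward = -imbalance - 10.0 * overload
        done = not self.queue
        return self._state(), reward, done

def run_policy(env, policy):
    """Roll out one episode and return the total reward."""
    env.reset()
    total, done = 0.0, False
    while not done:
        _, r, done = env.step(policy(env))
        total += r
    return total

# Baselines a DRL agent would be compared against: a least-loaded
# heuristic (a conventional scheduling rule) and random placement.
least_loaded = lambda e: min(range(e.n_servers), key=lambda i: e.load[i])
rand_policy = lambda e: e.rng.randrange(e.n_servers)
```

A DRL agent in this framing would replace the heuristic policy with a learned mapping from `_state()` to an action, trained against the episode reward returned by `run_policy`.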

Published

2026-04-30

How to Cite

Thapa, S., Sharma, H. D., & Kumar, M. (2026). An Intelligent Resource Allocation Framework for Cloud Data Centers using DRL. International Journal of Engineering Technology and Computer Research, 14(2). Retrieved from https://ijetcr.org/index.php/ijetcr/article/view/610

Issue

Vol. 14 No. 2 (2026)

Section

Articles