Self-Optimizing Network Routing using Reinforcement Learning in Dynamic Distributed Systems
Abstract
Dynamic distributed systems often experience fluctuating network conditions, varying traffic loads, and unpredictable node behavior, which make efficient routing a challenging task. Traditional routing protocols rely on static or heuristically defined rules that may not adapt effectively to such dynamic environments. This paper presents a self-optimizing network routing framework based on reinforcement learning that enables routing agents to learn effective forwarding strategies through continuous interaction with the network environment. By observing network states such as congestion levels, latency, and link availability, the proposed approach dynamically adjusts routing decisions to improve overall network performance. The reinforcement learning model continuously updates its policy to minimize packet delay, reduce congestion, and enhance reliability. Experimental evaluation in simulated distributed network scenarios demonstrates that the proposed approach achieves improved adaptability and better routing efficiency than conventional routing methods. The results highlight the potential of reinforcement learning for building intelligent and autonomous routing mechanisms in modern dynamic distributed systems.
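The core idea described in the abstract, an agent that learns forwarding decisions from observed link costs, can be illustrated with a minimal sketch. The following is not the paper's actual model: it assumes tabular Q-learning on a hypothetical four-node topology, where the state is the current node, the action is the next hop, and the reward is the negative link latency, so that maximizing return corresponds to minimizing end-to-end delay. All names and parameter values are illustrative.

```python
import random

# Hypothetical topology: symmetric links with fixed latencies (illustrative only).
LINKS = {
    ("A", "B"): 1.0, ("B", "D"): 1.0,
    ("A", "C"): 5.0, ("C", "D"): 1.0,
}
NEIGHBORS = {}
for (u, v), _ in LINKS.items():
    NEIGHBORS.setdefault(u, []).append(v)
    NEIGHBORS.setdefault(v, []).append(u)

def latency(u, v):
    """Latency of the (undirected) link between u and v."""
    return LINKS.get((u, v), LINKS.get((v, u)))

def train(dest, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: state = current node, action = next hop toward dest."""
    rng = random.Random(seed)
    q = {n: {m: 0.0 for m in NEIGHBORS[n]} for n in NEIGHBORS}
    for _ in range(episodes):
        node = rng.choice([n for n in NEIGHBORS if n != dest])
        for _ in range(20):  # cap hops per episode
            if node == dest:
                break
            acts = NEIGHBORS[node]
            # Epsilon-greedy exploration over candidate next hops.
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda m: q[node][m])
            r = -latency(node, a)  # reward: negative per-hop delay
            future = 0.0 if a == dest else max(q[a].values())
            q[node][a] += alpha * (r + gamma * future - q[node][a])
            node = a
    return q

def route(q, src, dest, max_hops=10):
    """Follow the learned greedy policy from src to dest."""
    path, node = [src], src
    while node != dest and len(path) <= max_hops:
        node = max(NEIGHBORS[node], key=lambda m: q[node][m])
        path.append(node)
    return path
```

On this toy graph the agent learns to prefer the low-latency route A→B→D (total delay 2.0) over A→C→D (total delay 6.0); in a dynamic setting the same update rule lets the policy shift as observed latencies change.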

This work, published in the International Journal of Engineering Technology and Computer Research (IJETCR), is licensed under a Creative Commons Attribution 4.0 International License.