E3S Web Conf.
Volume 511, 2024
International Conference on “Advanced Materials for Green Chemistry and Sustainable Environment” (AMGSE-2024)
Article Number: 01021
Number of pages: 14
DOI: https://doi.org/10.1051/e3sconf/202451101021
Published online: 10 April 2024
Multi-Agent Reinforcement Learning for Power System Operation and Control
1 Lovely Professional University, Phagwara, Punjab, India
2 Department of EEE, GRIET, Bachupally, Hyderabad, Telangana, India
3 Uttaranchal University, Dehradun 248007, India
4 Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh 174103 India
5 Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
* Corresponding author: alok.jain@lpu.co.in
drjamisridevi@gmail.com
manbirbisht@uumail.in
abhiraj.malhotra.orp@chitkara.edu.in
ish.kapila.orp@chitkara.edu.in
This study investigates the use of Multi-Agent Reinforcement Learning (MARL) to enhance the efficiency of power system operation and control. The simulated power system environment is modeled as a multi-agent system in which intelligent agents represent generators and loads. The MARL framework uses Q-learning to let agents independently adjust their actions to changing operating conditions. The resulting simulated data represent a wide-ranging power grid scenario comprising buses with differing generator capacities, load demands, and transmission line capacities. The findings indicate a significant improvement in system stability under MARL: the agents' capacity to learn and adapt enables them to quickly adjust generator outputs and meet load demands, keeping voltage and frequency within acceptable limits. The framework also improves economic efficiency by enabling agents to optimize their behavior to reduce total system cost. The agility of the MARL-based control method is reflected in reduced response times to dynamic disturbances, with agents reacting quickly and effectively to unforeseen events. These favorable results highlight the potential of MARL as a decentralized decision-making model for power systems, offering advantages in stability, economic efficiency, and responsiveness to disruptions. Although the study uses synthetic data in a controlled setting, the observed improvements indicate the flexibility and efficacy of the MARL framework. Future research should prioritize integrating more realistic scenarios and addressing computational challenges to further confirm the applicability and scalability of MARL in real power systems.
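The abstract describes generator agents that learn, via Q-learning, to adjust their outputs so that total generation tracks load demand under a shared objective. As a minimal illustration of that idea only (a toy sketch, not the authors' environment: the two-agent setup, discrete output levels, and shared imbalance penalty below are all assumptions), independent Q-learning agents coordinating on a fixed demand might look like:

```python
import random
from collections import defaultdict

# Hypothetical sketch (not the paper's implementation): two independent
# Q-learning "generator" agents each choose a discrete output level and
# share a reward that penalizes any mismatch with a fixed load demand.

ACTIONS = (0, 1, 2)      # output levels each generator agent may select
LOAD = 3                 # total demand the two agents must jointly meet
ALPHA, EPS = 0.1, 0.2    # learning rate and exploration probability

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = [defaultdict(float), defaultdict(float)]  # one Q-table per agent
    for _ in range(episodes):
        acts = []
        for table in q:
            if rng.random() < EPS:                 # explore a random level
                acts.append(rng.choice(ACTIONS))
            else:                                  # exploit current estimate
                acts.append(max(ACTIONS, key=lambda a: table[a]))
        reward = -abs(LOAD - sum(acts))            # shared penalty for imbalance
        for table, a in zip(q, acts):              # independent Q-value updates
            table[a] += ALPHA * (reward - table[a])
    return q

q = train()
greedy = [max(ACTIONS, key=lambda a: table[a]) for table in q]
print("greedy outputs:", greedy, "imbalance:", LOAD - sum(greedy))
```

The shared reward is what makes this multi-agent rather than two isolated learners: each agent's value estimates depend on the other's evolving policy, and the symmetric mismatch penalty pushes them toward complementary output levels, mirroring the decentralized coordination the abstract attributes to MARL.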
Key words: Multi-Agent Reinforcement Learning / Power System Control / Decentralized Decision-Making / System Stability / Economic Efficiency
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.