Open Access

E3S Web Conf., Volume 511, 2024
International Conference on “Advanced Materials for Green Chemistry and Sustainable Environment” (AMGSE-2024)

Article Number: 01021
Number of page(s): 14
DOI: https://doi.org/10.1051/e3sconf/202451101021
Published online: 10 April 2024
- “Multi-Agent Reinforcement Learning for Power System Operation and Control Search | ScienceDirect.com.” Accessed: Jan. 19, 2024. [Online]. Available: https://www.sciencedirect.com/search?qs=Multi-Agent%20Reinforcement%20Learning%20for%20Power%20System%20Operation%20and%20Control
- E. Kadoche, S. Gourvénec, M. Pallud, and T. Levent, “MARLYC: Multi-Agent Reinforcement Learning Yaw Control,” Renew Energy, vol. 217, Nov. 2023, doi: 10.1016/j.renene.2023.119129.
- X. Wang, A. D’Ariano, S. Su, and T. Tang, “Cooperative train control during the power supply shortage in metro system: A multi-agent reinforcement learning approach,” Transportation Research Part B: Methodological, vol. 170, pp. 244–278, Apr. 2023, doi: 10.1016/j.trb.2023.02.015.
- A. Mughees et al., “Energy-efficient joint resource allocation in 5G HetNet using Multi-Agent Parameterized Deep Reinforcement learning,” Physical Communication, vol. 61, Dec. 2023, doi: 10.1016/j.phycom.2023.102206.
- F. Monfaredi, H. Shayeghi, and P. Siano, “Multi-agent deep reinforcement learning-based optimal energy management for grid-connected multiple energy carrier microgrids,” International Journal of Electrical Power and Energy Systems, vol. 153, Nov. 2023, doi: 10.1016/j.ijepes.2023.109292.
- P. Li, J. Shen, Z. Wu, M. Yin, Y. Dong, and J. Han, “Optimal real-time Voltage/Var control for distribution network: Droop-control based multi-agent deep reinforcement learning,” International Journal of Electrical Power and Energy Systems, vol. 153, Nov. 2023, doi: 10.1016/j.ijepes.2023.109370.
- Y. Jiang, J. Liu, and H. Zheng, “Optimal scheduling of distributed hydrogen refueling stations for fuel supply and reserve demand service with evolutionary transfer multi-agent reinforcement learning,” Int J Hydrogen Energy, vol. 54, pp. 239–255, Feb. 2024, doi: 10.1016/j.ijhydene.2023.04.128.
- T. Zhang, J. Liu, H. Wang, Y. Li, N. Wang, and C. Kang, “Fault diagnosis and protection strategy based on spatio-temporal multi-agent reinforcement learning for active distribution system using phasor measurement units,” Measurement (Lond), vol. 220, Oct. 2023, doi: 10.1016/j.measurement.2023.113291.
- N. Harder, R. Qussous, and A. Weidlich, “Fit for purpose: Modeling wholesale electricity markets realistically with multi-agent deep reinforcement learning,” Energy and AI, vol. 14, Oct. 2023, doi: 10.1016/j.egyai.2023.100295.
- B. Zhang, W. Hu, A. M. Y. M. Ghias, X. Xu, and Z. Chen, “Two-timescale autonomous energy management strategy based on multi-agent deep reinforcement learning approach for residential multicarrier energy system,” Appl Energy, vol. 351, Dec. 2023, doi: 10.1016/j.apenergy.2023.121777.
- B. Zhang, D. Cao, W. Hu, A. M. Y. M. Ghias, and Z. Chen, “Physics-Informed Multi-Agent deep reinforcement learning enabled distributed voltage control for active distribution network using PV inverters,” International Journal of Electrical Power and Energy Systems, vol. 155, Jan. 2024, doi: 10.1016/j.ijepes.2023.109641.
- Z.-C. Qiu, J.-F. Hu, and X.-M. Zhang, “Multi-agent reinforcement learning vibration control and trajectory planning of a double flexible beam coupling system,” Mech Syst Signal Process, vol. 200, Oct. 2023, doi: 10.1016/j.ymssp.2023.110502.
- X. Li, J. Ren, and Y. Li, “Multi-mode filter target tracking method for mobile robot using multi-agent reinforcement learning,” Eng Appl Artif Intell, vol. 127, Jan. 2024, doi: 10.1016/j.engappai.2023.107398.
- X. Guo, X. Zhang, and X. Zhang, “Incentive-oriented power-carbon emissions trading-tradable green certificate integrated market mechanisms using multi-agent deep reinforcement learning,” Appl Energy, vol. 357, Mar. 2024, doi: 10.1016/j.apenergy.2023.122458.
- J. Wang and L. Sun, “Multi-objective multi-agent deep reinforcement learning to reduce bus bunching for multiline services with a shared corridor,” Transp Res Part C Emerg Technol, vol. 155, Oct. 2023, doi: 10.1016/j.trc.2023.104309.
- K. Xiong et al., “Coordinated energy management strategy for multi-energy hub with thermo-electrochemical effect based power-to-ammonia: A multi-agent deep reinforcement learning enabled approach,” Renew Energy, vol. 214, pp. 216–232, Sep. 2023, doi: 10.1016/j.renene.2023.05.067.
- Y. Gao, Y. Matsunami, S. Miyata, and Y. Akashi, “Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system,” Appl Energy, vol. 326, Nov. 2022, doi: 10.1016/j.apenergy.2022.120021.
- D. Liu et al., “Multi-agent quantum-inspired deep reinforcement learning for real-time distributed generation control of 100% renewable energy systems,” Eng Appl Artif Intell, vol. 119, Mar. 2023, doi: 10.1016/j.engappai.2022.105787.
- P. Do, V.-T. Nguyen, A. Voisin, B. Iung, and W. A. F. Neto, “Multi-agent deep reinforcement learning-based maintenance optimization for multi-dependent component systems,” Expert Syst Appl, vol. 245, p. 123144, Jul. 2024, doi: 10.1016/j.eswa.2024.123144.
- J. Bae, J. M. Kim, and S. J. Lee, “Deep reinforcement learning for a multi-objective operation in a nuclear power plant,” Nuclear Engineering and Technology, vol. 55, no. 9, pp. 3277–3290, Sep. 2023, doi: 10.1016/j.net.2023.06.009.
- T. Zhang, Z. Dong, and X. Huang, “Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning,” Energy, vol. 286, Jan. 2024, doi: 10.1016/j.energy.2023.129526.
- B. Wu, X. Zuo, G. Chen, G. Ai, and X. Wan, “Multi-agent deep reinforcement learning based real-time planning approach for responsive customized bus routes,” Comput Ind Eng, Feb. 2023, doi: 10.1016/j.cie.2023.109840.
- Y. Duan et al., “Telemetry-aided cooperative multi-agent online reinforcement learning for DAG task scheduling in computing power networks,” Simul Model Pract Theory, vol. 132, p. 102885, Apr. 2024, doi: 10.1016/j.simpat.2023.102885.
- X. Wang, J. Zhou, B. Qin, and L. Guo, “Coordinated control of wind turbine and hybrid energy storage system based on multi-agent deep reinforcement learning for wind power smoothing,” J Energy Storage, vol. 57, Jan. 2023, doi: 10.1016/j.est.2022.106297.
- A. Ajagekar, B. Decardi-Nelson, and F. You, “Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning,” Appl Energy, vol. 355, Feb. 2024, doi: 10.1016/j.apenergy.2023.122349.
- Md. Z. ul Haq, H. Sood, and R. Kumar, “Effect of using plastic waste on mechanical properties of fly ash based geopolymer concrete,” Mater Today Proc, 2022.
- H. Sood, R. Kumar, P. C. Jena, and S. K. Joshi, “Optimizing the strength of geopolymer concrete incorporating waste plastic,” Mater Today Proc, 2023.
- H. Sood, R. Kumar, P. C. Jena, and S. K. Joshi, “Eco-friendly approach to construction: Incorporating waste plastic in geopolymer concrete,” Mater Today Proc, 2023.
- K. Kumar et al., “Understanding Composites and Intermetallic: Microstructure, Properties, and Applications,” in E3S Web of Conferences, EDP Sciences, 2023, p. 01196.
- K. Kumar et al., “Breaking Barriers: Innovative Fabrication Processes for Nanostructured Materials and Nano Devices,” in E3S Web of Conferences, EDP Sciences, 2023, p. 01197.
- S. Dixit and A. Stefańska, “Bio-logic, a review on the biomimetic application in architectural and structural design,” Ain Shams Engineering Journal, 2022, doi: 10.1016/j.asej.2022.101822.
- M. Kumar et al., “Coordination behavior of Schiff base copper complexes and structural characterization,” MRS Adv, vol. 7, no. 31, pp. 939–943, Nov. 2022, doi: 10.1557/s43580-022-00348-6.
- H. D. Nguyen et al., “A critical review on additive manufacturing of Ti-6Al-4V alloy: Microstructure and mechanical properties,” Journal of Materials Research and Technology, vol. 18, pp. 4641–4661, May 2022, doi: 10.1016/j.jmrt.2022.04.055.
- D. Aghimien et al., “Barriers to Digital Technology Deployment in Value Management Practice,” Buildings, vol. 12, no. 6, Jun. 2022, doi: 10.3390/buildings12060731.
- A. Saini, G. Singh, S. Mehta, H. Singh, and S. Dixit, “A review on mechanical behaviour of electrodeposited Ni-composite coatings,” International Journal on Interactive Design and Manufacturing, Oct. 2022, doi: 10.1007/s12008-022-00969-z.