Please use this identifier to cite or link to this item:
http://hdl.handle.net/11701/44292
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhadan, Anastasia Yu. | - |
dc.contributor.author | Wu, Haitao | - |
dc.contributor.author | Kudin, Pavel S. | - |
dc.contributor.author | Zhang, Yuyi | - |
dc.contributor.author | Petrosian, Ovanes L. | - |
dc.date.accessioned | 2023-10-23T18:15:29Z | - |
dc.date.available | 2023-10-23T18:15:29Z | - |
dc.date.issued | 2023-09 | - |
dc.identifier.citation | Zhadan A. Yu., Wu H., Kudin P. S., Zhang Y., Petrosian O. L. Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches. Vestnik of Saint Petersburg University. Applied Mathematics. Computer Science. Control Processes, 2023, vol. 19, iss. 3, pp. 391–402. https://doi.org/10.21638/11701/spbu10.2023.307 | en_GB |
dc.identifier.other | https://doi.org/10.21638/11701/spbu10.2023.307 | - |
dc.identifier.uri | http://hdl.handle.net/11701/44292 | - |
dc.description.abstract | Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning requires no knowledge of the system dynamics and can provide optimal solutions to nonlinear optimization problems. In this research, the financial cost of energy consumption is reduced by scheduling battery energy with a deep reinforcement learning (RL) method. Reinforcement learning can adapt to equipment parameter changes and noise in the data, whereas mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand, as well as accurate equipment parameters, to achieve good performance, and incurs a high computational cost in large-scale industrial applications. Based on this, it can be assumed that a deep RL based solution is capable of outperforming the classic deterministic optimization model MILP. This study compares four state-of-the-art RL algorithms on the battery power plant control problem: PPO, A2C, SAC, and TD3. According to the simulation results, TD3 shows the best performance, outperforming MILP by 5 % in cost savings while reducing the time to solve the problem by about a factor of three. | en_GB |
dc.description.sponsorship | This work was supported by St. Petersburg State University (project ID: 94062114). | en_GB |
dc.language.iso | en | en_GB |
dc.publisher | St Petersburg State University | en_GB |
dc.relation.ispartofseries | Vestnik of St Petersburg University. Applied Mathematics. Computer Science. Control Processes;Volume 19; Issue 3 | - |
dc.subject | reinforcement learning | en_GB |
dc.subject | energy management system | en_GB |
dc.subject | distributed energy system | en_GB |
dc.subject | numerical optimization | en_GB |
dc.title | Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches | en_GB |
dc.type | Article | en_GB |
Appears in collections: | Issue 3
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
07.pdf | 2,78 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved.