coordination, and online continuous adaptation to
manage concept drift and changing process
parameters.
REFERENCES
Dogru, O., Xie, J., Prakash, O., Chiplunkar, R., Soesanto,
J., Chen, H., Velswamy, K., Ibrahim, F., & Huang, B.
(2024). Reinforcement learning in process industries:
Review and perspective. IEEE/CAA Journal of
Automatica Sinica, 11(2), 283–300.
https://doi.org/10.1109/JAS.2024.124227
Kannari, L., Wessberg, N., Hirvonen, S., Kantorovitch, J.,
& Paiho, S. (2025). Reinforcement learning for control
and optimization of real buildings: Identifying and
addressing implementation hurdles. Journal of Building
Engineering, 62, 112283.
https://doi.org/10.1016/j.jobe.2025.112283
Martins, M. S. E., Sousa, J. M. C., & Vieira, S. (2025). A
systematic review on reinforcement learning for
industrial combinatorial optimization problems.
Applied Sciences, 15(3), 1211.
https://doi.org/10.3390/app15031211
Yu, P., Wan, H., Zhang, B., Wu, Q., Zhao, B., Xu, C., &
Yang, S. (2025). Review on system identification,
control, and optimization based on artificial
intelligence. Mathematics, 13(6), 952.
https://doi.org/10.3390/math13060952
Farooq, A., & Iqbal, K. (2025). A survey of reinforcement
learning for optimization in automation. arXiv preprint.
https://arxiv.org/abs/2502.09417
Wu, W., Yang, P., Zhang, W., Zhou, C., & Shen, X. (2022).
Accuracy-guaranteed collaborative DNN inference in
industrial IoT via deep reinforcement learning. arXiv
preprint. https://arxiv.org/abs/2301.00130
Rjoub, G., Islam, S., Bentahar, J., Almaiah, M. A., &
Alrawashdeh, R. (2024). Enhancing IoT intelligence: A
transformer-based reinforcement learning
methodology. arXiv preprint.
https://arxiv.org/abs/2404.04205
Xu, J., Wan, W., Pan, L., Sun, W., & Liu, Y. (2024). The
fusion of deep reinforcement learning and edge
computing for real-time monitoring and control
optimization in IoT environments. arXiv preprint.
https://arxiv.org/abs/2403.07923
Kegyes, T., Süle, Z., & Abonyi, J. (2021). The applicability
of reinforcement learning methods in the development
of Industry 4.0 applications. Complexity, 2021,
7179374. https://doi.org/10.1155/2021/7179374
Nian, R., Liu, J., & Huang, B. (2020). A review on
reinforcement learning: Introduction and applications
in industrial process control. Computers & Chemical
Engineering, 139, 106886.
https://doi.org/10.1016/j.compchemeng.2020.106886
Benard, N., Pons-Prats, J., Periaux, J., Bugeda, G., Bonnet,
J.-P., & Moreau, E. (2015). Multi-input genetic
algorithm for experimental optimization of the
reattachment downstream of a backward-facing step
with surface plasma actuator. 46th AIAA
Plasmadynamics and Lasers Conference, 2957.
https://doi.org/10.2514/6.2015-2957
Dracopoulos, D. C., & Kent, S. (1997). Genetic
programming for prediction and control. Neural
Computing & Applications, 6(4), 214–228.
https://doi.org/10.1007/BF01413894
Bäck, T., & Schwefel, H.-P. (1993). An overview of
evolutionary algorithms for parameter optimization.
Evolutionary Computation, 1(1), 1–23.
https://doi.org/10.1162/evco.1993.1.1.1
Michalewicz, Z., Janikow, C. Z., & Krawczyk, J. B. (1992).
A modified genetic algorithm for optimal control
problems. Computers & Mathematics with
Applications, 23(12), 83–94.
https://doi.org/10.1016/0898-1221(92)90131-K
Lee, C., Kim, J., Babcock, D., & Goodman, R. (1997).
Application of neural networks to turbulence control for
drag reduction. Physics of Fluids, 9(6), 1740–1747.
https://doi.org/10.1063/1.869290
Brunton, S. L., & Noack, B. R. (2015). Closed-loop
turbulence control: Progress and challenges. Applied
Mechanics Reviews, 67(5), 050801.
https://doi.org/10.1115/1.4031175
Javadi-Moghaddam, J., & Bagheri, A. (2010). An adaptive
neuro-fuzzy sliding mode based genetic algorithm
control system for under water remotely operated
vehicle. Expert Systems with Applications, 37(1), 647–
660. https://doi.org/10.1016/j.eswa.200