
data, enabling real-time parameter optimization and superior performance in terms of stability and speed compared to traditional methods. This approach significantly reduced overshoot and settling time, making it ideal for complex control environments (Saini et al., 2023).
2.3 Reinforcement Learning for Autonomous PID Tuning
RL (Reinforcement Learning) has emerged as a prominent tool for adaptive PID tuning. Recent research has explored RL-based approaches in which agents learn optimal control strategies by interacting with the environment. For instance, one study utilized model-based RL to achieve robust PID tuning; the method effectively handled non-linearity and uncertainty, maintaining reliable performance under varying conditions (Trujillo et al., 2022).
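To make the interaction loop concrete, the sketch below tunes PID gains on a toy first-order plant with a reward equal to the negative integrated squared error. The perturbation-based policy search is only a stand-in for a full RL algorithm, and the plant, reward, and hyperparameters are illustrative assumptions rather than details from the cited work.

```python
import numpy as np

def simulate_episode(gains, setpoint=1.0, dt=0.01, steps=500):
    """Roll out a PID controller on a toy first-order plant; return reward."""
    kp, ki, kd = gains
    y, integral, prev_err, reward = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)           # plant: dy/dt = -y + u
        prev_err = err
        reward -= err ** 2 * dt      # reward = negative integrated squared error
    return reward

# The "agent" proposes perturbed gains each episode and keeps them
# only when the episode reward improves (greedy policy search).
rng = np.random.default_rng(0)
gains, best = np.array([1.0, 0.5, 0.01]), -np.inf
for episode in range(200):
    candidate = np.clip(gains + rng.normal(0.0, 0.1, 3), 0.0, None)
    r = simulate_episode(candidate)
    if r > best:
        gains, best = candidate, r
print("learned gains:", gains.round(3))
```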
2.4 Hybrid Approaches Combining ML and Classical Methods
Several papers propose hybrid approaches that integrate ML with traditional PID tuning techniques. For example, researchers employed RLS (Recursive Least Squares) for system identification and ANNs (Artificial Neural Networks) for parameter estimation; this combination ensured precise tuning while reducing computational overhead (Dogru et al., 2022).
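As a sketch of the identification half of such a hybrid, the snippet below applies standard RLS updates to a first-order ARX model; the plant coefficients, forgetting factor, and the omission of the ANN stage are illustrative assumptions, not details from the cited work.

```python
import numpy as np

# RLS identification of a first-order ARX model: y[k] = a*y[k-1] + b*u[k-1].
# The estimated (a, b) could then feed an ANN that maps model parameters
# to PID gains; that second stage is omitted here.
rng = np.random.default_rng(1)
a_true, b_true, lam = 0.9, 0.1, 0.99        # true plant and forgetting factor

theta = np.zeros(2)                          # parameter estimate [a, b]
P = np.eye(2) * 1000.0                       # covariance (large = low confidence)
y_prev, u_prev = 0.0, 0.0
for k in range(500):
    u = rng.normal()                         # persistent excitation input
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    phi = np.array([y_prev, u_prev])         # regressor vector
    K = P @ phi / (lam + phi @ P @ phi)      # RLS gain
    theta += K * (y - phi @ theta)           # correct with prediction error
    P = (P - np.outer(K, phi) @ P) / lam     # covariance update
    y_prev, u_prev = y, u
print("estimated [a, b]:", theta.round(3))   # approaches [0.9, 0.1]
```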
2.5 Application-Specific Implementations
Industrial Systems: Studies on electromechanical actuators showed how ML-based tuning can improve operational efficiency. One example involved tuning a 3-stage cascaded PID for BLDC (Brushless Direct Current) motors, which yielded a 90 percent reduction in overshoot along with lower energy consumption.
Process Control: In chemical and thermal process industries, ML-based PID tuning has been applied to optimize control loops, resulting in improved energy efficiency and product quality (Jesawada et al., 2022).
Neural networks have been shown to outperform several other intelligent methods in adaptive PID tuning (Lazar et al., 2004; Iplikci, 2010).
Collecting accurate data labels can be demanding in real engineering problems (Guan and Yamamoto, 2020).
ML methods have gained widespread attention because they are data-driven and real-time capable, and part of the literature has focused on diagnosing PID controller performance issues. Machine learning classifiers such as SVMs (Support Vector Machines), decision trees, and neural networks have been used to detect performance degradation in the absence of detailed system models. Other studies examine hybrid configurations that combine conventional control with ML to enhance reliability in several fields, notably manufacturing, power plants, and aerospace. Future work entails handling more complex datasets for higher accuracy, developing explainable models, and moving toward predictive maintenance so that maintenance actions are taken before faults occur (Yağcı et al., 2024).
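A minimal sketch of the degradation-detection idea, using an SVM on synthetic per-loop features; the feature set (overshoot, settling time, IAE) and the class statistics are hypothetical choices, not values from the cited studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical features per control loop: [overshoot, settling_time, IAE]
rng = np.random.default_rng(2)
healthy  = rng.normal([0.05, 2.0, 0.5], [0.02, 0.5, 0.1], size=(100, 3))
degraded = rng.normal([0.30, 6.0, 2.0], [0.10, 1.5, 0.5], size=(100, 3))
X = np.vstack([healthy, degraded])
y = np.array([0] * 100 + [1] * 100)          # 0 = healthy, 1 = degraded

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # no plant model required
print("test accuracy:", clf.score(X_te, y_te))
```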
This study utilizes neural networks and reinforcement learning to develop an adaptive PID controller that regulates pressure drops in non-linear fluid systems. The method integrates Hammerstein identification for system modelling with actor-critic learning to enable real-time PID tuning. This hybrid approach improves adaptability and robustness, achieving better performance than traditional PID controllers in simulation. The study indicates that combining neural networks and RL can yield control solutions for modern nonlinear environments, offering a scalable and advanced approach for complex industrial fluid systems (Bawazir et al., 2024).
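To illustrate the Hammerstein structure (a static nonlinearity followed by linear dynamics), the sketch below identifies a toy Hammerstein model by least squares, with a polynomial nonlinearity standing in for the paper's neural network; the actor-critic stage is omitted and all coefficients are illustrative assumptions.

```python
import numpy as np

# Hammerstein model: v[k] = c1*u[k] + c2*u[k]^2 (static nonlinearity),
# followed by linear dynamics y[k] = a*y[k-1] + v[k-1].
# With a polynomial nonlinearity the model is linear in the unknowns
# (a, c1, c2), so ordinary least squares identifies them directly.
rng = np.random.default_rng(3)
N, y = 1000, np.zeros(1000)
u = rng.uniform(-1, 1, N)
for k in range(1, N):
    v = 0.5 * u[k - 1] + 0.3 * u[k - 1] ** 2       # true nonlinearity
    y[k] = 0.8 * y[k - 1] + v + 0.01 * rng.normal()

Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 2])   # regressors
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a, c1, c2]:", theta.round(3))        # near [0.8, 0.5, 0.3]
```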
The authors present a generalized and readily tunable method to discriminate between acceptable and poor closed-loop performance. Their approach defines optimal yet feasible closed-loop performance based on intuitive quality factors. A diversified set of CPIs (Control Performance Indices) serves as discriminative features for the offline-generated training set. The proposed system is thus intended to be used immediately, without further learning (i.e., during regular operation) (Grelewicz et al., 2023).
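A minimal sketch of extracting such indices from a recorded step response; the particular CPIs (IAE, ISE, overshoot, settling time) and the 2% settling band are common textbook choices, not necessarily those used by the authors.

```python
import numpy as np

def control_performance_indices(t, y, setpoint=1.0, band=0.02):
    """Compute illustrative CPIs from a uniformly sampled step response."""
    dt = t[1] - t[0]
    err = setpoint - y
    iae = np.sum(np.abs(err)) * dt                   # integral of |error|
    ise = np.sum(err ** 2) * dt                      # integral of error^2
    overshoot = max(0.0, y.max() - setpoint) / setpoint
    outside = np.where(np.abs(err) > band * setpoint)[0]
    if outside.size == 0:
        settling = t[0]                              # always within the band
    elif outside[-1] + 1 < len(t):
        settling = t[outside[-1] + 1]                # last time it leaves the band
    else:
        settling = np.inf                            # never settles
    return {"IAE": iae, "ISE": ise,
            "overshoot": overshoot, "settling_time": settling}

t = np.linspace(0, 10, 1001)
y = 1 - np.exp(-t) * np.cos(3 * t)                   # example underdamped response
print(control_performance_indices(t, y))
```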
Another paper explores the use of neural networks for PID tuning. It discusses the challenge of selecting training samples and suggests replacing conventionally tuned PID controllers with the proposed neural tuning method for better control (Zhilov, 2022).
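One way to frame the training-sample question: generate labels offline from a classical tuning rule and fit a network to the mapping from plant parameters to gains. In the sketch below the labels come from an IMC-style PI rule for first-order-plus-dead-time plants; the rule, parameter ranges, and network size are assumptions for illustration, not details from the cited paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generate training samples: FOPDT plant parameters -> PI gains via an
# IMC-style rule (kp = tau / (K * (lam + theta)), ki = kp / tau).
rng = np.random.default_rng(4)
K     = rng.uniform(0.5, 2.0, 2000)       # process gain
tau   = rng.uniform(1.0, 10.0, 2000)      # time constant
theta = rng.uniform(0.1, 1.0, 2000)       # dead time
lam   = 2.0 * theta                       # closed-loop time-constant choice
kp = tau / (K * (lam + theta))
ki = kp / tau

X = np.column_stack([K, tau, theta])
Y = np.column_stack([kp, ki])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, Y)
print(net.predict([[1.0, 5.0, 0.5]]))     # predicted [kp, ki] for a new plant
```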
A DRL (Deep Reinforcement Learning) based PI gain tuning method for a robot driver system is proposed, trained in simulation. A D3QN (Dueling Double Deep Q-Network) is implemented to reduce errors and optimize the gains. In vehicle tests, a significant performance improvement is seen compared to older fuzzy logic controllers (Park et al., 2022).
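Since DQN-family agents act over a discrete action set, such a tuner can be framed as an environment whose actions nudge the PI gains; the sketch below shows this interface with a toy plant and reward, all of which are illustrative assumptions (the D3QN agent itself is omitted).

```python
import numpy as np

class PIGainTuningEnv:
    """Toy environment: each discrete action nudges the PI gains,
    and the reward is the negative tracking cost of a short rollout."""
    ACTIONS = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.05), (0.0, -0.05), (0.0, 0.0)]

    def reset(self):
        self.kp, self.ki = 1.0, 0.1
        return np.array([self.kp, self.ki])

    def step(self, action):
        dkp, dki = self.ACTIONS[action]
        self.kp = max(0.0, self.kp + dkp)
        self.ki = max(0.0, self.ki + dki)
        return np.array([self.kp, self.ki]), -self._tracking_cost()

    def _tracking_cost(self, setpoint=1.0, dt=0.01, steps=300):
        y = integ = cost = 0.0
        for _ in range(steps):
            err = setpoint - y
            integ += err * dt
            u = self.kp * err + self.ki * integ
            y += dt * (-y + u)               # toy first-order plant
            cost += err ** 2 * dt
        return cost
```

A D3QN agent would learn Q-values over these five actions from (state, action, reward) transitions; any DQN-family implementation could be plugged in.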
A PID controller is compared with gradient-descent tuning and CNN-based cloning. The study concludes that PID control gives more accurate and stable results in testing (Abed et al., 2020).
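Gradient-descent tuning of the kind compared here can be sketched with finite-difference gradients of an integrated-squared-error loss; the toy plant, step size, and initialization below are illustrative assumptions, not the cited study's setup.

```python
import numpy as np

def loss(gains, setpoint=1.0, dt=0.01, steps=400):
    """Integrated squared error of a PID loop on a toy first-order plant."""
    kp, ki, kd = gains
    y = integ = total = 0.0
    prev_err = setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        y += dt * (-y + u)
        prev_err = err
        total += err ** 2 * dt
    return total

gains, lr, eps = np.array([0.5, 0.1, 0.0]), 0.05, 1e-4
for _ in range(100):
    grad = np.array([                        # central finite differences
        (loss(gains + eps * e) - loss(gains - eps * e)) / (2 * eps)
        for e in np.eye(3)])
    gains = np.clip(gains - lr * grad, 0.0, None)
print("tuned gains:", gains.round(3))
```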