
data will be used as historical data for transfer-learning purposes. As can be seen, this data exhibits different optimal regions depending on the mass of the copper.
Instead of conducting a comparison with manual tuning, we opted to compare against another optimization approach: Bayesian optimization (BO) without transfer learning. For both BO variants (with and without transfer learning), we used the OpenBox toolkit (Jiang et al., 2024), which is publicly available at https://github.com/PKU-DAIR/open-box.
For BO without transfer learning, the default configuration was used (i.e., surrogate_type = 'gp', initial_runs = 1, init_strategy = 'default'). For BO with transfer learning, the following configuration was used: initial_trials = 5, init_strategy = 'default', surrogate_type = 'tlbo_topov3_gp', acq_optimizer_type = 'random_scipy'. The considered ranges (lower and upper bounds) for the proportional gain ($K_P$) and the integral gain ($K_I$) were defined as $[4, 100]$ and $[0.1, 3]$, respectively. The initial guess was chosen as $K_{p0} = 89.96$ and $K_{i0} = 0.2003$ for both approaches.
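For illustration, the sketch below shows how these settings could map onto OpenBox's Python interface. It is a minimal sketch only: the objective function evaluate_pi, its inner call run_closed_loop_experiment, and the list historical_runs of OpenBox History objects (built from the simulation data) are hypothetical placeholders, and keyword names may differ slightly between OpenBox versions.

```python
# Minimal sketch of the two optimizer configurations described above (assumptions:
# OpenBox's Optimizer interface; `run_closed_loop_experiment` and `historical_runs`
# are hypothetical placeholders).
from openbox import Optimizer, space as sp

# Search space for the PI gains; the default values encode the initial guess,
# which is used when init_strategy = 'default'.
space = sp.Space()
space.add_variables([
    sp.Real('KP', 4.0, 100.0, default_value=89.96),   # proportional gain
    sp.Real('KI', 0.1, 3.0, default_value=0.2003),    # integral gain
])

def evaluate_pi(config):
    """Run one tuning trial and return the cost to be minimized."""
    cost = run_closed_loop_experiment(config['KP'], config['KI'])  # placeholder
    return {'objectives': [cost]}

# BO without transfer learning (default configuration).
opt_plain = Optimizer(
    evaluate_pi, space, max_runs=50,
    surrogate_type='gp', initial_runs=1, init_strategy='default',
    task_id='pi_tuning_bo',
)

# BO with transfer learning, warm-started with histories from the simulation model.
opt_tl = Optimizer(
    evaluate_pi, space, max_runs=50,
    surrogate_type='tlbo_topov3_gp',
    initial_runs=5,                     # 'initial trials = 5' in the text; name assumed
    init_strategy='default',
    acq_optimizer_type='random_scipy',
    transfer_learning_history=historical_runs,   # list of openbox History objects
    task_id='pi_tuning_tlbo',
)

history = opt_tl.run()   # the plain variant is run analogously
```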
The comparison results between BO with and
without transfer learning are shown in Figure 8. The
x-axis represents the number of iterations, while the
y-axis represents the cost. Evidently, BO with transfer learning achieves a convergence rate that is 76% faster than BO without transfer learning. This is a notable result, obtained despite the moderate quality of the model predictions; there is potential to improve this percentage further if additional effort is invested in refining those predictions. To ensure a fair comparison between the two
optimization algorithms, it is advisable to run them
with several initial guess points. This approach helps
to mitigate the influence of any single starting point
on the optimization results, providing a more compre-
hensive evaluation of each algorithm’s performance.
By using multiple initial guesses, we can better as-
sess the robustness and effectiveness of the algorithms
across a broader range of scenarios. Due to time con-
straints, this will be done in the future, together with
direct application to the real thermal plant.
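A possible way to organize such a multi-start comparison is sketched below. The factory make_optimizer(initial_guess) and the helper extract_costs(history) are hypothetical placeholders, and the additional starting points are illustrative only.

```python
# Sketch: compare the two BO variants over several initial guesses and average
# the convergence curves. `make_optimizer(initial_guess)` is a hypothetical factory
# that builds an OpenBox Optimizer (configured as above) whose first trial starts
# from `initial_guess`; `extract_costs` is a hypothetical helper that returns the
# per-iteration costs stored in the resulting history.
import numpy as np

initial_guesses = [
    {'KP': 89.96, 'KI': 0.2003},   # the guess used in this study
    {'KP': 20.0,  'KI': 1.5},      # hypothetical additional starting points
    {'KP': 60.0,  'KI': 0.5},
]

def mean_convergence(make_optimizer, guesses=initial_guesses):
    curves = []
    for guess in guesses:
        history = make_optimizer(guess).run()
        costs = np.asarray(extract_costs(history), dtype=float)
        curves.append(np.minimum.accumulate(costs))   # best cost found so far
    return np.mean(curves, axis=0)
```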
5 CONCLUSION
In this study, we proposed a novel approach to ac-
celerate the auto-tuning of PI controllers during the
commissioning phase. By combining transfer learn-
ing and Bayesian optimization, we aimed to minimize
the number of iterations needed to reach the opti-
mal solution. Transfer learning was utilized to extract
valuable insights from historical data obtained from a
simulation model. The effectiveness of our approach
was demonstrated through its application to a thermal
plant, significantly reducing the number of iterations required to reach the optimal solution.
ACKNOWLEDGMENT
The research was supported by VLAIO under the project HBC.2022.0052 Tupic-ICON, carried out within Flanders Make.
REFERENCES
Aidan, O. (2006). Handbook of PI and PID Controller Tun-
ing Rules. Imperial College Press.
Bazanella, A. S., Campestrini, L., and Eckhard, D. (2011).
Data-driven controller design: The $H_2$ approach.
Springer Science and Business Media.
Boulkroune, B., Jordens, X., Mrak, B., Verhelst, J., De-
praetere, B., Meskens, J., and Bovijn, P. (2024). En-
hancing PI tuning in plant commissioning through Bayesian optimization. In ECC 2024, pages 3220–3225.
Boyd, S., Hast, M., and Åström, K. (2016). MIMO PID tuning via iterated LMI restriction. Int. J. Robust Nonlinear Control, 26:1718–1731.
Campi, M., Lecchini, A., and Savaresi, S. (2002). Vir-
tual reference feedback tuning: A direct method
for the design of feedback controllers. Automatica,
38(8):1337–1346.
Doerr, A., Nguyen-Tuong, D., Marco, A., Schaal, S., and
Trimpe, S. (2017). Model-based policy search for au-
tomatic tuning of multivariate PID controllers. arXiv preprint arXiv:1703.02899.
Fujimoto, Y., Sato, H., and Nagahara, M. (2023). Controller
tuning with Bayesian optimization and its accelera-
tion: Concept and experimental validation. Asian J
Control, 25:2408–2414.
Garnett, R. (2023). Bayesian Optimization. Cambridge
University Press, Cambridge, 1st edition.
Gevers, M. (2002). Modelling, identification and control,
pages 3–16. Springer-Verlag.
Hjalmarsson, H. (1998). Control of nonlinear systems using
iterative feedback tuning. In Proceedings of the 1998
American Control Conference, pages 2083–2087.
Ho, W., Hong, Y., Hansson, A., Hjalmarsson, H., and Deng,
J. (2003). Relay autotuning of PID controllers using iterative feedback tuning. Automatica, 39(1):149–157.
Jiang, H., Shen, Y., Li, Y., Xu, B., Du, S., Zhang, W., Zhang,
C., and Cui, B. (2024). OpenBox: A Python toolkit for
generalized black-box optimization. Journal of Ma-
chine Learning Research, 25(120):1–11.
Kaneko, O., Soma, S., and Fujii, T. (2005). A new approach
to parameter tuning of controllers by using one-shot