habilitation planning of sewer pipes. The DRL agent
learns an improved policy in terms of lower cost and
higher reliability, and uses a GCN to exploit the rela-
tional information encoded in the graph structure of
the sewer network. Our framework is successfully
evaluated on a real dataset, showing its potential for
applications in infrastructure maintenance planning.
The proposed approach is network- and environment-
agnostic: it is not intended to solve only the specific
case study described in this paper, but to serve as a
feasibility study for combining deep reinforcement
learning with graph neural networks for asset man-
agement problems. Different neural network architec-
tures can be plugged in, and the environment can eas-
ily be modified with problem-specific settings.
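To make this modularity concrete, a single graph-convolution step over a small pipe network can be sketched as below. This is a minimal NumPy illustration of the propagation rule used by GCNs (Kipf and Welling, 2017); the toy adjacency matrix, pipe features, and weight shape are invented for this example and do not come from the case study:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetrically normalized
    adjacency with self-loops aggregates neighbour features
    before a linear transform and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(deg ** -0.5)         # D^(-1/2)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # propagate + ReLU

# toy sewer graph: 3 pipes in a line, 2 features per pipe (age, diameter)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[30.0, 0.4],
              [55.0, 0.6],
              [12.0, 0.3]])
W = np.full((2, 4), 0.1)                      # 2 input -> 4 hidden units
H = gcn_layer(A, X, W)
print(H.shape)                                # (3, 4)
```

Swapping in a different architecture then amounts to replacing this layer (e.g., with an attention-based variant), while the environment interface stays unchanged.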
An asset deterioration model that more accurately
resembles reality remains an open problem for fu-
ture work. This includes a more sophisticated way of
extracting or predicting failure rates, and the use of
additional data sources capturing geographic and de-
mographic properties of the surrounding area, such as
traffic load, tree density, and soil information around
the assets. Another open problem is a reward function
that better accounts for the different costs (e.g., re-
placement cost, failure cost, unavailability cost) and
asset-specific aspects (e.g., material, length, impact
on surrounding infrastructure).
ACKNOWLEDGEMENTS
This research has been partially funded by NWO un-
der the grant PrimaVera NWA.1160.18.238.
REFERENCES
Ahmad, R. and Kamaruddin, S. (2012). An overview of
time-based and condition-based maintenance in in-
dustrial application. Computers & Industrial Engi-
neering, 63(1):135–149.
Almasan, P., Suárez-Varela, J., Badia-Sampera, A., Rusek,
K., Barlet-Ros, P., and Cabellos-Aparicio, A. (2020).
Deep Reinforcement Learning meets Graph Neural
Networks: exploring a routing optimization use case.
arXiv:1910.07421 [cs].
Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-
Gonzalez, A., Zambaldi, V., Malinowski, M., Tac-
chetti, A., Raposo, D., Santoro, A., Faulkner, R., Gul-
cehre, C., Song, F., Ballard, A., Gilmer, J., Dahl, G.,
Vaswani, A., Allen, K., Nash, C., Langston, V., Dyer,
C., Heess, N., Wierstra, D., Kohli, P., Botvinick, M.,
Vinyals, O., Li, Y., and Pascanu, R. (2018). Relational
inductive biases, deep learning, and graph networks.
Birolini, A. (2013). Reliability engineering: theory and
practice. Springer Science & Business Media.
Chen, Y. F., Everett, M., Liu, M., and How, J. P. (2017). So-
cially aware motion planning with deep reinforcement
learning. In 2017 IEEE/RSJ International Confer-
ence on Intelligent Robots and Systems (IROS), pages
1343–1350.
da Costa, P. R., Rhuggenaath, J., Zhang, Y., and Akcay,
A. (2020). Learning 2-opt heuristics for the traveling
salesman problem via deep reinforcement learning. In
Asian Conference on Machine Learning, pages 465–
480. PMLR.
Dai, H., Khalil, E. B., Zhang, Y., Dilkina, B., and Song,
L. (2018). Learning combinatorial optimization algo-
rithms over graphs.
Fey, M. and Lenssen, J. E. (2019). Fast graph represen-
tation learning with PyTorch Geometric. In ICLR
Workshop on Representation Learning on Graphs and
Manifolds.
Fontecha, J. E., Agarwal, P., Torres, M. N., Mukherjee,
S., Walteros, J. L., and Rodríguez, J. P. (2021). A
two-stage data-driven spatiotemporal analysis to pre-
dict failure risk of urban sewer systems leveraging ma-
chine learning algorithms. Risk Analysis.
Garg, S., Bajpai, A., and Mausam (2019). Size Independent
Neural Transfer for RDDL Planning. Proceedings of
the International Conference on Automated Planning
and Scheduling, 29:631–636.
Hansen, B. D., Jensen, D. G., Rasmussen, S. H., Tamouk,
J., Uggerby, M., and Moeslund, T. B. (2019). General
Sewer Deterioration Model Using Random Forest. In
2019 IEEE Symposium Series on Computational In-
telligence (SSCI), pages 834–841.
Hu, L., Liu, Z., Hu, W., Wang, Y., Tan, J., and Wu,
F. (2020). Petri-net-based dynamic scheduling of
flexible manufacturing system via deep reinforcement
learning with graph convolutional network. Journal of
Manufacturing Systems, 55:1–14.
Janisch, J., Pevný, T., and Lisý, V. (2021). Symbolic Rela-
tional Deep Reinforcement Learning based on Graph
Neural Networks. arXiv:2009.12462 [cs].
Joshi, C. K., Laurent, T., and Bresson, X. (2019). An ef-
ficient graph convolutional network technique for the
travelling salesman problem.
Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996).
Reinforcement Learning: A Survey. Journal of Artifi-
cial Intelligence Research, 4:237–285.
Kipf, T. N. and Welling, M. (2017). Semi-supervised clas-
siﬁcation with graph convolutional networks.
Li, F., Sun, Y., Ma, L., and Mathew, J. (2011). A grouping
model for distributed pipeline assets maintenance de-
cision. In 2011 International Conference on Quality,
Reliability, Risk, Maintenance, and Safety Engineer-
ing, pages 601–606.
Li, Y. (2017). Deep reinforcement learning: An overview.
arXiv preprint arXiv:1701.07274.
Luong, N. C., Hoang, D. T., Gong, S., Niyato, D., Wang,
P., Liang, Y.-C., and Kim, D. I. (2019). Applica-
tions of Deep Reinforcement Learning in Communi-
cations and Networking: A Survey. IEEE Communi-
cations Surveys & Tutorials, 21(4):3133–3174.
ICAART 2022 - 14th International Conference on Agents and Artificial Intelligence