
Grati, R., Fattouch, N., and Boukadi, K. (2025). Ontologies for smart agriculture: A path toward explainable AI–a systematic literature review. IEEE Access.
Gu, H. et al. (2017). An object-based semantic classification method for high resolution remote sensing imagery using ontology. Remote Sensing (RS), 9(4):329.
Hammouda, N., Mahfoudh, M., and Boukadi, K. (2023). MoonCAB: A modular ontology for computational analysis of animal behavior. In 2023 20th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA), pages 1–8. IEEE.
Hammouda, N., Mahfoudh, M., and Boukadi, K. (2024). MoonEV: Modular ontology evaluation and validation tool. Procedia Computer Science, 246:3532–3541.
Hammouda, N., Mahfoudh, M., Grati, R., and Boukadi, K. (2025). Predicting sheep body condition scores via explainable deep learning model. International CAIP, 30.
Hamza, M. C. and Bourabah, A. (2024). Exploratory study on the relationship between age, reproductive stage, body condition score, and liver biochemical profiles in Rembi breed ewes. Iranian Journal of Veterinary Medicine, 18(2).
Horrocks, I., Patel-Schneider, P. F., Boley, H., Tabet, S., Grosof, B., Dean, M., et al. (2004). SWRL: A semantic web rule language combining OWL and RuleML. W3C Member Submission, 21(79):1–31.
Kondylakis, H., Nikolaos, A., Dimitra, P., Anastasios, K., Emmanouel, K., Kyriakos, K., Iraklis, S., Stylianos, K., and Papadakis, N. (2021). DELTA: A modular ontology evaluation system. Information, 12(8):301.
Kong, X., Liu, S., and Zhu, L. (2024). Toward human-centered XAI in practice: A survey. Machine Intelligence Research, 21(4):740–770.
Kosov, P., El Kadhi, N., Zanni-Merk, C., and Gardashova, L. (2024a). Advancing XAI: New properties to broaden semantic-based explanations of black-box learning models. Procedia Computer Science, 246:2292–2301.
Kosov, P., El Kadhi, N., Zanni-Merk, C., and Gardashova, L. (2024b). Semantic-based XAI: Leveraging ontology properties to enhance explainability. In 2024 International Conference on Decision Aid Sciences and Applications (DASA), pages 1–5. IEEE.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Marcos, D., Lobry, S., and Tuia, D. (2019). Semantically interpretable activation maps: What-where-how explanations within CNNs. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 4207–4215. IEEE.
Ngo, Q. H., Kechadi, T., and Le-Khac, N.-A. (2022). Oak4XAI: Model towards out-of-box explainable artificial intelligence for digital agriculture. In International Conference on Innovative Techniques and Applications of Artificial Intelligence, pages 238–251. Springer.
Poveda-Villalón, M., Gómez-Pérez, A., and Suárez-Figueroa, M. C. (2014). OOPS! (Ontology Pitfall Scanner!): An on-line tool for ontology evaluation. International Journal on Semantic Web and Information Systems (IJSWIS), 10(2):7–34.
Retzlaff, C. O., Angerschmid, A., Saranti, A., Schneeberger, D., Roettger, R., Mueller, H., and Holzinger, A. (2024). Post-hoc vs ante-hoc explanations: XAI design guidelines for data scientists. Cognitive Systems Research, 86:101243.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Rodríguez Alvarez, J., Arroqui, M., Mangudo, P., Toloza, J., Jatip, D., Rodriguez, J. M., Teyseyre, A., Sanz, C., Zunino, A., Machado, C., et al. (2019). Estimating body condition score in dairy cows from depth images using convolutional neural networks, transfer learning and model ensembling techniques. Agronomy, 9(2):90.
Sajitha, P., Andrushia, A. D., Mostafa, N., Shdefat, A. Y., Suni, S., and Anand, N. (2023). Smart farming application using knowledge embedded-graph convolutional neural network (KEGCNN) for banana quality detection. Journal of Agriculture and Food Research, 14:100767.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626.
Sharma, S. and Jain, S. (2024). OntoXAI: A semantic web rule language approach for explainable artificial intelligence. Cluster Computing, 27(10):14951–14975.
Shimizu, C., Hammar, K., and Hitzler, P. (2021). Modular ontology modeling. Semantic Web (SW), (Preprint):1–31.
Shimizu, C., Hirt, Q., and Hitzler, P. (2019). MODL: A modular ontology design library. arXiv preprint arXiv:1904.05405.
Sun, C., Xu, H., Chen, Y., and Zhang, D. (2024). AS-XAI: Self-supervised automatic semantic interpretation for CNN. Advanced Intelligent Systems, 6(12):2400359.
Uschold, M. and King, M. (1995). Towards a methodology
for building ontologies. Citeseer.
Vall, E. (2020). Guide harmonisé de notation de l'état corporel (NEC) pour les animaux de ferme du Sahel: ruminants de grande taille (bovins, camelins) et de petite taille (ovins, caprins) et équidés (asins et équins) [Harmonized body condition scoring (BCS) guide for Sahelian farm animals: large ruminants (cattle, camels), small ruminants (sheep, goats), and equids (donkeys and horses)].
Vázquez-Martínez, I., Tırınk, C., Salazar-Cuytun, R., Mezo-Solis, J. A., Garcia Herrera, R. A., Orzuna-Orzuna, J. F., and Chay-Canul, A. J. (2023). Predicting body weight through biometric measurements in growing hair sheep using data mining and machine learning algorithms. Tropical Animal Health and Production, 55(5):307.
Toward Semantic Explainable AI in Livestock: MoonCAB Enrichment for O-XAI to Sheep BCS Prediction