Towards Semantic Integration for Explainable Artificial Intelligence in
the Biomedical Domain
Cátia Pesquita
LASIGE, Faculdade de Ciências da Universidade de Lisboa, Portugal
Keywords:
Ontology Alignment, Knowledge Graph Alignment, Ontology Matching, Knowledge Graphs, Ontologies,
Semantic Web, Explainable Artificial Intelligence, Machine Learning, Healthcare, Clinical Research, Health
Informatics.
Abstract:
Explainable artificial intelligence typically focuses on data-based explanations, lacking the semantic context
needed to produce human-centric explanations. This is especially relevant in healthcare and the life sciences,
where the heterogeneity in both data sources and user expertise, together with the underlying complexity of
the domain and its applications, poses serious challenges. The Semantic Web represents an unparalleled opportunity in this
area: it provides large amounts of freely available data in the form of Knowledge Graphs, which link data to
ontologies, and can thus act as background knowledge for building explanations closer to human conceptu-
alizations. In particular, knowledge graphs support the computation of semantic similarity between objects,
providing an understanding of why certain objects are considered similar or different. This is a basic aspect of
explainability and is at the core of many machine learning applications. However, when data covers multiple
domains, it may be necessary to integrate different ontologies to cover the full semantic landscape of the un-
derlying data. We propose a methodology for semantic explanations in the biomedical domain that is based on
the semantic annotation and integration of heterogenous data into a common semantic landscape that supports
semantic similarity assessments. This methodology builds upon state of the art semantic web technologies and
produces post-hoc explanations that are independent of the machine learning method employed.
1 INTRODUCTION
Recent successes of black-box models, such as deep
neural networks, are revolutionizing artificial intelligence (AI) applications, but their effectiveness and
integration in real-world applications are still limited by their inability
to explain their decisions in a human-understandable
way. These limitations stem from ethical concerns,
but also from issues of accountability, safety and liability (Guidotti
et al., 2018). In critical use cases, for instance in
clinical decision making, there is reluctance to deploy such models because the cost of misclassification is potentially very high, endangering patients' health and lives (Miotto et al., 2018). More-
over, models that predict natural phenomena may bet-
ter contribute to scientific advancements when re-
searchers are able to understand them. This is ev-
ident in the application of black-box models in ge-
nomics, drug-discovery and pathology, among others
(Min et al., 2017).
This is one of the historical challenges of AI:
the ability of a model to afford explanations of how
and why it arrived at a particular outcome. How-
ever, the definition of explainable AI (XAI) is still
not agreed upon by the community, and the term is often used
interchangeably with interpretable or comprehensible
AI (Guidotti et al., 2018). While an interpretable system requires transparency in its underlying mechanisms,
a comprehensible one can be opaque as long as it
emits symbols a user can reason over. Both en-
able explanations of decisions, but they do not yield
explanations themselves, leaving explanation genera-
tion to human analysts who may deduce different ex-
planations depending on their background knowledge
about the data and its domain. Relatively few works
address these issues and they are typically based on
researchers’ intuitions of what constitutes a ‘good’
explanation, without taking into account how humans
explain decisions and behaviour to each other, ar-
guably a strong starting point to improve human in-
teractions with explanatory AI (Miller, 2019).
2 SEMANTIC TECHNOLOGIES
FOR XAI IN BIOMEDICINE
One can argue that what is needed for humans to
understand each other can be transferred to what is
needed to make AI outcomes understandable for hu-
mans. We need to fulfil the properties of human un-
derstanding, namely, that human explanations imply
social interaction which is grounded in a shared con-
text, and that users select explanations from a large
space of possible explanations based on their under-
standing of the context (Miller, 2019). The vast ma-
jority of works in XAI lack the ability to integrate
background knowledge into the process to create a
shared context, rendering them inadequate to build
explanations for common users without AI exper-
tise. Providing this contextualization is an even bigger
challenge in areas such as systems medicine, where
data comes from different domains and at different levels of granularity, or personalized medicine,
which often relies on highly diverse data, ranging
from molecules, organelles, cells, tissues and organs all
the way up to individuals, environmental factors, populations, and ecosystems (Holzinger et al., 2019).
2.1 Explainable Knowledge-enabled
Systems
In the scientific and healthcare domains, where ma-
chine learning (ML) methods, and particularly black-box methods such as deep learning, have
been gaining traction, it has been proposed that suc-
cessful explainable-AI systems need to be able to link
ML models to representations of domain knowledge
(Holzinger et al., 2017; Wollschlaeger et al., 2020).
Recently, Chari et al. (2020) defined explainable
knowledge-enabled systems as "AI systems that include a representation of the domain knowledge in the
field of application, have mechanisms to incorporate
the users' context, are interpretable, and host explanation facilities that generate user-comprehensible,
context-aware, and provenance-enabled explanations
of the mechanistic functioning of the AI system and
the knowledge used".
However, most existing approaches that tackle
XAI in knowledge-enabled systems focus on inter-
pretability and not on building explanations. A re-
cent survey (Chari et al., 2020) in this area presents
neuro-symbolic approaches (Hitzler et al., 2020) as
a potential solution, and although some preliminary
works tackle explanation of deep learning for image
recognition (e.g., (Zhou et al., 2018; Sarker et al.,
2017)) or transfer learning (Chen et al., 2018) most
simply allow for the inclusion of knowledge in the
machine learning approaches but do not yield expla-
nations themselves. Few XAI approaches exist in
clinical application areas, and most are still focused on
statistical explanations (e.g., (Lundberg et al., 2018)).
Phan et al. (2017) employ neuro-symbolic learning
to predict human behaviour, producing simple explanations based on the identification of key features;
however, these explanations are flat and limited in expressivity.
2.2 Ontologies and Knowledge Graphs
We have established that for AI outcomes to be truly
useful, they need to support interpretation, and this
requires semantic context. By semantic context, we
mean the situation in which a term or entity appears.
In relational databases and spreadsheets, semantic
context is sometimes lacking because important in-
formation about what the various data fields mean and
how they relate to one another is often implicit in the
names of database tables and column headers. What
is needed is a way to express the semantic connec-
tions between data items in a way that is expressive
enough to capture nuanced relationships while at the
same time formalized and restrictive enough to allow
software as well as humans to make inferences based
on the links.
Semantic web (SW) technologies and artefacts
(such as ontologies and knowledge graphs) are a
potential solution to the problem of human-centric,
knowledge-enabled explanation since they provide
this semantic context (Lecue, 2019; d’Amato, 2020).
Ontologies establish a conceptual model that repre-
sents the concepts of a domain and their relationships
with one another (Munn and Smith, 2013), in a way
that can be understood both by humans and machines.
In general, an ontology encoding a domain of knowl-
edge reflects the consensus of specialists or communi-
ties dealing with that domain. Domains are expressed
mainly through classes referencing real-world entities
(e.g. “Head”, “Fever”), and the relationships found
between them (e.g. “Face is part of Head”, “Fever is
a Vital Sign Finding”). The classes are frequently ac-
companied by lexical properties, such as preferred la-
bels, synonyms, textual descriptions, etc., which pro-
vide human-readable definitions. In contrast, relation-
ships between classes are represented not in text but
through formal axioms, or statements, which make
their intended meaning more amenable to automatic
manipulation and reasoning, thus allowing the use of
modern computational power to operate on the mean-
ing, rather than the structure, of the real-world enti-
ties. A knowledge graph (KG) is frequently taken
to mean a collection of data items with relations established between them and described according to
an ontology. The ontological layer of a KG thus de-
scribes and imposes some order on the data of a do-
main of interest.
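For illustration, the following sketch (in Python, using the rdflib library) builds a toy version of such an ontology and KG with the classes mentioned above. The namespace, property names and patient identifier are hypothetical, and the part-of relation is simplified to a direct property rather than the existential restriction a full OWL modelling would use:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace for this toy example (not a real ontology IRI).
EX = Namespace("http://example.org/clinical#")

g = Graph()
g.bind("ex", EX)

# Ontology layer: classes referencing real-world entities,
# accompanied by human-readable lexical properties (labels).
for cls, label in [(EX.Head, "Head"), (EX.Face, "Face"),
                   (EX.Fever, "Fever"),
                   (EX.VitalSignFinding, "Vital Sign Finding")]:
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(label)))

# Formal axioms: "Fever is a Vital Sign Finding" as a subclass axiom;
# "Face is part of Head" via a partOf property (a simplification).
g.add((EX.Fever, RDFS.subClassOf, EX.VitalSignFinding))
g.add((EX.partOf, RDF.type, OWL.ObjectProperty))
g.add((EX.Face, EX.partOf, EX.Head))

# Data layer: a patient record linked to an ontology class, so that the
# ontology describes and imposes order on the data, yielding a minimal KG.
g.add((EX.patient42, RDF.type, OWL.NamedIndividual))
g.add((EX.patient42, EX.hasFinding, EX.Fever))

print(g.serialize(format="turtle"))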
In the biomedical and healthcare domains, the Se-
mantic Web represents an unparalleled opportunity
since it provides large amounts of freely available
data and a set of technologies dedicated to data shar-
ing, integration, management, and reasoning (Fer-
reira et al., 2020). The availability of over 1,000
open biomedical ontologies in BioPortal and more
than 2 billion data items publicly available as a KG
(i.e., Linked Open Data) represents a unique oppor-
tunity to integrate clinical and biomedical data. In
the biomedical domain, these datasets range from se-
mantic annotations for the functions of gene prod-
ucts, abnormal phenotypes related to diseases and
genes, or drug adverse events. In electronic health
records, patient-level data is commonly described us-
ing standardized coding schemes and ontologies, such
as UMLS, SNOMED-CT and ICD-9/10. This anno-
tation is mostly confined to final diagnosis and proce-
dures, which are frequently used for billing purposes,
whereas finer-grained clinical information is typically
found in free text format, making its linking to on-
tologies a greater challenge. However, once data and
AI outcomes are integrated with ontologies and KGs,
they can serve as background knowledge to XAI ap-
plications and in this way afford the semantic context
that is essential for explanations closer to human con-
ceptualizations and thus more useful in real-world ap-
plications.
3 A METHODOLOGY FOR
SEMANTIC EXPLANATIONS
XAI techniques can be categorized according to how
they support human reasoning into inductive reason-
ing, querying and similarity modelling (Wang et al.,
2019). Semantic explanations can support all three
kinds, since KGs naturally support reasoning,
querying and semantic similarity computation. To produce semantic explanations, we
need to address three challenges: (1) how to link in-
put data and AI outcomes to their meaning; (2) how to
link this meaningful data with what is already known
and (3) how to use this contextual information to build
effective explanations.
Figure 1 depicts the proposed methodology for
semantic explanations in biomedical AI applications.
It builds upon our well-established experience in se-
mantic technologies within the biomedical domain
and addresses the main challenges faced when build-
ing semantic explanations.
The core of the methodology is an integrated KG
that supports the XAI approaches. The integrated KG
is built by connecting heterogeneous data (scientific,
clinical, etc.) to existing domain ontologies to provide
a rich semantic layer to the data. This is achieved by
performing (1) Ontology selection to determine the
optimal set of ontologies to adequately describe the
data, (2) Semantic annotation, to link the data to
the ontologies, and (3) Semantic integration to es-
tablish links between the ontologies. The final step
is to build (4) Semantic explanation approaches that
explore background knowledge afforded by the KG.
This methodology focuses on model-agnostic explanations, which work regardless of the machine learning model employed and can be integrated
into already existing approaches.
Let us consider a simple example of semantic ex-
planations in healthcare, where we have trained a ma-
chine learning model that identifies patients with res-
piratory tract infections based on EHR data. Three
patients arrive at the hospital with similar complaints.
Patient A is described as having "fever", B as having
"fever w/ infection/cough", and C as having a "respiratory tract infection". Figure 2 describes the semantic annotation of these cases using a subgraph of
SNOMED CT.
Our machine learning model has classified both B
and C as positive examples. While for patient C the
classification as having a respiratory tract infection
is straightforward, the same is not true for patient B.
However, by having the patients described within the
same semantic landscape which now includes back-
ground knowledge about symptoms and diseases, we
are able to understand why patient B was also classi-
fied as having a respiratory tract infection. Notice that
although the semantic annotation links each record to
different concepts in the ontology, it is possible to rea-
son that both patients B and C are more similar to each
other than to patient A, since both B and C suffer from
an "Infectious process" associated with a "Respiratory finding". Of course, in this simplified example,
patients are only described by easily comprehensible
textual features, but the methodology extends to cases
where objects are described by several annotations
across multiple domains.
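To make this reasoning concrete, the following sketch expands each record's annotations with their ontology ancestors over a hand-coded hierarchy that loosely mimics the SNOMED CT subgraph of Figure 2 (the class names and parent links are illustrative, not actual SNOMED CT content):

# Hand-coded subsumption hierarchy, loosely mimicking Figure 2.
PARENTS = {
    "Fever": ["Vital sign finding"],
    "Infection": ["Infectious process"],
    "Cough": ["Respiratory finding"],
    "Respiratory tract infection": ["Infectious process",
                                    "Respiratory finding"],
}

def closure(annotations):
    """Expand a set of annotation classes with all their ontology ancestors."""
    expanded = set(annotations)
    frontier = list(annotations)
    while frontier:
        for parent in PARENTS.get(frontier.pop(), []):
            if parent not in expanded:
                expanded.add(parent)
                frontier.append(parent)
    return expanded

records = {
    "A": {"Fever"},
    "B": {"Fever", "Infection", "Cough"},
    "C": {"Respiratory tract infection"},
}
expanded = {patient: closure(ann) for patient, ann in records.items()}

# B and C now share ancestors that A lacks, grounding the model's
# decision in domain knowledge rather than in surface features alone.
print(expanded["B"] & expanded["C"])  # {'Infectious process', 'Respiratory finding'}
print(expanded["A"] & expanded["C"])  # set()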
The following describes each step of the methodology, highlights the main challenges that need to be
addressed, and provides a brief overview of state-of-the-art semantic web technologies and tools that can
be employed to tackle them.
Figure 1: A methodology for semantic explanations for heterogeneous biomedical data.
3.1 Ontology Selection
Regarding the first challenge, multiple ontologies are typically needed to achieve good coverage of semantic annotations, especially in multi-domain applications. However, to improve reason-
ing support, the selection should be focused on the
minimum set of ontologies that still provides ade-
quate granularity and scope to ensure the best possi-
ble coverage. Automated ontology recommendation
services, such as the BioPortal Recommender, are able
to recommend one or more ontologies that provide
adequate coverage for input textual data (Martínez-Romero et al., 2017), considering aspects such as
complementarity of resources and semantic richness.
When multiple ontologies are selected, they should as
much as possible be aligned and integrated to build a
single unified semantic landscape. In previous work,
we have developed automated approaches to select
appropriate ontologies for data integration following
these principles (Faria et al., 2014).
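As an illustration of how such a service can be queried programmatically, the sketch below calls the public BioPortal Recommender REST endpoint; the API key is a placeholder, and the response fields follow the Recommender 2.0 documentation but should be verified against the live API:

import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_BIOPORTAL_APIKEY"  # free key from a BioPortal account
TEXT = "fever with infection and cough, suspected respiratory tract infection"

# Query the BioPortal Recommender with sample clinical text.
url = ("https://data.bioontology.org/recommender?"
       + urllib.parse.urlencode({"input": TEXT, "apikey": API_KEY}))
with urllib.request.urlopen(url) as response:
    recommendations = json.load(response)

# Each recommendation bundles one or more ontologies with an overall
# score; field names are assumed from the documented JSON.
for rec in recommendations[:3]:
    acronyms = [o["acronym"] for o in rec["ontologies"]]
    print(acronyms, rec.get("evaluationScore"))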
3.2 Semantic Annotation
Once suitable ontologies are selected, the next step is
semantic annotation, i.e., connecting data to its meaning as encoded in an ontology, thereby creating a knowledge graph. To ensure a high-quality semantic de-
scription of the data explored by the machine learning
models, we need to not only annotate feature values
but also metadata (e.g. feature labels) and classifica-
tion targets in the case of supervised learning. The se-
mantic annotation of biomedical text needs to address
several challenges (Jovanović and Bagheri, 2017): in
the case of clinical notes, the use of abbreviations
and acronyms as well as the prevalence of spelling
mistakes and of meaningless notes (e.g., filling in a
mandatory field with a period); in the case of biomedical terms, a high degree of ambiguity, both
in terms of polysemy and homonymy, which is further compounded by the use of acronyms that correspond to words (e.g., the CAT gene). These challenges
are addressed by semantic annotation tools specifi-
cally designed for the semantic annotation of biomed-
ical and clinical text (Tchechmedjiev et al., 2018)
and recent advances in word embeddings specifically
trained in the biological (Lee et al., 2020) and clin-
ical domains (Alsentzer et al., 2019) are improving
the performance in biomedical semantic annotation
(Gonçalves et al., 2019).
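A minimal sketch of this annotation step using the NCBO Annotator REST service is shown below (the Annotator+ extends this service for clinical text); the API key and clinical note are placeholders, and the parsing assumes the documented JSON structure:

import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_BIOPORTAL_APIKEY"
NOTE = "Patient presents with fever w/ infection/cough."

# Annotate a clinical note against SNOMED CT with the NCBO Annotator.
params = {"text": NOTE, "ontologies": "SNOMEDCT",
          "longest_only": "true", "apikey": API_KEY}
url = ("https://data.bioontology.org/annotator?"
       + urllib.parse.urlencode(params))
with urllib.request.urlopen(url) as response:
    annotations = json.load(response)

# Each annotation links a text span to an ontology class IRI.
for ann in annotations:
    cls = ann["annotatedClass"]["@id"]
    spans = [(m["from"], m["to"], m["text"]) for m in ann["annotations"]]
    print(cls, spans)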
3.3 Semantic Integration
It is not uncommon for a specific application to require multiple ontologies to describe the underlying
data: on the one hand, because complex applications
such as clinical care and research require the integration of multiple domains of knowledge; on the
other, because uncoordinated development often results in the adoption of multiple ontologies and controlled vocabularies that cover the same or similar domains. In these situations, where more than one on-
tology is used to annotate the data and to establish
a ”shared context”, we need to identify the connec-
tions and relations between different ontologies and
knowledge graphs. Discovering the semantic links
or alignments between ontologies and the data sets
that they organize can be very difficult, particularly
if the datasets are large and complex, as is routinely
the case in the biomedical domain. Biomedical and
clinical datasets are particularly challenging to align
for several reasons.
Figure 2: Example of the semantic annotation of patient records and machine learning target.
Massive amounts of multimodal and diverse data are currently being generated by re-
searchers, hospitals and mobile devices around the
world, and their combined analysis presents unique
opportunities for healthcare, science, and society. The
data can range from molecular to phenotypic, be-
havioural to clinical, individual to population, genetic
to environmental. Biomedical Big Data goes well be-
yond the recognized challenges in handling large vol-
umes of data or large numbers of data sources, and
presents specific challenges pertaining to the hetero-
geneity and complexity of data as well as to the com-
plexity of its subsequent analysis.
Recent advances in semantic technologies sup-
port the rapid integration of datasets automatically
or semi-automatically by encoding machine-readable
representations of the meanings of data and metadata
items; in particular, the success of the Linked Data
movement demonstrates both the possibility of, and
need for, semantic integration of data from diverse
sources on a massive scale. This challenge can be ad-
dressed by employing ontology matching and linked
data matching techniques which are able to identify
meaningful links between entities described with dif-
ferent ontologies or vocabularies, in effect building a
KG that connects all relevant entities through contex-
tualized relations.
However, finding these relations is challenging,
because biomedical vocabulary is rich and complex,
different ontologies may model related concepts in
a different way, and the relations between concepts
may be themselves complex and semantically rich. In
previous work, we have developed the AgreementMakerLight ontology matching system, which employs a diverse set of computational techniques, ranging from lexical matching to machine learning, reasoning, graph visualization and user interaction, to
perform the alignment of ontologies and knowledge
graphs. It is particularly suited to handling the challenges of biomedical ontology matching (Faria et al.,
2018), including finding complex correspondences
(Oliveira and Pesquita, 2018).
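The following sketch illustrates only the lexical matching component of such a pipeline, not AgreementMakerLight itself: it indexes the labels and synonyms of two ontologies (the file names are hypothetical) and proposes equivalence candidates for classes whose normalized labels coincide:

from collections import defaultdict

from rdflib import Graph
from rdflib.namespace import RDFS, SKOS

def label_index(path):
    """Map each normalized label or synonym to the classes bearing it."""
    g = Graph()
    g.parse(path)  # rdflib guesses the RDF syntax from the file extension
    index = defaultdict(set)
    for predicate in (RDFS.label, SKOS.prefLabel, SKOS.altLabel):
        for cls, label in g.subject_objects(predicate):
            index[str(label).strip().lower()].add(cls)
    return index

def lexical_match(path_a, path_b):
    """Propose equivalence candidates between classes with identical labels."""
    index_a, index_b = label_index(path_a), label_index(path_b)
    for label in index_a.keys() & index_b.keys():
        for a in index_a[label]:
            for b in index_b[label]:
                yield a, b, label

# Hypothetical ontology files, for illustration only:
for a, b, label in lexical_match("ontology_a.owl", "ontology_b.owl"):
    print(f"{a} = {b}  (shared label: '{label}')")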
3.4 Semantic Explanations
When the outcomes of the ML models are semantically integrated with the input data, a shared semantic
landscape becomes available to support explanations. Although
explanations based on reasoning and querying can
be employed with the proposed methodology, since
KGs naturally support both activities, here we focus
on semantic similarity-based explanations. A natu-
ral process in human learning is to identify similar
and distinguishing features to group similar objects
and discriminate different ones. At their core, many
types of AI approaches take into account similarity
modelling, including distance-based methods, such
as clustering; classification into different kinds, such
as supervised learning; and dimensionality reduction,
such as matrix factorization or autoencoders. Expla-
nations for these approaches can be based on under-
standing why certain objects are considered similar or
different (Wang et al., 2019).
The semantic similarity between two objects can
be measured by comparing the ontology entities that
describe them (Pesquita, 2017). A simple semantic
similarity measure based on the ratio of shared ontology classes would score the similarity between patient B and patient C as 3/6. Computing the simi-
larity between the classification target and each pa-
tient is also possible, with patient A having a score
of 1/6, B having a score of 1, and patient C having
a score of 2/5. Both types of similarities, between
instances, and between an instance and a classifica-
tion target can be presented as explanations. There is
a large number of semantic similarity measures that
take into account different ontology and object properties, providing more sophisticated similarity assessments. There are several challenges in
measuring semantic similarity in the biomedical domain, namely how to address the multiple aspects
that a KG can represent in the context of a specific
application (Sousa et al., 2020), how to adequately
consider the specificity of ontology classes (Aouicha
and Taieb, 2016), and how to employ multiple ontologies (Ferreira and Couto, 2019). We have extensive
experience in biomedical semantic similarity, having
developed methods for its computation and evaluation, e.g., (Pesquita, 2017; Cardoso et al., 2020).
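As a minimal sketch of such a ratio-based measure, the code below computes the Jaccard similarity over annotation sets that have already been expanded with their ontology ancestors (as in the sketch above); since the exact SNOMED CT subgraph of Figure 2 is not reproduced here, the annotation sets are illustrative and the resulting scores differ from the 3/6, 1/6, 1 and 2/5 quoted in the text:

# Annotation sets already expanded with their ontology ancestors
# (illustrative names, not the actual Figure 2 subgraph).
patients = {
    "A": {"Fever", "Vital sign finding"},
    "B": {"Fever", "Vital sign finding", "Infection", "Infectious process",
          "Cough", "Respiratory finding"},
    "C": {"Respiratory tract infection", "Infectious process",
          "Respiratory finding"},
}
target = {"Respiratory tract infection", "Infectious process",
          "Respiratory finding"}

def shared_class_ratio(a, b):
    """Ratio of shared ontology classes (Jaccard similarity)."""
    return len(a & b) / len(a | b)

# Instance-to-instance similarity: B is far closer to C than A is.
print(shared_class_ratio(patients["B"], patients["C"]))  # 2/7, about 0.29
print(shared_class_ratio(patients["A"], patients["C"]))  # 0.0

# Instance-to-target similarity, usable to explain the classification.
for patient, annotations in patients.items():
    print(patient, round(shared_class_ratio(annotations, target), 2))
    # A: 0.0, B: 0.29, C: 1.0

# The shared classes themselves form the human-readable explanation:
print(patients["B"] & target)  # {'Infectious process', 'Respiratory finding'}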
4 CONCLUSIONS
This work proposes a methodology to enable seman-
tic explanations of machine learning applications in
the biomedical domain. The methodology tackles
the main challenges in providing human-centric ex-
planations based on a contextualized understanding
of the data and AI outcomes. It leverages the large
amounts of freely available biomedical data and meta-
data in the form of Knowledge Graphs, and builds
upon state-of-the-art solutions for semantic annotation and integration to link the data and AI outcomes with already established knowledge within the
domain. It then explores semantic similarity between
instances and between instances and outcomes to sup-
port similarity-based explanations. This methodology
affords post-hoc explanations that are built indepen-
dently of the machine learning algorithms employed,
and can thus be integrated into any application for
which data can be semantically annotated with exist-
ing biomedical ontologies.
In future work, we will employ this methodol-
ogy to build a semantic explanation system integrat-
ing our existing contributions in semantic annotation,
integration and similarity and apply it to the explana-
tion of biomedical machine learning applications, in-
cluding protein-protein interaction prediction, gene-
disease association and disease progression predic-
tion.
ACKNOWLEDGEMENTS
This work was funded by the Portuguese FCT through
the LASIGE Research Unit (UIDB/00408/2020 and
UIDP/00408/2020), and also by the SMILAX project
(PTDC/EEI-ESS/4633/2014).
REFERENCES
Alsentzer, E., Murphy, J., Boag, W., Weng, W.-H., Jindi,
D., Naumann, T., and McDermott, M. (2019). Pub-
licly available clinical bert embeddings. In Proceed-
ings of the 2nd Clinical Natural Language Processing
Workshop, pages 72–78.
Aouicha, M. B. and Taieb, M. A. H. (2016). Computing
semantic similarity between biomedical concepts us-
ing new information content approach. Journal of
biomedical informatics, 59:258–275.
Cardoso, C., Sousa, R. T., Köhler, S., and Pesquita, C.
(2020). A collection of benchmark data sets for
knowledge graph-based similarity in the biomedical
domain. Database, 2020.
Chari, S., Gruen, D. M., Seneviratne, O., and McGuinness,
D. L. (2020). Directions for explainable knowledge-
enabled systems.
Chen, J., Lecue, F., Pan, J., Horrocks, I., and Chen, H.
(2018). Knowledge-based transfer learning explana-
tion. In 16th International Conference on Principles
of Knowledge Representation and Reasoning, pages
349–358. AAAI Press.
d’Amato, C. (2020). Machine learning for the semantic
web: Lessons learnt and next research directions. Se-
mantic Web, (Preprint):1–9.
Faria, D., Pesquita, C., Mott, I., Martins, C., Couto, F. M.,
and Cruz, I. F. (2018). Tackling the challenges of
matching biomedical ontologies. Journal of biomedi-
cal semantics, 9(1):4.
Faria, D., Pesquita, C., Santos, E., Cruz, I. F., and Couto,
F. M. (2014). Automatic background knowledge se-
lection for matching biomedical ontologies. PloS one,
9(11):e111226.
Ferreira, J. D. and Couto, F. M. (2019). Multi-domain se-
mantic similarity in biomedical research. BMC bioin-
formatics, 20(10):23–31.
Ferreira, J. D., Teixeira, D. C., and Pesquita, C. (2020).
Biomedical ontologies: Coverage, access and use.
In Wolkenhauer, O., editor, Systems Medicine Inte-
grative, Qualitative and Computational Approaches,
pages 382 – 395. Academic Press, Elsevier.
Gonçalves, R. S., Kamdar, M. R., and Musen, M. A. (2019).
Aligning biomedical metadata with ontologies using
clustering and embeddings. In European Semantic
Web Conference, pages 146–161. Springer.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Gian-
notti, F., and Pedreschi, D. (2018). A survey of meth-
ods for explaining black box models. ACM computing
surveys (CSUR), 51(5):1–42.
Hitzler, P., Bianchi, F., Ebrahimi, M., and Sarker, M. K.
(2020). Neural-symbolic integration and the semantic
web. Semantic Web, 11(1):3–11.
Holzinger, A., Biemann, C., Pattichis, C. S., and Kell,
D. B. (2017). What do we need to build explainable
ai systems for the medical domain? arXiv preprint
arXiv:1712.09923.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., and
Müller, H. (2019). Causability and explainability of
artificial intelligence in medicine. Wiley Interdisci-
plinary Reviews: Data Mining and Knowledge Dis-
covery, 9(4):e1312.
Jovanović, J. and Bagheri, E. (2017). Semantic annota-
tion in biomedicine: the current landscape. Journal
of biomedical semantics, 8(1):44.
Lecue, F. (2019). On the role of knowledge graphs in ex-
plainable ai. Semantic Web, (Preprint):1–11.
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H.,
and Kang, J. (2020). Biobert: a pre-trained biomedi-
cal language representation model for biomedical text
mining. Bioinformatics, 36(4):1234–1240.
Lundberg, S. M., Nair, B., Vavilala, M. S., Horibe, M.,
Eisses, M. J., Adams, T., Liston, D. E., Low, D. K.-
W., Newman, S.-F., Kim, J., et al. (2018). Explain-
able machine-learning predictions for the prevention
of hypoxaemia during surgery. Nature biomedical en-
gineering, 2(10):749–760.
Martínez-Romero, M., Jonquet, C., O'Connor, M. J., Gray-
beal, J., Pazos, A., and Musen, M. A. (2017). Ncbo
ontology recommender 2.0: an enhanced approach
for biomedical ontology recommendation. Journal of
biomedical semantics, 8(1):21.
Miller, T. (2019). Explanation in artificial intelligence: In-
sights from the social sciences. Artificial Intelligence,
267:1–38.
Min, S., Lee, B., and Yoon, S. (2017). Deep learn-
ing in bioinformatics. Briefings in bioinformatics,
18(5):851–869.
Miotto, R., Wang, F., Wang, S., Jiang, X., and Dudley, J. T.
(2018). Deep learning for healthcare: review, oppor-
tunities and challenges. Briefings in bioinformatics,
19(6):1236–1246.
Munn, K. and Smith, B. (2013). Applied ontology: An in-
troduction, volume 9. Walter de Gruyter.
Oliveira, D. and Pesquita, C. (2018). Improving the inter-
operability of biomedical ontologies with compound
alignments. Journal of biomedical semantics, 9(1):1.
Pesquita, C. (2017). Semantic similarity in the gene ontol-
ogy. In The gene ontology handbook, pages 161–173.
Humana Press, New York, NY.
Phan, N., Dou, D., Wang, H., Kil, D., and Piniewski, B.
(2017). Ontology-based deep learning for human be-
havior prediction with explanations in health social
networks. Information sciences, 384:298–313.
Sarker, M. K., Xie, N., Doran, D., Raymer, M., and Hitzler,
P. (2017). Explaining trained neural networks with se-
mantic web technologies: First steps. In Twelfth Inter-
national Workshop on Neural-Symbolic Learning and
Reasoning 2017, London, UK, July 17-18, 2017.
Sousa, R. T., Silva, S., and Pesquita, C. (2020). Evolving
knowledge graph similarity for supervised learning in
complex biomedical domains. BMC bioinformatics,
21(1):6.
Tchechmedjiev, A., Abdaoui, A., Emonet, V., Melzi, S.,
Jonnagaddala, J., and Jonquet, C. (2018). En-
hanced functionalities for annotating and indexing
clinical text with the ncbo annotator+. Bioinformat-
ics, 34(11):1962–1965.
Wang, D., Yang, Q., Abdul, A., and Lim, B. Y. (2019). De-
signing theory-driven user-centric explainable ai. In
Proceedings of the 2019 CHI conference on human
factors in computing systems, pages 1–15.
Wollschlaeger, B., Eichenberg, E., and Kabitzsch, K.
(2020). Explain yourself: A semantic annotation
framework to facilitate tagging of semantic informa-
tion in health smart homes. In HEALTHINF, pages
133–144.
Zhou, B., Bau, D., Oliva, A., and Torralba, A. (2018). In-
terpreting deep visual representations via network dis-
section. IEEE transactions on pattern analysis and
machine intelligence, 41(9):2131–2145.