A Description Language for Similarity, Belief Change and Trust
Aaron Hunter
Department of Computing, British Columbia Institute of Technology, Burnaby, Canada
Keywords:
Trust, Belief Revision, Knowledge Representation.
Abstract:
We introduce a simple framework for describing and reasoning about situations where an agent receives infor-
mation reported from external sources, and these reports cause them to change their beliefs. Our framework
is inspired by classic action description languages, which use sets of causal statements to specify action effects
in terms of transition systems. We suggest that this style of language can effectively capture important prop-
erties of similarity and trust, which are required to perform belief revision in practical settings. The language
introduced in this paper allows us to specify a similarity relation on states, and it also allows us to explicitly
associate an incoming report with a specific formula to be used as the input for a suitable belief revision op-
erator. The result is a flexible framework that can describe a variety of belief change functions, and it can
also capture the trust that is held in the reporting agent in a simple and transparent way. We demonstrate the
connection with existing trust-influenced models of belief change. We then consider a speculative application
where we apply our framework to reason about the correctness of trusted third party protocols. Directions for
future work are considered.
1 INTRODUCTION
There is a tradition in formal Knowledge Representa-
tion in which compact logic-based languages are used
for reasoning about the effects of actions. One influ-
ential class of such languages includes the so-called
action languages (Baral and Gelfond, 1997; Baral
et al., 1997). Such languages filled an important role
in the literature of the era, by giving declarative rep-
resentations of important problems in a formal set-
ting. Over time, action languages have become less
popular as more sophisticated action formalisms have
proven to have greater utility. However, we suggest
that this style of formalism can still be valuable to ex-
plicitly describe distinct aspects of reasoning. Specif-
ically, we propose that a simple declarative approach
can be valuable in specifying the interaction between trust and belief when agents exchange informative messages.
In this paper, we propose a new language that uses
the basic action language framework to describe both
similarity between worlds and trust in reports. In this
manner, we introduce a simple formal framework that
allows for representing and reasoning about commu-
nicative actions that selectively impact beliefs based
on the knowledge of a reporting agent.
This work makes several contributions to the liter-
ature on belief change and trust. The framework intro-
duced provides a mechanism for explicitly specifying
a similarity ordering on states, which can be used to
define a belief revision operator. At the same time,
the framework models knowledge-based trust, by ex-
plicitly specifying the connection between reports and
belief revision. For example, a report of φ ∧ ψ might
only cause an agent to revise by ψ in cases where the
reporting agent is not trusted to know the truth of φ.
The final contribution of this paper is really a specula-
tive position for future investigation. We propose that
the framework introduced could actually be a useful
tool for reasoning about the security of trusted third
party protocols. This application is described, but we
leave a full treatment of the problem for future work.
2 PRELIMINARIES
Throughout this paper, we are concerned with for-
mal approaches to Knowledge Representation. As
such, we assume knowledge of propositional logic as
a starting point. We review some basic terminology.
A propositional signature P is a set of propositional
variables that can be true or false. A single proposi-
tional variable is called an atomic formula. A formula
of propositional logic is defined using the usual logical connectives ¬, ∧, ∨. A literal is either p ∈ P or ¬p, where p ∈ P.
A propositional interpretation for P assigns each
variable in P a true/false value. We also use the term
state to refer to a propositional interpretation and we
let 2^P denote the set of all propositional interpretations. We will also be concerned with beliefs and be-
lief change in this paper. We define a belief state to
be a set of states; informally the belief state K rep-
resents the beliefs of an agent who believes that the
actual state of the world must be one of the elements
of K.
Broadly, in logical approaches to Knowledge Rep-
resentation, we think of atomic formulas as represent-
ing properties of the world. So a variable like Rain
might be used as such: it is true in states where it is
raining, and it is false in states where it is not rain-
ing. In this manner, we are able to represent incom-
plete beliefs. A belief state that includes some states
where Rain is true and some states where Rain is false
will be used to capture an agent’s uncertainty about
whether or not it is raining.
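To make this representation concrete, the following small Python sketch (a hypothetical encoding, not part of the paper's formalism; the vocabulary and names are only illustrative) enumerates states and builds such an uncertain belief state.

```python
# Hypothetical encoding (not from the paper): a state is the frozenset of
# variables it makes true; a belief state is a set of such states.
from itertools import combinations

VARS = ["Rain", "CarBroken"]   # illustrative vocabulary

def all_states(variables):
    """Enumerate every propositional interpretation over the variables."""
    return [frozenset(c) for r in range(len(variables) + 1)
            for c in combinations(variables, r)]

# An agent who is sure the car is fine but uncertain about rain:
K = {s for s in all_states(VARS) if "CarBroken" not in s}
print(K)   # two states: one where Rain is true, one where it is false
```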
Finally, as a general statement, throughout this pa-
per we will often be concerned with the notion of
trust. We will consider formal notions of trust later,
but for now we make a simple disclaimer. When we
say that an agent is trusted over φ in this paper, we
simply mean that we will believe φ if that agent tells
us φ is true. Informally, however, we are thinking about this in terms of perceived knowledge. In other words, we trust an agent on some fact just in case we believe that they have the requisite knowledge to know when that fact is true. This is different from the notion of trust due to honesty. When dealing with honesty, we need to consider the idea that a particular agent may be intentionally deceptive. This introduces different problems that we do not address in this paper.
2.1 Action Languages
We briefly describe the action language A. We as-
sume an underlying propositional signature, as well
as an underlying set of action symbols. A sentence of
the action language A has the form:
A causes L if P
where A is an action, L is a literal and P is a propo-
sitional formula. Following the terminology used in
the area, we will sometimes refer to atomic formulas
as fluent symbols; this just reinforces the fact that the
truth value of an atomic formula can be changed as
actions are executed.
A set of sentences of A defines a transition system
(Gelfond and Lifschitz, 1998). A transition system
is simply a graph where the nodes are labelled with
states and the edges are labelled with actions. The
semantics of A dictates that for any set S of sentences,
the associated transition system will include an edge labelled with A from s_1 to s_2 just in case:
1. s_1 |= P and s_2 |= L.
2. For all atomic formulas Q that do not occur in L, s_1 |= Q iff s_2 |= Q.
Hence, a set of causal sentences serves a single pur-
pose: it describes how the state of the world changes
when actions are executed. One advantage of an ac-
tion language is that it gives a compact, declarative
description of action effects that is easy to read and
understand.
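As a purely illustrative sketch, the following Python fragment computes a direct-effect successor state from a set of such sentences; the encoding of states, literals and preconditions is an assumption made for the example, not notation from the action language literature.

```python
# Hypothetical encoding: a state is a frozenset of true atoms, a literal is an
# (atom, value) pair, and each sentence is a triple (action, literal, precondition),
# where the precondition is a set of literals (a conjunction).

def satisfies(state, literals):
    """True iff the state makes every literal in the collection true."""
    return all((atom in state) == value for atom, value in literals)

def successor(state, sentences, action):
    """Apply every applicable 'A causes L if P' sentence to the state at once."""
    result = set(state)
    for act, (atom, value), precondition in sentences:
        if act == action and satisfies(state, precondition):
            if value:
                result.add(atom)
            else:
                result.discard(atom)
    return frozenset(result)

# "flip causes Light if ¬Light" and "flip causes ¬Light if Light"
sentences = [("flip", ("Light", True), {("Light", False)}),
             ("flip", ("Light", False), {("Light", True)})]
print(successor(frozenset({"Light"}), sentences, "flip"))   # frozenset() -> light off
```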
2.2 Belief Change
Belief revision refers to the process where an agent
receives new information, and has to incorporate it
with their current beliefs. One important approach to belief revision is the AGM approach. In the AGM approach, a belief revision operator is a function that maps a belief state K and a formula φ to a new belief state K ∗ φ. We say that ∗ is
an AGM revision operator if it satisfies a certain set
of rationality postulates, which are normally called
the AGM postulates. We do not list the postulates
here, but instead refer the reader to (Alchourrón et al.,
1985) for a complete description of the framework.
While AGM revision operators are defined in
terms of a set of rationality postulates, it has also been
shown that there is an equivalent semantic characteri-
zation. In particular, it has been shown that an opera-
tor satisfies the AGM postulates just in case there is a
function f that maps each initial belief state K to a to-
tal pre-order
K
over states such that K φ is the set of
minimal states in
K
that satisfy φ. In the literature,
the function f is called a faithful assignment(Katsuno
and Mendelzon, 1992).
We can think of the ordering
K
as a plausibility
ordering, where a state precedes another if it is con-
sidered to be more plausible. The important point for
our purposes is that we can determine the outcome of
AGM revision by finding all states consistent with the
new information that are minimal with respect to the
total pre-order.
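A minimal sketch of this semantic recipe, under the assumption that the total pre-order ⪯_K is presented as a rank function (lower rank meaning more plausible), might look as follows.

```python
# Hypothetical encoding: states are frozensets of true atoms; the total
# pre-order attached to K is presented as a rank function (lower = more
# plausible); phi is a predicate on states.

def revise(states, rank, phi):
    """Return the phi-states that are minimal in the pre-order induced by rank."""
    candidates = [s for s in states if phi(s)]
    if not candidates:
        return set()
    best = min(rank(s) for s in candidates)
    return {s for s in candidates if rank(s) == best}

states = [frozenset(), frozenset({"Rain"})]
rank = lambda s: 0 if "Rain" not in s else 1           # currently: probably no rain
print(revise(states, rank, lambda s: "Rain" in s))     # {frozenset({'Rain'})}
```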
2.3 Trust
In the original approaches to belief revision, the new
information always had to be incorporated into the
new belief state. In other words, following revision
by φ, the underlying agent would always believe φ.
Of course, in many practical settings, this is not a rea-
sonable assumption; we should only believe the new
information if the source of the information is trusted.
There has been work on integrating trust into belief
revision both in the setting of AGM revision (Booth
and Hunter, 2018) and also in the setting of modal
logics of belief (Liu and Lorini, 2017).
In the present paper, we will introduce sentences
that indicate when an action causes an agent to be-
lieve a particular formula φ. This is how we capture
trust: by explicitly specifying when the actions of an-
other agent cause us to believe certain formulas. If
we think of the actions as announcements, then we
are able to explicitly specify when we believe the in-
formation announced by another agent. This is one
key aspect of trust, which is important for many ap-
plications.
3 DESCRIBING BELIEF CHANGE
3.1 Similarity Descriptions
In this section, we present a simple action language
style approach for giving concise descriptions of the
similarity between states.
We use the term similarity description language
to refer to a language that is used to describe the sim-
ilarity between propositional interpretations. In the
following definition, we introduce the similarity de-
scription language D.
Definition 1. A proposition of the similarity descrip-
tion language D is an expression of the form
if φ then ψ adds dissimilarity i,
where φ, ψ are conjunctions of literals and i ∈ Z≥1.
In the rule presented in the definition, we refer to
φ as the head, we refer to ψ as the body, and we refer
to i as the increment. A set of propositions of D is
called a similarity description.
The semantics of D is given by associating a dis-
tance function d with every similarity description.
Definition 2. Let SD be a similarity description. The
distance function d_SD : 2^F × 2^F → Z≥0 is defined as follows.
1. d_SD(w, w) = 0.
2. d_SD(w, v) = Σ_{i ∈ I} i, where I is the set of positive integers that appear in propositions in SD of the form
if φ then ψ adds dissimilarity i
where w |= φ and v |= ψ.
Hence, the distance between w and v is calculated
by taking the sum of all the increments with heads
satisfied by w and bodies satisfied by v.
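A small sketch of Definition 2, under an assumed encoding in which a conjunction of literals is a set of (atom, value) pairs, a state is a frozenset of true atoms, and a similarity description is a list of (head, body, increment) triples, is given below.

```python
# Hypothetical encoding: a conjunction of literals is a set of (atom, value)
# pairs, a state is a frozenset of true atoms, and a similarity description is
# a list of (head, body, increment) triples.

def holds(state, conjunction):
    """True iff the state satisfies every literal in the conjunction."""
    return all((atom in state) == value for atom, value in conjunction)

def d_sd(sd, w, v):
    """Definition 2: sum the increments whose head w satisfies and whose body v satisfies."""
    if w == v:
        return 0
    return sum(i for head, body, i in sd if holds(w, head) and holds(v, body))

sd = [({("Rain", True)}, {("Rain", False)}, 3)]    # if Rain then ¬Rain adds dissimilarity 3
print(d_sd(sd, frozenset({"Rain"}), frozenset()))  # 3
```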
Proposition 1. Let d : 2^F × 2^F → Z≥0 such that d(w, w) = 0 for all w. Then d = d_SD for some similarity description SD.
Proof. For each state w, let φ_w be the unique conjunction of literals such that, for each atomic formula F:
- F is a conjunct in φ_w if F is true in w;
- ¬F is a conjunct in φ_w if F is false in w.
Then define SD as follows. For each pair of states w, v, SD contains the sentence:
if φ_w then φ_v adds dissimilarity d(w, v).
It follows that d = d_SD.
In general, the function d defined by a similarity
description does not satisfy the properties that we ex-
pect to hold for a distance measure. For example, it need not be symmetric and it need not satisfy the triangle inequality. So it is not a distance in the usual sense of the word; it is just a function that returns a natural number for each pair of points. Nevertheless, we suggest that this language does allow for the compact representation of some natural distance functions.
We now give some examples of particular distance functions that the language can capture.
Example 1. Let val : F → Z≥1, so val is a function that maps every fluent symbol F to some positive integer. Intuitively, we think of val as assigning some measure of subjective importance to each fluent symbol. The similarity description SD(val) is defined as follows. For each fluent symbol F, SD(val) contains the propositions
if F then ¬F adds dissimilarity val(F)
if ¬F then F adds dissimilarity val(F)
Note that, if val uniformly maps every fluent symbol
to 1, then the distance associated with SD(val) is the
Hamming distance between interpretations.
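Under the same assumed encoding, the construction SD(val) and its Hamming special case can be checked directly; the helper functions are repeated so the snippet stands alone.

```python
# Helpers repeated so the snippet stands alone (same assumed encoding).
def holds(state, conj):
    return all((atom in state) == value for atom, value in conj)

def d_sd(sd, w, v):
    return 0 if w == v else sum(i for h, b, i in sd if holds(w, h) and holds(v, b))

def sd_from_val(variables, val):
    """Example 1: for each fluent F, add 'if F then ¬F' and 'if ¬F then F' with weight val(F)."""
    sd = []
    for f in variables:
        sd.append(({(f, True)}, {(f, False)}, val(f)))
        sd.append(({(f, False)}, {(f, True)}, val(f)))
    return sd

sd = sd_from_val(["Rain", "Snow"], lambda f: 1)
print(d_sd(sd, frozenset({"Rain"}), frozenset({"Snow"})))   # 2, the Hamming distance
```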
Example 2. Suppose that there is some distinguished
fluent symbol U ∈ F, and val is defined as follows:
val(U) = 2^|F|,  val(F) = 1 for all F ≠ U.
In this case, the distance function associated with
SD(val) assigns large distances between worlds that
disagree on U. We can think of U as a fluent symbol
that is very unlikely to change under revision. This
weighting captures a parametric difference operator,
where one variable is more resistant to change (Pep-
pas and Williams, 2018).
3.2 A Report Description Language
An epistemic action signature is a pair ⟨F, R⟩, where
F is a propositional signature, and R is a designated
set of F-formulas called reports. Informally, these are
messages that can be received from some underlying
source of information.
We now define a new kind of description language A_D, which we will call a report description language.
Definition 3. A theory of the language A_D is a set of sentences of one of the following forms:
1. α causes to believe φ
2. if φ then γ adds dissimilarity i
where φ and γ are conjunctions of literals, α is a report, and i ∈ Z≥1. A theory is constrained to have at most one rule of form (1) for each report (up to logical equivalence).
The important feature of the language A_D is that it
describes how reported information will be incorpo-
rated into the beliefs of some underlying agent. The
following definitions demonstrate how this is done.
Definition 4. For any theory T of A_D, let T(D) denote the subset of T that consists of all similarity propositions in T. Let d_T denote the distance function defined by T(D).
Hence, d_T is the distance function defined by the restriction of T to just the similarity sentences. In the following definition, we show how a theory of A_D can define a revision operator ∗.
Definition 5. Let T be a theory of A_D. The function ∗ : 2^S × R → 2^S is defined as follows. If (α causes to believe φ) is in T, then K ∗ α is equal to:
{s | s |= φ and, for some w ∈ K, d(s, w) is minimal}.
If there is no such rule with head α in T, then ∗ is the identity function.
Hence, K ∗ α is the set of φ-states that are mini-
mally distant from states in K according to the dis-
tance function defined by the description. This defi-
nition is similar to the definition of distance-based re-
vision found in (Delgrande, 2004), where it is shown
that the operator satisfies the AGM postulates under
some natural restrictions. But note that we are not re-
vising by α; we are revising only by the formula φ
that α causes us to believe. In this manner, we are
capturing partial trust in the information source. We
return to this point in section 4.
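A sketch of Definition 5 under the same assumed encoding is given below; the causes-to-believe sentences are kept in a dictionary from report names to the conjunction of literals to be believed, which is an illustrative choice rather than anything mandated by the language.

```python
# Hypothetical encoding, as in the earlier sketches; the causes-to-believe
# rules are a dictionary from report names to the conjunction of literals
# the agent comes to believe.

def holds(state, conj):
    return all((atom in state) == value for atom, value in conj)

def d_sd(sd, w, v):
    return 0 if w == v else sum(i for h, b, i in sd if holds(w, h) and holds(v, b))

def revise_by_report(K, alpha, rules, sd, states):
    """K * alpha (Definition 5): revise by the believed formula, not by alpha itself."""
    if alpha not in rules:                      # no causes-to-believe rule for alpha
        return set(K)
    candidates = [s for s in states if holds(s, rules[alpha])]
    if not candidates or not K:
        return set(candidates)
    best = min(d_sd(sd, s, w) for s in candidates for w in K)
    return {s for s in candidates if any(d_sd(sd, s, w) == best for w in K)}
```

A report with no causes-to-believe rule leaves K unchanged in this sketch, matching the behaviour described in Proposition 3 below.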
3.3 Basic Properties
The report description language A_D is very flexible, to the point that it is hard to give general properties
to the point that it is hard to give general properties
without restricting the form of theories. Nevertheless,
we can give some basic results. We start with the ex-
treme cases.
Proposition 2. If T is a theory that contains
(α causes to believe φ) but it contains no similarity
sentences, then K ∗ α = mod(φ) for all K ⊆ S.
Proof. Since there are no similarity sentences in T, d_T(w, v) = 0 for all w, v. So K ∗ α is just the set of models of φ.
On the other hand, we can also consider the case
where there are no causal sentences.
Proposition 3. If T is a theory with no causal sentences, then K ∗ α = K for all K ⊆ S and all reports α.
Proof. If T contains no causal sentences, then there
is no effect for any report. Hence there is no change
in belief.
Of course, in between these extreme cases, there
are more interesting situations. For any set K of states
and any distance function d, define the relation ⪯_{K,d} such that t_1 ⪯_{K,d} t_2 just in case the minimum distance from t_1 to K is less than the minimum distance from t_2 to K.
Proposition 4. Let T be a theory containing the sen-
tence
φ causes to believe φ
for all formulas φ. Let d be the function defined by T(D). If, for each K, ⪯_{K,d} is a total pre-order, then ∗ is an AGM revision operator.
Proof. We can define ∗ to be the revision operator defined by the faithful assignment K ↦ ⪯_{K,d}.
Hence, if every formula φ is taken as evidence
of φ, then we can define AGM revision operators by
carefully specifying the similarity relation to ensure
we have a faithful assignment.
From a high-level perspective, the important point
here is that the description language we have defined
can do two things independently. First, it can be used
to specify a similarity relation that is useful for defin-
ing revision operators. But independently, the lan-
guage can be used to specify a relationship between
reports and believed outcomes; this is done through
the causes-to-believe sentences. This allows us to
define situations where the formulas we believe fol-
lowing a report may not be identical to the reports
themselves. However, at present, we do not have any
formal restriction on this latter connection. The re-
lationship between reports and believed outcomes is
completely flexible.
4 REPORTS AND TRUST
4.1 Separating Trust and Similarity
If we just look at the dissimilarity sentences, a theory
T of A_D actually defines an idealized belief change
operator through distance-based revision. But that is
not the operator that we have associated with T . The
operator is obtained from the idealized operator, but
filtered through the trust that is explicitly specified in
the causal sentences. If we had several information
sources, we could define causal sentences for each.
This would give several trust-based operators, based
on the same underlying distance function. For now,
we stick with a single source.
We look at some examples.
Example 3. Suppose that we have a rule of this form:
Rain causes to believe Rain.
This indicates that the information source is trusted to
determine when it is raining. However, suppose that
the following rules are not included:
CarBroken causes to believe CarBroken.
¬Rain causes to believe ¬Rain.
This means that we would not trust them when they
tell us that our car is broken. Moreover, we would not
even trust them if they said it was not raining.
In the preceding example, we show how we can
filter out the part of the report on which an agent is
not trusted. The next example shows how we can rep-
resent ignorance.
Example 4. Consider rules of the form:
Rain ∧ CarBroken causes to believe Rain
Snow causes to believe Rain
In this case, we are essentially treating the reporting agent as if they cannot tell the difference between different kinds of precipitation. Whether they report
snow or rain, we always believe it is raining. More-
over, we do not trust them with respect to the informa-
tion about our car.
These examples provide some illustrative cases,
where our description language can be used to cap-
ture interesting relationships between trust and belief
change.
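For illustration, the two examples correspond to a causes-to-believe table of the following shape (a hypothetical encoding, matching the dictionary format used in the sketch of Definition 5).

```python
# Causes-to-believe rules for Examples 3 and 4, in the dictionary format of
# the earlier sketch; report names and the encoding are assumptions.
rules = {
    "Rain":               {("Rain", True)},   # Rain causes to believe Rain
    "Rain_and_CarBroken": {("Rain", True)},   # Rain ∧ CarBroken causes to believe Rain
    "Snow":               {("Rain", True)},   # Snow causes to believe Rain
}
# Reports of CarBroken or ¬Rain have no entry, so by Definition 5 they leave
# the belief state unchanged.
```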
4.2 Trust-Influenced Belief Revision
In the literature, several approaches to trust-
influenced belief change have appeared. In this sec-
tion, we briefly show how our work is connected to
one of these approaches. Specifically, we look at
the so-called trust-sensitive belief revision operators of
(Booth and Hunter, 2018). In this framework, there is
an idealized revision operator, similar to the one de-
fined by the similarity sentences in our setting. Trust-
sensitive revision operators are defined with respect
to a partition over possible states; we only trust an
agent to distinguish between states that are in differ-
ent partition cells. Hence, there is a division between
the specification of trust and the underlying revision
operator.
Definition 6. Let Π be a partition over states. Define B_Π to be the set of causes-to-believe sentences that includes
α causes to believe φ
just in case α and φ are maximal conjunctions of literals specifying states s and t, where s and t are in different cells of Π.
Hence, we can use a partition to define a set of
causal sentences that captures the division used to de-
fine trust-sensitive revision. The following result is
straightforward.
Proposition 5. Let ∗_Π be the trust-sensitive revision operator defined by the idealized revision operator ∗ and the state partition Π. If we let T consist of the union of B_Π and the similarity description SD defining ∗, then ∗_Π is the revision operator defined by T.
The point here is that the separation of the trust partition and the revision operator allows us to specify each part independently.
In general, this kind of manual construction can
flexibly allow us to specify a variety of trust relation-
ships with the reporting agent.
Proposition 6. Let P be a set of formulas. Then there
is a theory T such that, for each ψ ∈ P, if φ |= ψ then K ∗ φ |= ψ.
So we can define a theory where an agent is trusted
just in case they report a formula in the set P; this
is similar to the model of trust specified in (Liu and
Lorini, 2017). In fact, the proposition ensures we will
believe ψ ∈ P whenever the agent reports something
that entails ψ.
5 APPLICATION: TRUSTED
THIRD PARTIES
5.1 Protocol Verification
We propose that our framework can be useful for pro-
tocol verification. This is an area where logics of be-
lief have been applied in the past, starting with the
highly influential work on BAN logic (Burrows et al., 1990).

[Figure 1: Simple Key Agreement Protocol. Panels "Initialization Steps" and "Sharing Keys" show the messages exchanged among A, T, and B.]

The idea for logic-based protocol verification
is to formalize the protocol in terms of the beliefs of
the participants. In an authentication protocol, for ex-
ample, the goal is to prove that some agent has a par-
ticular identity.
There are two advantages to using a logic-based
approach to protocol verification. The first advantage
is that we can give a declarative description of the
problem, and then simply consider the beliefs of each
agent after the protocol is executed. Hence, the fact
that logics are easy to read and understand is a benefit
in this context.
The other advantage of a logic-based approach is
that it permits precise proofs of correctness for pro-
tocols. We know that many protocols are vulnerable
to subtle attacks that are hard to predict in advance.
As such, the best way to verify the correctness of a
protocol is through a formal proof of correctness. In
practice, of course, formalizing a protocol in a logic is
difficult. Moreover, we need to make some assump-
tions about the message passing environment that are
not always accurate. So this approach to protocol ver-
ification is not perfect; but it is one tool that has been
used to prove protocol correctness in the past.
For the present paper, we admit that the appli-
cation to protocol verification is somewhat specula-
tive. We are interested in demonstrating the power
of our description language by showing that certain
protocols can be modelled and verified. Our focus is
specifically on protocols where trust plays an explicit
role, as these protocols are very challenging to verify
through traditional methods.
5.2 Trusted Third Party Protocols
In network communication, a trusted third party
(TTP) is an agent that participates in a protocol to assure the other parties that the information exchanged
is correct. We describe a simple protocol. The proto-
col involves the exchange of messages between three
parties: A, B and T . In this protocol, T is acting as a
TTP to allow A and B to establish a session key. This
notion has been discussed in (Zissis et al., 2011; Ulrich et al., 2011).
We use the standard notation established in (Bur-
rows et al., 1990) to describe the protocol:
Simple Key Agreement
1. A → B : N_A, A
2. B → T : N_A, N_B, A, B
3. T → A : {K}_{K_A}
4. T → B : {K}_{K_B}
In this notation, A → B : M means that A sends the message M to the agent B. An expression of the form {M}_K denotes the message M encrypted with the key K. Messages of the form N_A are nonces, which are random numbers generated at the time of protocol execution. In this protocol, T is a trusted party that is responsible for distributing session keys for communication between agents. We assume that T shares a secret key with A, denoted K_A, as well as a secret key with B, denoted K_B. The goal of this protocol is to give A and B a new key that they can use for secure communication. A graphical representation of the protocol is provided in Figure 1.
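Purely to make the message structure explicit, the four steps can be written down as data; the names and the encoding below are assumptions for illustration and are not part of any verification machinery.

```python
# (sender, receiver, payload); Enc(m, k) stands for {m}_k in the notation above.
from collections import namedtuple

Enc = namedtuple("Enc", ["message", "key"])

protocol = [
    ("A", "B", ("N_A", "A")),
    ("B", "T", ("N_A", "N_B", "A", "B")),
    ("T", "A", Enc("K", "K_A")),
    ("T", "B", Enc("K", "K_B")),
]
```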
Proving that this kind of protocol actually works
can be difficult. There are at least two challenges.
The first is a question of the honesty of T , which we
will not address here. Instead, we focus on the prob-
lem of knowledge. Why should A and B believe that
T has a suitable collection of keys available, which
are each secure? This problem requires an analysis
of the beliefs of A and B, and how they change when
information is exchanged on the network.
5.3 Towards a Formalization
We put aside for the moment the manner in which a
TTP might be established. In practice, this might be
done through extra-logical means. However, we can
give a precise statement of what this means for the
agents participating in a protocol.
To prove that a TTP protocol is correct, we simply
need to encode the protocol as a set of logical for-
mulas. Consider the Simple Key Agreement protocol
from the previous section. In order to show that this
protocol is correct, one would need to perform the fol-
lowing steps.
- Formalize the protocol as a sequence of announcements P_1, ..., P_n, made respectively by agents a_1, ..., a_n.
- Formalize the goal of the protocol as another formula G.
- Prove that the goal is believed after the protocol, if we assume the third party is rightly trusted.
This is an established method for verifying crypto-
graphic protocols in epistemic logic. We propose that
we can define a variation of this approach for TTP
protocols, using our description language.
In order to formalize the Simple Key Agreement
protocol at a high level, we assume the proposi-
tional vocabulary includes atomic formulas of the
form init(x) for x ∈ {A, B}. These are formulas that are true when A (resp. B) wants to initialize a communication session. We then assume that we have a finite set K of keys. For each key k ∈ K and each pair of agents x, y, we have an atomic formula of the form safe(k, x, y). Such a formula is true when k is a safe key for communication between x and y.
The Simple Key Agreement protocol can be rep-
resented as a sequence of messages exchanged. Each
message causes revision by some formula. In order
to formalize this protocol in A_D, we need to do three things:
1. Formalize the connection between messages sent and the resulting belief change. This is done through causal sentences. For example, sending message (1) of the protocol should simply cause B to believe that A would like to start a run of the protocol. This kind of arbitrary connection between formulas and belief change can easily be specified in A_D; a small sketch of such sentences is given after this list.
2. The causal sentences should also assert that the
TTP is trusted on everything they say that is re-
lated to the protocol run. This is a critical compo-
nent.
3. Formalize the underlying belief revision opera-
tor using similarity sentences. This is where we
capture notions that are general to the applica-
tion. For example, the connection between agents
and keys; this impacts our idealized revision when
messages are received.
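As a rough illustration of step (1), and using an assumed flattening of the atoms init(x) and safe(k, x, y) into propositional names, causal sentences of the required shape might be recorded as follows; this is only a sketch of how such an encoding could look, not the paper's formalization.

```python
# Hypothetical causes-to-believe rules, in the dictionary format of the earlier
# sketches; atoms init(A) and safe(K, A, B) are flattened into propositional
# names for illustration only.
rules_for_B = {
    "M1": {("init_A", True)},          # message (1): B believes A wants a session
}
rules_for_A = {
    "M3": {("safe_K_A_B", True)},      # message (3): A believes K is safe for A and B
}
```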
After specifying all of these components, we are able
to specify the structure of the protocol through report actions for each agent. For agent A, there are just two revisions:
1. A sends M_1 to B.
2. T sends M_3 to A.
From the perspective of A, the protocol is correct if
these two revisions cause them to believe that they
have a session key that is secure for communication
with B. Of course, for this particular protocol, this is
not going to be provable. The problem is that there is
no connection between the first two messages sent and
the second two messages sent; so there is no guarantee
that agent A will believe the key they are given was sent from a current run of the protocol.
We conclude the description of this application
with one final remark. One might object to our treat-
ment here by saying that we need nested beliefs to
reason about protocol verification. For example, we
might need A to have beliefs about the beliefs of B. In
the present framework, we do not have the capacity
to capture nested beliefs in this manner. However, for
protocol verification, we generally only require lim-
ited (finitely bounded) nesting of belief. It would be
possible to address this problem by carefully extend-
ing the propositional vocabulary to include formulas
about the beliefs of other agents. This kind of ap-
proach can be sufficient for the kind of reasoning re-
quired for concrete protocols.
We leave a complete treatment of trusted third
party protocols in our framework for future work.
6 DISCUSSION
The most closely related work to this paper is the
work on using causal rules to model trust and belief
change (Hunter, 2021b). In this work, causal rules
similar to sentences of A are used to reason about
change in modal logic. In other words, the causal
rules specify when an action impacts the truth of a
formula like φ. If we interpret the to represent
belief, then this is similar to our approach here. How-
ever, in the present work, we do not consider modal
logics at all; our work is all in the propositional set-
ting of the AGM framework. Moreover, our focus in
this paper is broader; we are not only concerned with how actions impact belief, but also with giving a simple language for defining similarity
relationships.
One important feature of the description language
presented that should be emphasized is the connection
with trust-sensitive revision, as outlined in section 4.
The fact that we can capture an established approach
to trust and belief revision illustrates the potential util-
ity of our approach. Moreover, our approach is ex-
tremely flexible. Hence, we can easily capture varia-
tions on this model of trust by simply imposing dif-
ferent restrictions on our similarity descriptions.
7 CONCLUSION
In this paper, we have introduced a description lan-
guage for belief change and trust. Our language is
based on classic action languages, but it does not in-
volve actions. Instead, it defines similarity between
states and connections between reports and belief.
The result is a simple language that can flexibly de-
scribe belief change where information comes from a
partially trusted source.
While this work is inspired by an old tradition in
reasoning about action, there has been recent work
on related topics. Notably, the language introduced
in (Hunter, 2021a) gives a model for reasoning about
changes in belief. This work also has connections
with a variety of approaches to trust, including both
those based on sets of formulas and those based on semantic constraints.
In terms of future work, there are three main di-
rections. The first is characterizing how different be-
lief change postulates can be compactly captured. The
second is explicitly specifying how to encode differ-
ent trust relationships impacting belief change. The
third direction for future research is the completion of the speculative application. We are interested in pro-
viding a complete approach to the representation and
verification of trusted third party protocols using our
description language.
REFERENCES
Ulrich, A., Holz, R., Hauck, P., and Carle, G. (2011). Investigating the OpenPGP web of trust. pages 489–507.
Alchourrón, C. E., Gärdenfors, P., and Makinson, D.
(1985). On the logic of theory change: Partial meet
functions for contraction and revision. Journal of
Symbolic Logic, 50(2):510–530.
Baral, C. and Gelfond, M. (1997). Reasoning about effects
of concurrent actions. Journal of Logic Programming,
31(1-3):85–117.
Baral, C., Gelfond, M., and Provetti, A. (1997). Repre-
senting actions: Laws, observations and hypothesis.
Journal of Logic Programming, 31(1-3):201–243.
Booth, R. and Hunter, A. (2018). Trust as a precursor to
belief revision. J. Artif. Intell. Res., 61:699–722.
Burrows, M., Abadi, M., and Needham, R. (1990). A logic
of authentication. ACM Transactions on Computer
Systems, 8(1):18–36.
Zissis, D., Lekkas, D., and Koutsabasis, P. (2011). Cryptographic dysfunctionality: a survey on user perceptions of digital certificates. In Global Security, Safety and Sustainability and E-Democracy.
Delgrande, J. (2004). Preliminary considerations on the
modelling of belief change operators by metric spaces.
In Proceedings of the 10th International Workshop on
Non-Monotonic Reasoning (NMR 2004), pages 118–
125.
Gelfond, M. and Lifschitz, V. (1998). Action languages.
Linköping Electronic Articles in Computer and Infor-
mation Science, 3(16):1–16.
Hunter, A. (2021a). Building trust for belief revision. In
Proceedings of the Pacific Rim Conference on Artifi-
cial Intelligence (PRICAI), pages 543–555.
Hunter, A. (2021b). On the use of causal rules to specify
how trust impacts change in knowledge and belief. In
Proceedings of the 34th Canadian Conference on Ar-
tificial Intelligence, Canadian AI 2021. Canadian Ar-
tificial Intelligence Association.
Katsuno, H. and Mendelzon, A. (1992). Propositional
knowledge base revision and minimal change. Arti-
ficial Intelligence, 52(2):263–294.
Liu, F. and Lorini, E. (2017). Reasoning about belief, ev-
idence and trust in a multi-agent setting. In PRIMA
2017: Principles and Practice of Multi-Agent Systems
- 20th International Conference, volume 10621, pages
71–89.
Peppas, P. and Williams, M.-A. (2018). Parametrised dif-
ference revision. In Proceedings of the International
Conference on Principles of Knowledge Representa-
tion and Reasoning (KR), pages 277–286.