Valuing Others’ Opinions: Preference, Belief and Reliability Dynamics
Sujata Ghosh¹ and Katsuhiko Sano²
¹Indian Statistical Institute, Chennai, India
²Department of Philosophy, Graduate School of Letters, Hokkaido University, Sapporo, Japan
Keywords:
Preference, Belief, Reliability, Hybrid Logic, Public Announcement Logic, Propositional Dynamic Logic.
Abstract:
Deliberation often leads to changes in the preferences and beliefs of an agent, influenced by the opinions of others, depending on how reliable those agents are according to the agent under consideration. Sometimes it also leads to changes in the opposite direction, that is, reliability over agents gets updated depending on their preferences and/or beliefs. There are various formal studies of preference and belief change based on reliability and/or trust, but not the other way around. This work contributes to the formal study of the latter aspect, that is, of reliability change based on agent preferences. In the process, some policies of preference change based on agent reliabilities are also discussed. A two-dimensional hybrid language is proposed to describe such processes, and axiomatisations and decidability are discussed.
1 INTRODUCTION
Deliberation forms an important component of any decision-making process. It is basically a conversation through which individuals provide their opinions regarding certain issues, state preferences among possible choices, and justify these preferences. This process may lead to changes in their opinions, because they are influenced by one another. A factor that sometimes plays a big role in enforcing such changes is the amount of reliability the agents place on one another's opinions. Such reliabilities may change as well through this process of deliberation; e.g., on hearing someone else's preferences about a certain issue, one can start or stop relying on that person's opinion. One may tend to unfriend certain friends on hearing about their preferences regarding certain issues (e.g., Helen De Cruz's remarks in her article 'Being Friends with a Brexiter?' in the Philosophers On series of the Daily Nous blog¹).

¹ http://dailynous.com/2016/06/28/philosophers-on-brexit/#DeCruz
Formal studies on preferences (cf. (Arrow et al., 2002; Endriss, 2011)) and trust (cf. (Liau, 2003; Demolombe, 2004; Herzig et al., 2010)) abound in the literature on logic in artificial intelligence. Recently, there has been work on relating the notions of belief and trust, e.g., about agents changing their beliefs based on another agent's announcement, depending on how trustworthy that agent is about the issue
in question (see, e.g., (Lorini et al., 2014)), and also on relating preference and reliability, e.g., about agents changing their preferences based on the preferences of the agent on whom they rely the most (Ghosh and Velázquez-Quesada, 2015a; Ghosh and Velázquez-Quesada, 2015b). A pertinent issue that arises in this context is that an agent's assessment of another individual's reliability might change as well. How would one model that? This work provides a way to answer precisely this question. We focus on reliability changes based on (public) announcements of individual preferences, and we provide formal frameworks to describe such changes. In the process, we also provide some policies of preference change. Note that the notion of reliability considered here is not topic-based (in contrast to the notion of trust described in (Lorini et al., 2014)) but deals only with comparative judgements about agents (cf. Section 2 for details). The following provides an apt example of the situations we would like to model:
Our Running Example: Consider three flatmates, Isabella, John and Ken, discussing the redecoration of their house: they are wondering whether to put a print of a Monet picture on the left wall or on the right wall of the living room. Isabella and Ken prefer to put it on the right wall, while John wants to put it on the left. Isabella has more faith in John's taste than in hers and Ken's, and John has more faith in Isabella's taste than in his and Ken's.
Ken has full faith in his own taste. As long as Isabella's and John's preferences are different and each thinks that the other's taste is better (by taking their preferences into consideration), the three flatmates would never reach an agreement. But it so happens that, on hearing about John's and Ken's choices, Isabella starts relying more on Ken, whereas even after hearing about Isabella's and Ken's choices, John's reliability attribution to Isabella does not change.
To model such situations we introduce a two-dimensional hybrid logic framework extending the basic logic proposed in (Ghosh and Velázquez-Quesada, 2015a; Ghosh and Velázquez-Quesada, 2015b), along the lines of the frameworks developed in (Gargov et al., 1987; Sano, 2010; Seligman et al., 2013). We add dynamic operators to model preference and reliability changes. The main novelty of this work is that reliability-changing policies based on agent preferences are introduced and studied formally, which has not been done before. In addition, reliabilities are modelled (more naturally) as total pre-orders instead of total orders (Ghosh and Velázquez-Quesada, 2015a; Ghosh and Velázquez-Quesada, 2015b), and preference-changing policies are modified accordingly. The proposed logic is expressive enough to deal with both these kinds of changes.
2 TWO-DIMENSIONAL HYBRID LOGIC
Let us first motivate the assumptions on preference and reliability orders that we make below, along the lines of (Ghosh and Velázquez-Quesada, 2015a). As mentioned earlier, we are modelling situations akin to joint deliberation where agents announce their preferences. Each agent can change her preferences upon getting information about the other agents' preferences, influenced by her reliability ordering over agents (including herself, so she might consider herself more reliable than some agents but also less reliable than some others). Agents can also change their opinions regarding how reliable they think the other agents are in comparison to themselves, influenced by the announced preferences of those agents.

The agents' preferences are represented by binary relations (as in (Arrow et al., 2002; Grüne-Yanoff and Hansson, 2009) and further references therein), which are typically assumed to be reflexive and transitive. This paper also follows this ordinary assumption, and so, we note, this assumption does allow the possibility of incomparable worlds.
The notion of reliability is related to that of trust, a well-studied concept (e.g., (Falcone et al., 2008)) with several proposals for its formal representation, e.g., as an attitude of an agent who believes that another agent has a given property (Falcone and Castelfranchi, 2001). One also says that "an agent i trusts agent j's judgement about ϕ" (called "trust on credibility" in (Demolombe, 2001)). Trust can also be defined in terms of other attitudes, such as knowledge, beliefs, intentions and goals (e.g., (Demolombe, 2001; Herzig et al., 2010)), or taken as a semantic primitive, typically by means of a neighbourhood function (Liau, 2003). Some others (e.g., (Lorini et al., 2014)) deal with graded trust.

Reliability as discussed here is closer to the notion of trust in (Holliday, 2010), where it is understood as an ordering among sets of sources of information (cf. the discussion in (Goldman, 2001)). Such a notion of reliability does not yield any absolute judgements ("i relies on j's judgement [about ϕ]"), but only comparative ones ("for i, agent j′ is at least as reliable as agent j"). For the purposes of this work, similar to (Ghosh and Velázquez-Quesada, 2015a), such comparative judgements suffice.

In contrast to (Ghosh and Velázquez-Quesada, 2015a), our reliability relation is assumed to be a reflexive, transitive and total relation. Reflexivity and transitivity are, more often than not, natural requirements for an ordering, and totality disallows incomparability, as before. The changes in reliability for an agent depend on the information assimilated (similar to approaches like (Rodenhäuser, 2014)), in particular, about the other agents' preferences.
The focus of this work is joint deliberation, so let A be a finite non-empty set of agents (|A| = n ≥ 2).

Definition 1. A PR (preference/reliability) frame F is a tuple (W, {≤_i, ≼_i}_{i∈A}) where (1) W is a finite non-empty set of worlds; (2) ≤_i ⊆ W × W is a preorder (i.e., a reflexive and transitive relation), agent i's preference relation among worlds in W (u ≤_i v is read as "world v is at least as preferable as world u for agent i"); (3) ≼_i ⊆ A × A is a total pre-order (i.e., a connected pre-order), agent i's reliability relation among agents in A (j ≼_i k is read as "agent k is at least as reliable as agent j for agent i"). Let mr(i) denote the set of all maximally reliable agents for i.
We define u <_i v ("u is less preferred than v for agent i") as u ≤_i v and v ≰_i u, and u ≃_i v ("u and v are equally preferred for agent i") as u ≤_i v and v ≤_i u. Moreover, j ≺_i k ("j is less reliable than k for agent i") is defined as j ≼_i k and k ⋠_i j, and j ≈_i k ("j and k are equally reliable for agent i") as j ≼_i k and k ≼_i j.
Example 1. Recall the example in Section 1. Put A = {i, j, k}, where i, j and k represent Isabella, John
and Ken, respectively. Denoting by w_x the world where Monet's picture is on wall x (x = l, r), the example's situation can be represented by a PR frame F^exp = ({w_l, w_r}, {≤_y, ≼_y}_{y∈A}) in which the preference orders are given by w_l <_i w_r, w_r <_j w_l and w_l <_k w_r, and the reliability orders are given by i ≈_i k ≺_i j, j ≈_j k ≺_j i and j ≺_k i ≺_k k.
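Since W and A are finite, PR frames are directly machine-checkable objects. The following Python sketch (our own illustrative encoding; none of the names below belong to the formal development) represents a PR frame by explicit sets of pairs and instantiates the frame F^exp of Example 1:

```python
# A minimal sketch of PR frames (Definition 1): preference preorders over
# worlds and total reliability preorders over agents, stored as pair sets.
# All names here are illustrative, not part of the paper's formalism.
from itertools import product

def preorder_closure(base, domain):
    """Reflexive-transitive closure of a set of pairs over `domain`."""
    rel = set(base) | {(x, x) for x in domain}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(rel), repeat=2):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

WORLDS = {'wl', 'wr'}          # Monet print on the left / right wall
AGENTS = {'i', 'j', 'k'}       # Isabella, John, Ken

# Preference preorders <=_y: (u, v) means "v is at least as preferable as u".
PREF = {
    'i': preorder_closure({('wl', 'wr')}, WORLDS),   # w_l <_i w_r
    'j': preorder_closure({('wr', 'wl')}, WORLDS),   # w_r <_j w_l
    'k': preorder_closure({('wl', 'wr')}, WORLDS),   # w_l <_k w_r
}

# Reliability total preorders: (a, b) means "b is at least as reliable as a".
REL = {
    'i': preorder_closure({('i', 'k'), ('k', 'i'), ('k', 'j')}, AGENTS),  # i ≈_i k ≺_i j
    'j': preorder_closure({('j', 'k'), ('k', 'j'), ('k', 'i')}, AGENTS),  # j ≈_j k ≺_j i
    'k': preorder_closure({('j', 'i'), ('i', 'k')}, AGENTS),              # j ≺_k i ≺_k k
}

def mr(i, rel=None):
    """mr(i): the maximally reliable agents for i (maxima of i's preorder)."""
    r = rel if rel is not None else REL[i]
    return {a for a in AGENTS
            if all((b, a) in r for b in AGENTS if (a, b) in r)}
```

Here mr('i') evaluates to {'j'}, mr('j') to {'i'} and mr('k') to {'k'}, as in the running example.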
In (Ghosh and Velázquez-Quesada, 2015a), Ghosh and Velázquez-Quesada propose a language to talk about preference changes and their effects. Following the semantic idea of (Seligman et al., 2013), we extend their static language into a two-dimensional syntax with the help of the dependent product of two hybrid logics (Sano, 2010). Let P be a countably infinite set of propositional variables, N_1 = {a, b, c, ...} a countably infinite set of world-nominals (syntactic names for worlds) and N_2 = {i, j, k, ...} a countably infinite set of agent-nominals (syntactic names for agents).
Definition 2 (Language HL). Formulas ϕ, ψ, ... (read ϕ as "the current agent satisfies the property ϕ in the current state" or, indexically, as "I am ϕ in the current state") and relational expressions (or program terms) π, ρ, ... of the language HL are given by

ϕ, ψ ::= p | a | i | ¬ϕ | ϕ ∨ ψ | @_i ϕ | @_a ϕ | ⟨π⟩ϕ,
π, ρ ::= 1_W | ≤ | ≥ | 1_A | ⊑_k | ⊒_k | −α | π ∪ π′ | π ∩ π′ | (π, j) ∩_i (π′, k) | ?(ϕ, ψ),

where p ∈ P, a ∈ N_1, i, j, k ∈ N_2 and α ∈ {1_W, ≤, ≥} ∪ {1_A, ⊑_k, ⊒_k | k ∈ N_2}. Propositional constants (⊤, ⊥), the other Boolean connectives (∧, →, ↔) and the dual universal modal operators [π] are defined as usual, e.g., [π]ϕ := ¬⟨π⟩¬ϕ. Moreover, we define ⟨<⟩ϕ as ⟨≤ ∩ −≥⟩ϕ and ⟨≺_k⟩ϕ as ⟨⊑_k ∩ −⊒_k⟩ϕ, respectively. Finally, ?ϕ is defined as the program term ?(ϕ, ϕ).
We note that @_a ϕ is read as "the current agent satisfies ϕ in the world named by a" and @_i ϕ as "agent i satisfies ϕ in the current world". The set of relational expressions contains the constants 1_W and 1_A (the global relations, whose corresponding operators mean "for all states" and "for all agents", respectively), the preference and reliability relations (≤, ⊑_k), their respective converse relations (≥, ⊒_k; cf. (Burgess, 1984; Goldblatt, 1992)), all the complements −α of the atomic relations, an additional construct of the form (π, j) ∩_i (π′, k) (needed for defining distributed preference later in Section 3, explained below) and ?(ϕ, ψ) (a generalization of the test operator of (Harel et al., 2000), also explained below); and it is closed under the union and intersection operations on relations.
Formulas are interpreted at world-agent pairs below, and we may read [≤]ϕ as "in all states which the current agent considers at least as good as the current state, the current agent satisfies ϕ". Moreover, we may read ⟨⊑_k⟩ϕ as "there is an agent j, at least as reliable as the current agent from agent k's perspective, such that j satisfies ϕ". For example, @_i ⟨⊑_k⟩ j can be read as "agent j is at least as reliable as agent i from agent k's perspective". ⟨⊒_k⟩ϕ is read as "there is an agent j, at most as reliable as the current agent from agent k's perspective, such that j satisfies ϕ".
We note that the program construction ?(ϕ, ψ) (check whether the first element of a given pair of states satisfies ϕ and the second satisfies ψ) is a generalization of the test operator of standard (regular) propositional dynamic logic (Harel et al., 2000). So ?ϕ := ?(ϕ, ϕ) enables us to check whether both elements of a given pair satisfy ϕ. Moreover, the program construction (π, j) ∩_i (π′, k) enables us to define, as agent i's relation between states, the distributed preference between agents j and k, i.e., the intersection of j's preference and k's preference. Together with the other program constructions, it is useful for providing the axiom system for the preference- and reliability-changing operations to be introduced in Section 3. The following two definitions establish what a model is and how formulas of HL are interpreted over them.
Definition 3 (PR model). A PR model is a tuple M = (F, V) where F = (W, {≤_i, ≼_i}_{i∈A}) is a PR-frame and V is a valuation function from P ∪ N_1 ∪ N_2 to P(W × A), assigning a subset of the form {w} × A to each world-nominal a ∈ N_1 and a subset of the form W × {i} to each agent-nominal i ∈ N_2. Throughout the paper, we denote the unique element in the first coordinate of V(a) = {w} × A by ā and the unique element in the second coordinate of V(i) = W × {i} by ī.
Definition 4 (Truth definition). Given a PR-model M, a satisfaction relation M, (w, i) ⊨ ϕ and relations R_π ⊆ (W × A)² are defined by simultaneous induction as follows:

M, (w, i) ⊨ p iff (w, i) ∈ V(p),
M, (w, i) ⊨ a iff w = ā,
M, (w, i) ⊨ i iff i = ī,
M, (w, i) ⊨ ¬ϕ iff M, (w, i) ⊭ ϕ,
M, (w, i) ⊨ ϕ ∨ ψ iff M, (w, i) ⊨ ϕ or M, (w, i) ⊨ ψ,
M, (w, i) ⊨ @_a ϕ iff M, (ā, i) ⊨ ϕ,
M, (w, i) ⊨ @_i ϕ iff M, (w, ī) ⊨ ϕ,
M, (w, i) ⊨ ⟨π⟩ψ iff for some (v, j) ∈ W × A, (w, i) R_π (v, j) and M, (v, j) ⊨ ψ,
(w, i) R_{1_W} (v, j) iff w, v ∈ W and i = j,
(w, i) R_≤ (v, j) iff w ≤_i v and i = j,
(w, i) R_≥ (v, j) iff v ≤_i w and i = j,
(w, i) R_{−α} (v, j) iff ((w, i), (v, i)) ∉ R_α and i = j (α ∈ {1_W, ≤, ≥}),
(w, i) R_{1_A} (v, j) iff w = v and i, j ∈ A,
(w, i) R_{⊑_k} (v, j) iff w = v and i ≼_{k̄} j,
(w, i) R_{⊒_k} (v, j) iff w = v and j ≼_{k̄} i,
(w, i) R_{−β} (v, j) iff w = v and ((w, i), (w, j)) ∉ R_β (β ∈ {1_A, ⊑_k, ⊒_k | k ∈ N_2}),
(w, i) R_{π∪ρ} (v, j) iff (w, i) R_π (v, j) or (w, i) R_ρ (v, j),
(w, i) R_{π∩ρ} (v, j) iff (w, i) R_π (v, j) and (w, i) R_ρ (v, j),
(w, i) R_{(π,j)∩_i(ρ,k)} (v, i′) iff i = i′ = ī and (w, j̄) R_π (v, j̄) and (w, k̄) R_ρ (v, k̄),
(w, i) R_{?(ϕ,ψ)} (v, j) iff M, (w, i) ⊨ ϕ and M, (v, j) ⊨ ψ.

We say that ϕ is valid in a PR-model M (written M ⊨ ϕ) if M, (w, i) ⊨ ϕ for all pairs (w, i) in M.
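To make the truth definition concrete, here is a small sketch (again our own, purely illustrative encoding, assuming the representation sketched after Example 1) that evaluates a fragment of HL, covering atoms, ¬, ∨, the two @-operators and the diamonds over ≤ and ⊑_k, at world-agent pairs:

```python
# Sketch: evaluating a fragment of HL per Definition 4. Formulas are nested
# tuples; the encoding is ours and purely illustrative.

def holds(M, w, i, phi):
    W, A, pref, rel, V = M           # a PR model as a 5-tuple
    op = phi[0]
    if op == 'atom':                 # propositional variable or nominal
        return (w, i) in V[phi[1]]
    if op == 'not':
        return not holds(M, w, i, phi[1])
    if op == 'or':
        return holds(M, w, i, phi[1]) or holds(M, w, i, phi[2])
    if op == 'at-world':             # @_a ϕ, with phi[1] the denoted world
        return holds(M, phi[1], i, phi[2])
    if op == 'at-agent':             # @_i ϕ, with phi[1] the denoted agent
        return holds(M, w, phi[1], phi[2])
    if op == 'dia-pref':             # ⟨≤⟩ϕ: some v with w ≤_i v satisfies ϕ
        return any(holds(M, v, i, phi[1]) for v in W if (w, v) in pref[i])
    if op == 'dia-rel':              # ⟨⊑_k⟩ϕ: some j at least as reliable as
        k = phi[1]                   # the current agent (for k) satisfies ϕ
        return any(holds(M, w, j, phi[2]) for j in A if (i, j) in rel[k])
    raise ValueError(op)

# e.g. ⟨⊑_i⟩k at (w_l, Isabella): some agent at least as reliable as her
# (by her own ordering) is named k. True, since i ≈_i k.
M = (WORLDS, AGENTS, PREF, REL, {'k': {(w, 'k') for w in WORLDS}})
assert holds(M, 'wl', 'i', ('dia-rel', 'i', ('atom', 'k')))
```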
The logic HL is expressive enough to formalize the notion of belief, as well as the preference and reliability dynamics introduced in the later sections. For example, following the idea found in (Boutilier, 1994), we can define the conditional belief operator B^ψ ϕ (read "under the condition that the current agent satisfies ψ, the current agent believes that she satisfies ϕ" or "the current agent desires (or has as a goal) that she satisfies ϕ under the condition that she satisfies ψ") by

B^ψ ϕ := [1_W](ψ → ⟨≤⟩(ψ ∧ ϕ ∧ [≤](ψ → ϕ))).

Then the unconditional belief operator Bϕ is defined as B^⊤ ϕ, which is read as "the current agent believes that she satisfies ϕ" or "in the most preferred states for the current agent, she satisfies ϕ". We can also define the conditional reliability operator R_k(ψ, ϕ) (read "the most reliable ψ-agents for agent k satisfy ϕ") by

R_k(ψ, ϕ) := [1_A](ψ → ⟨⊑_k⟩(ψ ∧ [⊑_k](ψ → ϕ))),

where the clause can be simplified, because of connectedness, as noted in (Boutilier, 1994). The unconditional version R_k ϕ of R_k(ψ, ϕ) is defined as R_k(⊤, ϕ), read as "the most reliable agents for agent k satisfy ϕ". We may also define the "diamond" version ⟨R_k⟩ϕ of R_k ϕ as ¬R_k ¬ϕ. Then ⟨R_k⟩j means that agent j is one of the most reliable agents for k.
Example 2. In the setting of Example 1, let us represent "the current agent likes to put Monet's picture on wall x" by a world-nominal a_x. On the PR-frame F^exp of Example 1, we define V(a_x) = {(w_x, i), (w_x, j), (w_x, k)}, where x = l or r. We use i, j, k as syntactic names (i.e., agent nominals) for the agents i, j and k, where we interpret, e.g., ī = i in terms of our valuation function V. Define M^exp := (F^exp, V). For example, the preference w_l <_i w_r can be formalized as the formula @_i @_{a_l} ⟨<⟩ a_r, which is valid in M^exp. We can formalize Isabella's reliability ordering i ≈_i k ≺_i j as @_i ⟨⊑_i⟩ k ∧ @_k ⟨⊑_i⟩ i ∧ @_k ⟨≺_i⟩ j, which is valid in M^exp. Moreover, @_i B a_x formalizes "Isabella believes that she likes to put Monet's picture on wall x" and, when x = r, @_i B a_r is valid in M^exp. Similarly, @_j B a_l and @_k B a_r are also valid in M^exp. We can also see that, from Isabella's perspective, Ken is one of the most reliable agents believing a_r; this can be formalized as ¬R_i(B a_r, ¬k), the "diamond" analogue of R_i(B a_r, k).
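On finite models, the operators B and R_k admit a direct computational reading: "ϕ holds in all maximal worlds of ≤_i" and "ϕ holds of all maximal agents of ≼_k". A small sketch under the same illustrative encoding as above, checking the belief claims of Example 2:

```python
# Sketch: B and R_k on finite models, computed as "truth in all maximal
# states / all maximally reliable agents". `prop` is an extension (a set).

def maxima(rel, domain):
    """Maximal elements of a preorder given as pairs (x, y) with x below y."""
    return {x for x in domain
            if all((y, x) in rel for y in domain if (x, y) in rel)}

def believes(agent, prop, pref=None):
    """B ϕ for `agent`: ϕ holds in all of the agent's most preferred worlds."""
    p = pref if pref is not None else PREF[agent]
    return maxima(p, WORLDS) <= set(prop)

def reliably(agent, prop, rel=None):
    """R_k ϕ for k = `agent`: all maximally reliable agents for k satisfy ϕ."""
    r = rel if rel is not None else REL[agent]
    return maxima(r, AGENTS) <= set(prop)

# @_i B a_r, @_j B a_l and @_k B a_r, as claimed in Example 2.
assert believes('i', {'wr'}) and believes('j', {'wl'}) and believes('k', {'wr'})
# R_i j: every maximally reliable agent for Isabella is (named) j.
assert reliably('i', {'j'})
```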
The static axiom systems HPR and HPR_(m,n) are given in Table 1, where a uniform substitution is a substitution that uniformly replaces propositional variables by formulas and nominals from N_i by nominals from N_i (i = 1 or 2).

Theorem 1 (Soundness and completeness). ϕ is valid in all (possibly infinite) PR-models iff ϕ is derivable in HPR. Moreover, ϕ is valid in all PR-models with a fixed number m of worlds and a fixed number n of agents iff ϕ is derivable in HPR_(m,n). Therefore, HPR_(m,n) is decidable.

We note that, as far as the authors know, decidability is still open for HPR, even for the fragment of HPR without program constructions (cf. (Sano, 2010)). Related computational properties of such fragments have not yet been well studied (for a purely bimodal logic fragment with a slightly different semantics, the reader is referred to (Marx and Mikulás, 2001)).
3 PREFERENCE DYNAMICS

Intuitively, a public announcement of the agents' individual preferences might induce an agent i to adjust her own preferences according to what has been announced and the reliability ordering she assigns to the set of agents.² For example, an agent might adopt the preferences of the set of agents on whom she relies the most, or might use the strict preferences of her most reliable agents for 'breaking ties' among her equally-preferred zones. In (Ghosh and Velázquez-Quesada, 2015a) the authors introduced the general lexicographic upgrade operation, which creates a preference ordering following a priority list of orderings. We generalize those operations in what follows, where we take the reliability orderings to be total pre-orders, rather than total orders (that is, orderings that are additionally anti-symmetric), as they were in the earlier work, which was quite an artificial assumption on agents' reliabilities.
² Note that this work, in line with its predecessor (Ghosh and Velázquez-Quesada, 2015a), does not focus on the formal representation of such announcements, but rather on the formal representation of their effects.
Table 1: Axiomatizations HPR and HPR_(m,n).

Bi-Hybrid Logical Axioms of HPR:
  All classical tautologies
  (K_π) [π](p → q) → ([π]p → [π]q)    (Dual_π) ⟨π⟩p ↔ ¬[π]¬p
  Let n ∈ N_i and (n, m) ∈ N_i² (i = 1, 2) below in this group:
  (K_@) @_n(p → q) → (@_n p → @_n q)
  (SelfDual_@) ¬@_n p ↔ @_n ¬p    (Ref) @_n n
  (Intro) n ∧ p → @_n p    (Agree) @_n @_m p ↔ @_m p
  (Back) ⟨π⟩@_a @_i p → @_a @_i p

Inference Rules of HPR:
  Modus Ponens, Uniform Substitution,
  Necessitation rules for [π], @_i and @_a
  (Name) From n → ϕ infer ϕ, where n ∈ N_1 ∪ N_2 is fresh in ϕ
  (BG_π) From @_a @_i ⟨π⟩(b ∧ j) → @_b @_j ϕ infer @_a @_i [π]ϕ, where b and j are fresh in @_a @_i [π]ϕ

Interaction Axioms of HPR:
  (Com@) @_i @_a p ↔ @_a @_i p
  (Red@_1) a → @_i a    (Red@_2) i → @_a i
  (Dcom⟨W⟩@_2) @_i ⟨α⟩p ↔ @_i ⟨α⟩@_i p    (α ∈ {1_W, ≤, ≥})
  (Com⟨A⟩@_1) @_a ⟨β⟩p ↔ ⟨β⟩@_a p    (β ∈ {1_A, ⊑_k, ⊒_k})

Axioms for Atomic Programs of HPR:
  (U_W) @_a ⟨1_W⟩b    (Cnv_≤) @_a ⟨≤⟩b ↔ @_b ⟨≥⟩a
  (U_A) @_i ⟨1_A⟩j    (Cnv_⊑) @_i ⟨⊑_k⟩j ↔ @_j ⟨⊒_k⟩i
  (Eq_⊑) @_i j → ([⊑_i]p ↔ [⊑_j]p)

Axioms for Compounded Programs of HPR:
  (∪) ⟨π ∪ ρ⟩p ↔ ⟨π⟩p ∨ ⟨ρ⟩p
  (?) ⟨?(ϕ, ψ)⟩p ↔ ϕ ∧ ⟨1_A⟩⟨1_W⟩(ψ ∧ p)
  (∩) @_a @_i ⟨π ∩ ρ⟩(b ∧ j) ↔ @_a @_i (⟨π⟩(b ∧ j) ∧ ⟨ρ⟩(b ∧ j))
  (−_W) @_a ⟨−α⟩b ↔ @_a ¬⟨α⟩b    (α ∈ {1_W, ≤, ≥})
  (−_A) @_i ⟨−β⟩j ↔ @_i ¬⟨β⟩j    (β ∈ {1_A, ⊑_k, ⊒_k})
  (∩_i) @_a @_k ⟨(π, j) ∩_i (π′, j′)⟩(b ∧ k′) ↔ @_i(k ∧ k′) ∧ @_a @_j ⟨π⟩(b ∧ j) ∧ @_a @_{j′} ⟨π′⟩(b ∧ j′)

Axioms for PR-frames of HPR:
  (4_≤) @_a ⟨≤⟩b ∧ @_b ⟨≤⟩c → @_a ⟨≤⟩c    (Ref_≤) @_a ⟨≤⟩a
  (4_⊑) @_j ⟨⊑_i⟩k ∧ @_k ⟨⊑_i⟩l → @_j ⟨⊑_i⟩l    (Cmp_⊑) @_j ⟨⊑_i⟩k ∨ @_k ⟨⊑_i⟩j

Additional Axioms and Rules for HPR_(m,n):
  (|W| ≤ m) ⋁_{0≤k<l≤m} @_{a_k} a_l    (|A| ≤ n) ⋁_{0≤k<l≤n} @_{i_k} i_l
  (|W| ≥ m) From ⋀_{1≤k<l≤m} ¬@_{a_k} a_l → ψ infer ψ, where the a_k are fresh in ψ
  (|A| ≥ n) From ⋀_{1≤k<l≤n} ¬@_{i_k} i_l → ψ infer ψ, where the i_k are fresh in ψ
Agent i's preference ordering after an announcement, ≤′_i, can be defined in terms of the just-announced preferences (the agents' preferences before the announcement, ≤_1, ..., ≤_n) and of how much i relied on each agent (i's reliability ordering before the announcement, ≼_i): ≤′_i := f(≤_1, ..., ≤_n, ≼_i) for some function f. Here are some such functions, inspired by (van Benthem, 2007; Ghosh and Velázquez-Quesada, 2015a).
Definition 5. Given a set X ⊆ A of agents, u <_X v if u <_k v holds for all agents k ∈ X. Moreover, u ≍_X v is used to mean u <_X v or v <_X u, and dom(≍_X) := {u ∈ W | u ≍_X v for some v ∈ W}.

Note that dom(≍_X) allows us to specify the worlds connected by the relation ≍_X. Recall that mr(i) denotes the set of all maximally reliable agents for i.
Definition 6 (Conservative Upgrade). Agent i takes the strict preference ordering of her most reliable agents, and leaves the rest undecided (equipreferable). More precisely, the upgraded ordering ≤′_i is defined by: u ≤′_i v iff (u <_mr(i) v or u = v) or u, v ∉ dom(≍_mr(i)).

Definition 7 (Radical Upgrade). Agent i takes the strict preference ordering of her most reliable agents, and in the remaining disjoint zones she uses her old ordering. More precisely, the upgraded ordering ≤′_i is defined by: u ≤′_i v iff (u <_mr(i) v or u = v) or (u, v ∉ dom(≍_mr(i)) and u ≤_i v).

Note that both the conservative and the radical upgrade preserve preorders (and thus upgraded models belong to our class of semantic models).
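Under the same illustrative Python encoding as before, the two upgrade policies can be written down directly; this sketch computes the upgraded relation ≤′_i of Definitions 6 and 7 (the helper names are ours):

```python
# Sketch of the preference upgrades of Definitions 6 and 7. strict(rel)
# extracts <, strict_X intersects the strict preferences of a set X of
# agents (cf. Definition 5), dom_X computes dom of the comparability relation.

def strict(rel):
    return {(u, v) for (u, v) in rel if (v, u) not in rel}

def strict_X(X, pref=None):
    """u <_X v: u <_k v for all agents k in X."""
    p = pref if pref is not None else PREF
    rels = [strict(p[k]) for k in X]
    return set.intersection(*rels) if rels else set()

def dom_X(X, pref=None):
    """Worlds comparable to some world by <_X, in either direction."""
    sx = strict_X(X, pref)
    return {u for (u, v) in sx} | {v for (u, v) in sx}

def conservative_upgrade(i, pref=None, rel=None):
    """Definition 6: adopt <_mr(i); leave untouched worlds equipreferable."""
    sx, d = strict_X(mr(i, rel), pref), dom_X(mr(i, rel), pref)
    return {(u, v) for u in WORLDS for v in WORLDS
            if (u, v) in sx or u == v or (u not in d and v not in d)}

def radical_upgrade(i, pref=None, rel=None):
    """Definition 7: adopt <_mr(i); keep the old ordering outside its domain."""
    p = pref if pref is not None else PREF
    sx, d = strict_X(mr(i, rel), pref), dom_X(mr(i, rel), pref)
    return {(u, v) for u in WORLDS for v in WORLDS
            if (u, v) in sx or u == v
            or (u not in d and v not in d and (u, v) in p[i])}
```

Applied to the announced preferences of the running example, conservative_upgrade('i') yields w_r <′_i w_l (Isabella adopts John's strict preference), conservative_upgrade('j') yields w_l <′_j w_r, and conservative_upgrade('k') leaves w_l <′_k w_r, matching the computation in Example 3 below.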
3.1 Expressing the Preference Dynamics

To formalize the preference dynamics of the previous section, we add the following dynamic operators to the static syntax HL. First of all, we regard all the agents involved in our two preference upgrades above as named by agent nominals (syntactic names of agents), and so let us denote agent i's syntactic name by i (in boldface) and the set of all syntactic names of the agents in mr(i) by mr(i). HL^{pu} is defined to be the expansion of HL with all operators ⟨pu_i^R⟩, where i is an agent-nominal and R is a list of sets of agent-nominals defined as R = mr(i) (conservative upgrade) or R = (mr(i); {i}) (radical upgrade).
Definition 8 (Operators). A formula Req(R), representing the requirements for the list R, is defined as the conjunction of ⋀_{j,k∈mr(i), j≠k} ¬@_j k (mutual distinctness of the agents involved in mr(i)) and ⋀_{j∈mr(i)} ⟨R_i⟩j (mr(i) is the set of maximally reliable agents for i). Given a
PR-model M = (W, {≤_i, ≼_i}_{i∈A}, V), define:

M, (w, j) ⊨ ⟨pu_i^R⟩ϕ iff M, (w, j) ⊨ Req(R) and pu_i^R(M), (w, j) ⊨ ϕ,

where pu_i^R(M) is the same model as M except that ≤_ī is replaced by ≤_R, where R = mr(i) or R = (mr(i); {i}) and the corresponding ≤_R is given by Definition 6 or Definition 7, respectively.
For an axiom system for the modality ⟨pu_i^R⟩, we will provide recursion axioms: valid formulas and validity-preserving rules indicating how to translate a formula containing the new modality into a provably equivalent one without it. In this case, the modalities ⟨π⟩ can be indexed by any relational expression π, so we provide a 'matching' relational expression over the original model M by defining relational transformers similar to those in (Ghosh and Velázquez-Quesada, 2015a; Ghosh and Velázquez-Quesada, 2015b), in the spirit of the program transformers of (van Benthem et al., 2006).
Before going into the notion of a relational transformer, we make two observations. Firstly, when π := ?j ∩ ≤, we note that (w, x) R_π (v, y) is equivalent to the conjunction of x = y = j̄ and w ≤_{j̄} v. Similarly, when we put π′ := ?¬j ∩ ≤, we remark that (w, x) R_{π′} (v, y) is equivalent to the conjunction of w ≤_x v, x = y and x ≠ j̄. Secondly, to reflect the relation <_X of Definition 5, we need our program construction (π, j) ∩_i (ρ, k) for taking the intersection of the (strict) preference relations of agents possibly different from i. These observations allow us to capture the idea behind Definitions 6 and 7 syntactically in the following definition.
Definition 9 (Relational transformer). Let us introduce the following abbreviations for relational expressions. We define <_mr(i) := ⋂_i {(<, j) | j ∈ mr(i)}, i.e., the iterated ∩_i-intersection of the strict preferences < := ≤ ∩ −≥ of the agents named in mr(i); >_mr(i) is defined similarly, and ≍_mr(i) is defined to be <_mr(i) ∪ >_mr(i). Moreover, a formula d(≍_mr(i)) is defined as ⟨≍_mr(i)⟩⊤.

A relational transformer Tu_i^R is a function from relational expressions to relational expressions defined as follows. When R = mr(i) (conservative upgrade):

Tu_i^R(α) := α (α ∈ {1_A, 1_W, ⊑_k, ⊒_k | k ∈ N_2}),
Tu_i^R(≤) := (?i ∩ (<_mr(i) ∪ 1_A ∪ ?¬d(≍_mr(i)))) ∪ (?¬i ∩ ≤),
Tu_i^R(≥) := (?i ∩ (>_mr(i) ∪ 1_A ∪ ?¬d(≍_mr(i)))) ∪ (?¬i ∩ ≥),
Tu_i^R(π ∪ ρ) := Tu_i^R(π) ∪ Tu_i^R(ρ),
Tu_i^R(π ∩ ρ) := Tu_i^R(π) ∩ Tu_i^R(ρ),
Tu_i^R(?(ϕ, ψ)) := ?(⟨pu_i^R⟩ϕ, ⟨pu_i^R⟩ψ),
Tu_i^R((π, k) ∩_j (ρ, l)) := (Tu_i^R(π), k) ∩_j (Tu_i^R(ρ), l),
Tu_i^R(−β) := −Tu_i^R(β),

where β ∈ {1_W, ≤, ≥} ∪ {1_A, ⊑_k, ⊒_k | k ∈ N_2}. When R = (mr(i); {i}) (radical upgrade), we replace the occurrence of "?¬d(≍_mr(i))" in Tu_i^R(≤) and in Tu_i^R(≥) by "?¬d(≍_mr(i)) ∩ ≤" and "?¬d(≍_mr(i)) ∩ ≥", respectively.
Theorem 2. The axioms and rules below, together with those of HPR (or of HPR_(m,n)), provide sound and complete axiom systems for HL^{pu} with respect to possibly infinite PR models (or PR models with m worlds and n agents, respectively).

⟨pu_i^R⟩p ↔ Req(R) ∧ p,
⟨pu_i^R⟩(ϕ ∨ ψ) ↔ ⟨pu_i^R⟩ϕ ∨ ⟨pu_i^R⟩ψ,
⟨pu_i^R⟩¬ϕ ↔ Req(R) ∧ ¬⟨pu_i^R⟩ϕ,
⟨pu_i^R⟩j ↔ Req(R) ∧ j,    ⟨pu_i^R⟩a ↔ Req(R) ∧ a,
⟨pu_i^R⟩@_j ϕ ↔ Req(R) ∧ @_j ⟨pu_i^R⟩ϕ,
⟨pu_i^R⟩@_a ϕ ↔ Req(R) ∧ @_a ⟨pu_i^R⟩ϕ,
⟨pu_i^R⟩⟨π⟩ϕ ↔ Req(R) ∧ ⟨Tu_i^R(π)⟩⟨pu_i^R⟩ϕ,
From ϕ ↔ ψ, we may infer ⟨pu_i^R⟩ϕ ↔ ⟨pu_i^R⟩ψ.

Proof. Soundness of the new axioms is straightforward. Completeness follows from the completeness of the static system HPR (cf. Chapter 7 of (van Ditmarsch et al., 2008) for an extensive explanation of this technique).
Example 3. In our running example of Section 1, each agent is regarded as employing a conservative upgrade to change his or her preferences. Let us write the corresponding upgrade operators of i, j and k as ⟨pu_i^{R_i}⟩, ⟨pu_j^{R_j}⟩ and ⟨pu_k^{R_k}⟩, respectively. Then the three flatmates do not reach an agreement after the conservative upgrades of all agents, i.e.,

@_i B a_r ∧ @_j B a_l ∧ @_k B a_r ∧ ⟨pu_i^{R_i}⟩⟨pu_j^{R_j}⟩⟨pu_k^{R_k}⟩(@_i B a_l ∧ @_j B a_r ∧ @_k B a_r)

is valid in M^exp, because the upgraded preferences are given by w_r <′_i w_l, w_l <′_j w_r and w_l <′_k w_r.
4 RELIABILITY DYNAMICS

A public announcement of the agents' individual preferences may change the agents' reliability attributions as well: for example, an agent may consider more reliable those agents whose preferences coincide with (or, for some reason, differ from) her own. In such cases, agent i's new reliability ordering, ≼′_i, can be given in terms of the agents' current preferences, ≤_1, ..., ≤_n,
and i's current reliability ordering, ≼_i. Thus, ≼′_i := g(≤_1, ..., ≤_n, ≼_i) for some function g. We now provide formal definitions of some such possibilities.
4.1 Reliability Change Operations

The notion of "matching preference orders" will form the basis for the reliability dynamics. The idea is that two preference orderings match each other to a certain extent if they are identical on some subset of the state space. A full match indicates that the orderings coincide on the whole domain; a partial match indicates that they coincide on some proper subset of the domain.
Definition 10 (Matching preferences). Let F = (W, {≤_i, ≼_i}_{i∈A}) be a PR frame and let i ∈ A be an agent. If ≤_i is identical with ≤_j on W′ ⊆ W, then W′ is said to be a set of match for i and j (notation: ≤_i ≡_{W′} ≤_j). Preference orders ≤_i and ≤_j are said to fully match each other iff ≤_i ≡_W ≤_j.³ FullMat(i) denotes the set of agents in A \ {i} having a full match with i. Preference orders ≤_i and ≤_j have zero match with each other iff there is no W′ ⊆ W with |W′| ≥ 2 such that ≤_i ≡_{W′} ≤_j.⁴ ZeroMat(i) denotes the set of agents in A \ {i} having zero match with i.
With these definitions in place, we can define some operations for reliability change.

Definition 11 (Full, Zero matching upgrade). Agent i puts those agents that have full/zero match with her own preference ordering above those that do not, keeping her old reliability ordering within each of the two zones. More precisely, if ≼_i is agent i's current reliability ordering, then her new reliability ordering ≼′_i is defined by:

j ≼′_i k iff (j, k ∈ V and j ≼_i k) or (k ∈ V and j ∉ V) or (j, k ∉ V and j ≼_i k).

Here V = FullMat(i) and V = ZeroMat(i), respectively.

Once again, we could consider more general definitions of upgrade policies as well, but we stick to simple definitions to convey the main idea. Note that both the full matching and the zero matching upgrade preserve total preorders (and thus upgraded models belong to our class of semantic models).
³ Note how, by the finiteness of W (and the reflexivity of the preference relations), there is always a maximal X ⊆ W such that ≤_i ≡_X ≤_j for every pair of agents i, j.
⁴ For the same reason, there is always a minimal X ⊆ W such that ≤_i ≡_X ≤_j for every pair of agents i, j.
4.2 Expressing the Reliability Dynamics

To describe the reliability dynamics of the previous section, the following dynamic operators are added to the static syntax of HL. HL^{rc} is defined to be the expansion of HL with all operators ⟨rc_i^E⟩, where i is an agent-nominal and E is a pair of HL-formulas of the form @_a χ (recall: a is a world-nominal). The underlying semantic intuition for ⟨rc_i^E⟩ is the following: given a PR-model M, the pair E = (@_{a_1} χ_1, @_{a_2} χ_2) can be regarded as a partition (i.e., an equivalence relation on agents), in the sense that ({i ∈ A | M, (ā_k, i) ⊨ χ_k})_{1≤k≤2} forms a partition of A, and the reliability ordering ≼_ī of the original PR-model M is rewritten into the updated reliability ordering ≼′_ī as in Definition 11 of Section 4.1.
Definition 12 (Operators). Given any pair E = (ϕ_1, ϕ_2) of formulas of the form @_a χ, a formula Eq(E) is defined as the conjunction of [1_A](ϕ_1 ∨ ϕ_2) (exhaustiveness over agents) and ¬⟨1_A⟩(ϕ_1 ∧ ϕ_2) (pairwise disjointness over agents). Given a pair E = (@_{a_1} χ_1, @_{a_2} χ_2) and a PR-model M = (W, {≤_i, ≼_i}_{i∈A}, V), define:

M, (w, j) ⊨ ⟨rc_i^E⟩ϕ iff M, (w, j) ⊨ Eq(E) and rc_i^E(M), (w, j) ⊨ ϕ,

where rc_i^E(M) is the same model as M except that ≼_ī is replaced by the ≼′_ī of Definition 11.
Definition 13 (Relational transformer). Let E = (ϕ_1, ϕ_2) be a pair. A relational transformer Tr_i^E is a function from relational expressions to relational expressions defined as follows.

Tr_i^E(α) := α (α ∈ {1_A, 1_W, ≤, ≥}),
Tr_i^E(⊑_i) := (⊑_i ∩ (?ϕ_1 ∪ ?ϕ_2)) ∪ (1_A ∩ ?(ϕ_1, ϕ_2)),
Tr_i^E(⊒_i) := (⊒_i ∩ (?ϕ_1 ∪ ?ϕ_2)) ∪ (1_A ∩ ?(ϕ_2, ϕ_1)),
Tr_i^E(⊑_k) := (?@_i k ∩ Tr_i^E(⊑_i)) ∪ (?¬@_i k ∩ ⊑_k) (k ≠ i),
Tr_i^E(⊒_k) := (?@_i k ∩ Tr_i^E(⊒_i)) ∪ (?¬@_i k ∩ ⊒_k) (k ≠ i),
Tr_i^E(π ∪ ρ) := Tr_i^E(π) ∪ Tr_i^E(ρ),
Tr_i^E(π ∩ ρ) := Tr_i^E(π) ∩ Tr_i^E(ρ),
Tr_i^E(?(ϕ, ψ)) := ?(⟨rc_i^E⟩ϕ, ⟨rc_i^E⟩ψ),
Tr_i^E((π, k) ∩_j (ρ, l)) := (Tr_i^E(π), k) ∩_j (Tr_i^E(ρ), l),
Tr_i^E(−α) := −Tr_i^E(α),

where α ∈ {1_W, ≤, ≥} ∪ {1_A, ⊑_k, ⊒_k | k ∈ N_2}. When k ≠ i, i.e., when k and i are syntactically distinct agent nominals, the reader may wonder why we need the generalized test operators "?@_i k" and "?¬@_i k" in the definitions of Tr_i^E(⊑_k) and Tr_i^E(⊒_k). This is because the
same agent might have two distinct (syntactic) names.
Based on a strategy similar to that for Theorem 2, we can now prove the following theorem.

Theorem 3. The axioms and rules below, together with those of HPR (or of HPR_(m,n)), provide sound and complete axiom systems for HL^{rc} with respect to possibly infinite PR models (or PR models with m worlds and n agents, respectively).

⟨rc_i^E⟩p ↔ Eq(E) ∧ p,
⟨rc_i^E⟩(ϕ ∨ ψ) ↔ ⟨rc_i^E⟩ϕ ∨ ⟨rc_i^E⟩ψ,
⟨rc_i^E⟩¬ϕ ↔ Eq(E) ∧ ¬⟨rc_i^E⟩ϕ,
⟨rc_i^E⟩j ↔ Eq(E) ∧ j,    ⟨rc_i^E⟩a ↔ Eq(E) ∧ a,
⟨rc_i^E⟩@_j ϕ ↔ Eq(E) ∧ @_j ⟨rc_i^E⟩ϕ,
⟨rc_i^E⟩@_a ϕ ↔ Eq(E) ∧ @_a ⟨rc_i^E⟩ϕ,
⟨rc_i^E⟩⟨π⟩ϕ ↔ Eq(E) ∧ ⟨Tr_i^E(π)⟩⟨rc_i^E⟩ϕ,
From ϕ ↔ ψ, we may infer ⟨rc_i^E⟩ϕ ↔ ⟨rc_i^E⟩ψ.
Example 4. After Isabella and John come to know the others' preferences, we regard, in our running example, that Isabella uses the full-matching reliability change ⟨rc_i^{E_i}⟩ and John employs the zero-matching reliability change ⟨rc_j^{E_j}⟩. Unlike in Example 3, let us first consider the reliability changes of Isabella and John, and then take the conservative upgrades of all agents. This process and the resulting agreement among the agents are formalized as

⟨rc_i^{E_i}⟩⟨rc_j^{E_j}⟩⟨pu_i^{R_i}⟩⟨pu_j^{R_j}⟩⟨pu_k^{R_k}⟩(@_i B a_r ∧ @_j B a_r ∧ @_k B a_r),

which is valid in M^exp, because Isabella's reliability ordering is changed into i ≺′_i j ≺′_i k, whereas John's reliability attribution to Isabella does not change.
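Under the illustrative encoding developed above, the whole process of Example 4 can be replayed end to end; the reliability changes are computed first, and the subsequent conservative upgrades are each taken on the originally announced preferences, in line with the simultaneous reading discussed below:

```python
# Example 4 replayed: Isabella's full-matching and John's zero-matching
# reliability changes, then conservative preference upgrades for everyone
# (each computed from the originally announced preferences).
rel2 = dict(REL)
rel2['i'] = matching_upgrade('i', full_mat('i'))   # now i ≺'_i j ≺'_i k
rel2['j'] = matching_upgrade('j', zero_mat('j'))   # Isabella stays most reliable

pref2 = {y: conservative_upgrade(y, PREF, rel2[y]) for y in AGENTS}

# Agreement: everyone now believes a_r (the print goes on the right wall).
assert all(believes(y, {'wr'}, pref2[y]) for y in AGENTS)
```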
We note here that while the main focus of this work is to model joint deliberation in the form of simultaneous preference and reliability upgrades, the model operations and modalities of Sections 3.1 and 4.2 deal with single-agent upgrades. This presentation style has been chosen in order to simplify notation and improve readability, but the provided definitions can easily be extended to match our goals. In particular, the model operations of Definitions 8 and 12 can be extended to simultaneous upgrades by asking for a list R of lexicographic lists (with R_i the list for agent i) and for a list E of partition lists (with E_i the list for agent i), respectively. The corresponding modalities, ⟨pu^R⟩ and ⟨rc^E⟩, can still be axiomatised by the presented systems with some simple modifications.
5 CONCLUSION

This work continues the line of study in (Ghosh and Velázquez-Quesada, 2015a; Ghosh and Velázquez-Quesada, 2015b) and explores further the interplay between the preferences that agents have about the world around them and the reliability attributions they make with respect to one another. We deal both with preference change based on reliability and with reliability change based on preferences, and we propose two-dimensional dynamic hybrid logics to express such changes. The main technical results are sound and complete axiomatizations, which also lead to decidability (provided the numbers of agents and of states are fixed finite numbers). In the process, we also discuss agents' beliefs in such situations, e.g., relating reliability attributions to the notion of belief (cf. the running example in the text). The novel contribution of this work is the study of changes in the reliability attributions of agents based on their preferences.

To conclude, let us provide some pointers towards future work: (1) What other reasonable preference and reliability upgrade policies can there be, and how can they be modelled? (2) How can one investigate the role of knowledge in such changes, especially if manipulation comes into play? (3) What would be the characterizing conditions for reaching consensus in such deliberative processes? We endeavour to provide answers to such questions in future work.⁵

⁵ The authors would like to thank the anonymous reviewers for their helpful and constructive comments that greatly contributed to improving the final version of the paper. The work of the second author was partially supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B), Grant Number 15K21025, and the JSPS Core-to-Core Program (A. Advanced Research Networks).
REFERENCES
Arrow, K. J., Sen, A. K., and Suzumura, K., editors (2002).
Handbook of Social Choice and Welfare. Elsevier.
Two volumes.
Boutilier, C. (1994). Conditional logics of normality: A
modal approach. Artificial Intelligence, 68(1):87–154.
Burgess, J. P. (1984). Basic tense logic. In Gabbay, D.
and Guenthner, F., editors, Handbook of Philosophi-
cal Logic, volume II, chapter 2, pages 89–133. Reidel.
Demolombe, R. (2001). To trust information sources: A
proposal for a modal logic framework. In Castel-
franchi, C. and Tan, Y.-H., editors, Trust and De-
ception in Virtual Societies. Kluwer Academic, Dor-
drecht.
Demolombe, R. (2004). Reasoning about trust: A for-
mal logical framework. In Jensen, C. D., Poslad, S.,
and Dimitrakos, T., editors, iTrust, volume 2995 of
Lecture Notes in Computer Science, pages 291–303.
Springer.
Endriss, U. (2011). Logic and social choice theory. In
Gupta, A. and van Benthem, J., editors, Logic and
Philosophy Today, volume 2, pages 333–377. College
Publications.
Falcone, R., Barber, K. S., Sabater-Mir, J., and Singh, M. P.,
editors (2008). Trust in Agent Societies, 11th Inter-
national Workshop, TRUST 2008, Estoril, Portugal,
May 12-13, 2008. Revised Selected and Invited Pa-
pers, volume 5396 of Lecture Notes in Computer Sci-
ence. Springer.
Falcone, R. and Castelfranchi, C. (2001). Social trust: A
cognitive approach. In Castelfranchi, C. and Tan, Y.-
H., editors, Trust and Deception in Virtual Societies,
pages 55–90. Kluwer Academic, Dordrecht.
Gargov, G., Passy, S., and Tinchev, T. (1987). Modal envi-
ronment for boolean speculations, preliminary report.
In Skordev, D., editor, Mathematical Logic and Its Ap-
plications, pages 253–263. Plenum Press.
Ghosh, S. and Velázquez-Quesada, F. R. (2015a). Agreeing to agree: Reaching unanimity via preference dynamics based on reliable agents. In Bordini, R., Elkind, E., Weiss, G., and Yolum, P., editors, AAMAS 2015, pages 1491–1499.
Ghosh, S. and Velázquez-Quesada, F. R. (2015b). A note on reliability-based preference dynamics. In van der Hoek, W., Holliday, W. H., and Wang, W.-f., editors, LORI 2015, pages 129–142.
Goldblatt, R. (1992). Logics of Time and Computation.
Number 7 in CSLI Lecture Notes. Center for the Study
of Language and Information, Stanford, CA, 2nd edi-
tion.
Goldman, A. I. (2001). Experts: Which ones should you
trust? Philosophy and Phenomenological Research,
63(1):85–110.
Grüne-Yanoff, T. and Hansson, S. O., editors (2009). Preference Change, volume 42 of Theory and Decision Library. Springer.
Harel, D., Kozen, D., and Tiuryn, J. (2000). Dynamic Logic.
MIT Press, Cambridge, MA.
Herzig, A., Lorini, E., Hübner, J. F., and Vercouter, L. (2010). A logic of trust and reputation. Logic Journal of the IGPL, 18(1):214–244.
Holliday, W. H. (2010). Trust and the dynamics of testimony. In Kurzen, L., Grossi, D., and Velázquez-Quesada, F. R., editors, Logic and Interactive RAtionality. Seminar's yearbook 2009, pages 118–142. Institute for Logic, Language and Computation, Universiteit van Amsterdam, Amsterdam, The Netherlands.
Liau, C.-J. (2003). Belief, information acquisition, and trust in multi-agent systems: a modal logic formulation. Artificial Intelligence, 149(1):31–60.
Lorini, E., Jiang, G., and Perrussel, L. (2014). Trust-based belief change. In Schaub, T., editor, ECAI 2014, 21st European Conference on Artificial Intelligence, 18-22 August 2014, Prague, Czech Republic, Including Prestigious Applications of Intelligent Systems (PAIS 2014), volume 263 of Frontiers in Artificial Intelligence and Applications, pages 549–554. IOS Press.
Marx, M. and Mikulás, S. (2001). Products, or how to create modal logics of high complexity. Logic Journal of the IGPL, 9(1):71–82.
Rodenhäuser, B. (2014). A Matter of Trust: Dynamic Attitudes in Epistemic Logic. PhD thesis, Institute for Logic, Language and Computation (ILLC), Universiteit van Amsterdam (UvA), Amsterdam, The Netherlands. ILLC Dissertation Series DS-2014-04.
Sano, K. (2010). Axiomatizing hybrid products: How can
we reason many-dimensionally in hybrid logic? Jour-
nal of Applied Logic, 8(4):459–474.
Seligman, J., Liu, F., and Girard, P. (2013). Knowledge,
friendship and social announcement. In van Benthem,
J. and Liu, F., editors, Logic Across the University:
Foundations and Applications, volume 47 of Studies
in Logic, pages 445–469. College Publications.
van Benthem, J. (2007). Dynamic logic for belief revision.
Journal of Applied Non-Classical Logics, 17(2):129–
155.
van Benthem, J., van Eijck, J., and Kooi, B. (2006). Log-
ics of communication and change. Information and
Computation, 204(11):1620–1662.
van Ditmarsch, H., van der Hoek, W., and Kooi, B. (2008).
Dynamic Epistemic Logic. Number 337 in Synthese
Library. Springer.