COOPERATIVE REPLIES TO UNBELIEVABLE ASSERTIONS
A Dialogue Protocol based on Logical Interpolation
M. Nykänen
School of Computing, University of Eastern Finland, Kuopio, Finland
S. Eloranta, O. Niinivaara
Department of Computer Science, University of Helsinki, Helsinki, Finland
R. Hakli
Helsinki Institute for Information Technology, Helsinki, Finland
Keywords:
Belief revision, Dialogue protocols, Convictions, Integrity constraints, Argumentation.
Abstract: We propose a dialogue protocol for situations in which an agent makes to another agent an assertion that
the other agent finds impossible to believe. In this interaction, unbelievable assertions are rejected using
explanations formed by logical interpolation, and new assertions are made so that all previous rebuttals
are taken into account.
1 INTRODUCTION
When two agents carry out a conversation with each
other, one of them may well assert something which
the other cannot believe for some reason. Witness the
following example (Hansson, 1991):
Conversation 1.
Amy. Last summer I saw a three-toed woodpecker
just outside my window. I could clearly see its
red forehead and its red rump.
Bob. You must be mistaken. A three-toed wood-
pecker does not have a red forehead or a red rump.
Amy. You make me uncertain. Thinking about it, the
only thing I am certain of is that the bird had a red
forehead.
We study here such conversations: Both agents
have beliefs that they are certain of and that they are
not willing to give up during the conversation. Here,
Bob’s ornithological knowledge is one example, and
Amy’s certainty of seeing a bird with a red forehead
is another. When hearing an unbelievable assertion,
Bob faces the task of helping Amy by offering infor-
mative rebuttals. Amy then faces the task of generat-
ing another assertion while taking Bob’s rebuttal into
account. This interaction continues until either Amy
comes up with an assertion which Bob can consider
possible or she concludes that they have irreconcil-
able differences, at least as far as this conversation is
concerned.
We consider these conversations in the context of
belief revision (Alchourrón et al., 1985) in the pres-
ence of what we call convictions. By these convic-
tions we mean those beliefs the agent refuses to give
up, at least during the current conversation. In several
fields there are important concepts that can be inter-
preted as convictions: In computer science, integrity
constraints (Reiter, 1988) are needed to ensure con-
sistency of databases. In philosophy, the properties
of knowledge differ from those of belief (Hintikka,
1962), and people take a different stand on what they
take to know and not merely believe. In nonpriori-
tized belief revision, core beliefs are immune to revi-
sion (Hansson, 1999, gives a survey). In theories of
argumentation, agents have dark-side commitments,
which are their fundamental commitments that they
find extremely hard to retract once stated in a conver-
sation (Walton and Krabbe, 1995, pp. 11–12).
We encounter the problem of what an agent should
do when another agent asserts something that con-
flicts with his convictions. In this paper we propose
a solution, in which the agents carry out a conversa-
tion as an interactive preparatory phase before belief
revision. In this phase they seek together a final asser-
tion which does not conflict with either agent’s con-
victions. In Conversation 1, Amy's second assertion
might serve as something which they both might be
able to believe. We do not consider what happens
after this preparatory phase; that is, we do not concern ourselves with whether the agents actually revise their
epistemic states or not.
Focusing on these assertion-rebuttal conversations
immediately raises three questions: First, how can an
agent form his rebuttal to the unbelievable assertion?
Second, how should the other agent form her¹ next
assertion on the basis of her epistemic state while tak-
ing into account the new information in the rebuttal?
And third, how can the conversation stay focused on
its original subject?
Our answer to the first question is to use logi-
cal interpolation, since it gives a formula which is
entailed by the convictions of the agent and entails
the negation of the unbelievable assertion. Moreover,
it can be read as a description of how, or an explanation
of why (Hintikka and Halonen, 1999), the assertion
conflicted with the convictions of the agent, thereby
bringing some currently relevant part of his convic-
tions into light. Our answer to the second question is
to use hypothetical thinking, as if the agent thought:
“if I were to believe this rebuttal, then I would have
this belief about my topic instead of the one I ex-
pressed before”. She will form her next assertion as
the least disbelieved alternative to the topic given the
new information. Our answer to the third question is
to require that their utterances remain relevant to the original subject
in the letter-sharing sense (Makinson, 2009, Defini-
tion 1.1), which is guaranteed by interpolation.
In related work, there are some approaches in
which agents have convictions but do not use inter-
action for conflict resolution. These include approaches
to nonprioritized belief revision that secure
some beliefs from revision. For instance, in Accom-
modative Belief Revision (Eloranta et al., 2008), the
agent tries to guess what the other would have said,
had she had his knowledge. In our solution, the agent
does not have to guess what the other agent would
believe; instead, he gives her a chance to tell him.
Then there are approaches in which interaction is
used as a preprocessing step before belief revision,
but the possibility of agents having their private con-
victions is not considered. These include such merg-
ing approaches as mutual belief revision (Jin et al.,
2007) and belief negotiation (Booth, 2006) in which
all the agents’ beliefs are weakened until they no
longer contradict each other. As opposed to that, our
solution is asymmetric. We have one agent, who is ea-
ger to inform another agent about some of her beliefs,
whereas the other agent is willing to reply and share
some of his convictions in case he finds the original
assertion unbelievable. Application areas with such
a setting include knowledge base systems in which
some agents (either human beings or software agents)
collect information and send it to one agent acting as
a knowledge base with integrity constraints.
¹We adhere to the convention that the asserting agent
(such as Amy in Conversation 1) is female, whereas the rebutting
agent (such as Bob) is male, and refer to them as
“she” and “he” as well as by name.
Certain types of argumentation-based dialogues
(Walton and Krabbe, 1995; Parsons et al., 2003) can
also be viewed as preparatory phases for belief revi-
sion: They aim at finding out whether a particular as-
sertion should be believed by exchanging information
about arguments that either support or undermine it.
In our approach, however, the goal is to find out what
could be believed about the topic when the agents’
convictions are taken into account, not whether a par-
ticular proposition should be believed or not.
For example, van Veenen and Prakken (2006) in-
cluded asking “Why did you rebut my assertion?”
among the moves in their negotiation protocol as an
embedded persuasion game. However, their idea is
to bring the grounds for the rebuttal to light so that
they too can be subject to further scrutiny by the other
agent within this conversation. In contrast, the pur-
pose of our dialogues is not to persuade the other
agent to accept the original assertion, but to find an
alternative assertion that is acceptable.
Our aim is that the agents’ assertions in the dia-
logues satisfy the Cooperative Principle presented by
Grice (1989, Chapters 2 and 3) to govern conversa-
tions between cooperative agents. These maxims rule
out the naive extremes of dealing with an unacceptable
input, that is, either terminating the dialogue or replying
with everything one knows about the subject. Something
more is needed; thus we propose the use of logical
interpolation as a cooperative reply to an assertion
that an agent is convinced is false.
The paper is organized as follows. In section 2,
we will introduce our notations and present the inter-
polation principle as a tool for generating cooperative
replies to unbelievable assertions. In section 3, we
will propose guidelines for generating a new modi-
fied assertion based on this reply. Section 4 presents
the conversation protocol driven by these interpolants
and shows that the conversations will always end with
a rational outcome. In section 5, we will give conclu-
sions and propose some directions for future research.
2 COOPERATIVE REPLIES
We will consider dialogues, that is, conversations be-
tween two agents. We assume that these two agents,
named A and B (such as Amy and Bob in Conversation 1),
have epistemic states, which we denote by A and B,
respectively. These states contain the belief sets
consisting of all the beliefs they currently
hold; these sets we denote by B(A) and B(B). In
these belief sets, beliefs are expressed with formulas
of classical propositional logic; that is, they are beliefs
about the actual state of affairs, not, for
instance, beliefs about each other's beliefs. On the one
hand, each agent may be willing to give up some of
these beliefs given new evidence to the contrary. On
the other hand, (s)he may regard some of them as con-
victions which (s)he will hold on to, regardless of any
such new evidence. We denote the sets of convictions
for agents A and B by C(A) and C(B), respectively.
We assume that the sets of beliefs and the sets
of convictions are non-contradictory and deductively
closed, that is, B(A) = Cn(B(A)), etc. We also assume
that what an agent is convinced of, (s)he also
believes, that is, C(A) ⊆ B(A) and C(B) ⊆ B(B).
Agent A is the initiator of the conversation. We are
focusing entirely on the situation in which agent B is
convinced that the assertion made by A is false. We
are not concerned with whether agent B will give priority to
the assertion if it does not contradict his convictions.
We use lower-case Greek letters to denote propo-
sitional formulas. Agent A initiates the conversation
with her initial assertion ϕ. Whenever the assertion is
acceptable to agent B, the dialogue ends. Therefore
we shall presume this initial assertion to be unbeliev-
able, that is, C(B) |= ¬ϕ. As already mentioned in
Section 1, we propose that agent B use an interpolant
for this entailment as a cooperative reply in this
situation, since (i) it is entailed by C(B), (ii) it
entails ¬ϕ, and (iii) it employs only vocabulary that
appears in the original assertion (in terms of propositional
variables).
Let V denote all the propositional variables and
Voc(α) ⊆ V those appearing in the formula α. Let us
recall what interpolation is:
Theorem 1 (Craig interpolation, for propositional
logic). Given two propositional formulas α and β,
if α |= β, then there is some interpolant θ such that
(i) α |= θ, (ii) θ |= β, and (iii) Voc(θ) ⊆ Voc(α) ∩ Voc(β).
In the full paper (Nykänen et al., 2011), we prove
this theorem for the system G3cp of sequent calculus
(Negri and von Plato, 2001, Chapter 3.1) in a way
which provides an explicit algorithmic construction
for the interpolant θ given the formulas α and β.
In Conversation 1, α is a formula representing
(some relevant part of) Bob’s convictions C(B ), β is
the negation ¬ϕ of Amy’s initial assertion ϕ, and θ is
a suggestion for an explanation of why α rules out β.
Example 1. Consider Conversation 1, and let the
propositional variable p stand for “Amy saw a three-toed
woodpecker”, q stand for “Amy saw a bird with
a red forehead”, r stand for “Amy saw a bird with
a red rump”, and s stand for “Amy saw a lark”. The
conversation starts when Amy asserts p ∧ q ∧ r. Assume
Bob's convictions include (p ∨ s) → (¬q ∧ ¬r).
Then (p ∨ s) → (¬q ∧ ¬r) entails an interpolant
p → (¬q ∧ ¬r), which again entails the negation of Amy's
assertion, ¬(p ∧ q ∧ r).
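To make the construction concrete, here is a small Python sketch (ours, not the sequent-calculus construction of the full paper) that computes the strongest interpolant by brute force: assuming α |= β, it projects the models of α onto the shared vocabulary, which is feasible only for small vocabularies such as that of Example 1.

from itertools import product

def models(formula, variables):
    # All truth assignments over `variables` satisfying `formula`,
    # where a formula is any function from an assignment (dict) to bool.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            yield assignment

def strongest_interpolant(alpha, voc_alpha, voc_beta):
    # Assuming alpha |= beta, the projection of alpha's models onto the
    # shared vocabulary is the strongest interpolant; we return it as the
    # set of its models over that vocabulary.
    shared = [x for x in voc_alpha if x in voc_beta]
    return {tuple((x, w[x]) for x in shared)
            for w in models(alpha, voc_alpha)}

# Example 1: alpha is Bob's conviction, beta the negation of Amy's assertion.
alpha = lambda w: (not (w["p"] or w["s"])) or (not w["q"] and not w["r"])
theta = strongest_interpolant(alpha, ["p", "q", "r", "s"], ["p", "q", "r"])
for row in sorted(theta):
    print(row)    # exactly the assignments over {p, q, r} satisfying p -> (~q & ~r)

Running it prints the five shared-vocabulary assignments satisfying p → (¬q ∧ ¬r), the interpolant of Example 1; a proof-theoretic construction such as the one in the full paper instead extracts an interpolant directly from a derivation of α |= β.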
Let us consider how well this suggestion fares
in light of Grice’s Maxims (Grice, 1989, Chapters 2
and 3). These maxims elaborate his general Cooper-
ative Principle into more specific conversational rules
which the participants can be expected to observe:
Maxim of Quantity. (i) Make your contribution as
informative as required (for the current purposes
of the exchange). (ii) Do not make your contribu-
tion more informative than is required.
Maxim of Quality. Try to make a contribution which
is true. More specifically: (i) Do not say what you
believe to be false. (ii) Do not say that for which
you lack adequate evidence.
Maxim of Relevance. Be relevant.
Maxim of Manner. Be clear.
Using an interpolant as a reply conforms to part (i)
of the maxim of Quantity, because it conveys informa-
tion that agent A supposedly was not aware of, since it
entails the negation of what she said. As for part (ii),
the amount of information conveyed can be controlled
by the selection of a suitable interpolation formula.
Using an interpolant as the reply also conforms
to the maxim of Quality: not only does agent B re-
ply with a belief of his, but with a conviction, and we
assume here that a rational agent does not obtain con-
victions without proper evidence.
According to the maxim of Relevance, a reply
should somehow be related to the preceding conver-
sation. A natural syntactic concept is letter-sharing:
two formulas are relevant to each other, if they share
some propositional variable (Makinson, 2009, Defini-
tion 1.1). In this regard, an interpolant is an extremely
relevant reply, since it consists only of variables in
both C(B) and the current suggestion by agent A, by
property (iii) of Theorem 1. Makinson (2009) notes
that although letter-sharing is not wholly unproblem-
atic as a notion of relevance, it does have its uses in
computational contexts. Hence we define here the
topic of the conversation to be the variables Voc(ϕ)
about which agent A wants to have a conversation
with agent B.
Another more refined concept of relevance could
be to split the beliefs of an agent into disjoint parts,
where each part consists of the agent’s beliefs about
a particular subject matter (Parikh, 1999). Since
Theorem 1 can be extended to such split theories
(Kourosias and Makinson, 2007, Theorem 1.1), we
anticipate that our approach will apply in this setting as well.
There are also more semantic accounts of rele-
vance. In the theory of Sperber and Wilson (2004),
the main determinant of relevance of a reply is how
much positive cognitive effect (such as learning, set-
tling doubt, or correcting mistaken assumptions) it
creates in the recipient. In this respect too, a reply by
interpolant fares well since it contradicts what
agent A supposedly believes and should thus invoke a
process of belief revision.
Regarding the maxim of Manner, one could ar-
gue that it does not concern conversations such as
ours, where the messages exchanged are formulated
in logic instead of in a natural language. We note,
however, that even in our conversations agent B can
tailor the form of his interpolant to enhance its clarity
to agent A.
3 ON GENERATING NEW
ASSERTIONS
Let us consider what agent A should do when
she learns a formula θ that tells her why her previous
assertion was unbelievable.
If θ conflicts with the convictions of agent A, we
take it that this dialogue should fail. If the rebuttal is not
unbelievable to agent A, then she can either (i) accept
the input as a conviction (since it was B's conviction),
(ii) accept the input as a belief, or (iii) treat the in-
put conditionally. The treatment may depend, e.g., on
how reliable the agent considers the other agent. By
the maxim of Quality, agent A then continues the di-
alog with a new assertion ψ, which she accordingly
(i) believes (now that she is convinced that θ), (ii) be-
lieves (now that she has come to believe that θ), or
(iii) would believe, if she were to believe that θ.
Denoting the doxastic conditional “if I were to
believe θ, then I would also believe ψ” as θ > ψ,
our requirement of the maxim of Quality becomes
A |= θ > ψ, where this entailment is defined through
the Ramsey test: if agent A were to revise her epistemic
state A with θ, would ψ be believed in the
resulting state A ∗ θ? Thus in all three alternatives, A
may answer ψ if and only if ψ ∈ B(A ∗ θ). Note that
this revision might be only tentative: the actual epistemic
state of agent A might still be A.
Now let us assume that the revision operator ∗
that agent A uses (either when revising her epistemic
state or when evaluating conditionals) satisfies the basic
rationality criteria (R1)–(R4) for belief revision
(Alchourrón et al., 1985) and the rationality criterion
(IR1) for iterated belief revision (Darwiche and
Pearl, 1997). That is:
α ∈ B(A ∗ α). (R1)
If ¬α ∉ B(A) then B(A ∗ α) = Cn(B(A) ∪ {α}). (R2)
If α is satisfiable then B(A ∗ α) is consistent. (R3)
If α ≡ β then B(A ∗ α) = B(A ∗ β). (R4)
If α |= β then B(A ∗ α) = B((A ∗ β) ∗ α). (IR1)
Postulate (R1) says that the new piece of informa-
tion is accepted, that is, the insertion succeeds. Pos-
tulate (R2) says that if the new piece of information
is compatible with the old beliefs, then none of the old
beliefs is discarded, and nothing is added to the belief
set beyond what is entailed by the old beliefs together
with the new information. Postulate (R3) says that revising
with a satisfiable formula must not make the belief set inconsistent.
Postulate (R4) calls for syntax independence. Pos-
tulate (IR1) says that if α |= β, then the beliefs in
the epistemic state obtained when learning first β and
then α are the same as when learning just α in the first
place.
By the maxim of Relevance, agent A must bear in
mind (and take into account) all the rebutting formu-
las during the dialog. Let γ denote the conjunction of
those formulas. Thus, in the three alternatives above,
we must use γ in place of θ.
By the basic postulates for belief revision, we have
A |= θ > ψ if and only if A ∗ θ |= θ > ψ. Now
assume that after n rebuttals, γₙ = θ₁ ∧ θ₂ ∧ … ∧ θₙ.
Then by postulate (IR1), we have A |= γₙ > ψ if and
only if A ∗ γ₁ ∗ γ₂ ∗ … ∗ γₙ |= γₙ > ψ. Thus the truth
value of the conditional does not depend on whether
the agent has actually revised her epistemic state along
the way or not: all three alternatives for A's actions
remain equivalent in this respect.
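As a concrete illustration (our toy example, not part of the paper), the following sketch represents an epistemic state as a plausibility ordering over worlds and uses lexicographic revision, an operator known to satisfy (R1)–(R4) and (IR1). The Ramsey test then becomes a check on the most plausible γ-worlds, and the two printed values illustrate that revising stepwise by the rebuttals does not change the conditional's truth value. The ranking function, the rebuttals θ₁, θ₂, and the candidate ψ are our choices for the woodpecker scenario.

from itertools import product

VOC = ["p", "q", "r"]
WORLDS = [dict(zip(VOC, vals)) for vals in product([False, True], repeat=3)]

def rank(w):
    # Amy's toy plausibility: giving up p ("three-toed woodpecker") costs
    # more than giving up q ("red forehead") or r ("red rump").
    return (0 if w["p"] else 3) + (0 if w["q"] else 1) + (0 if w["r"] else 1)

# Epistemic state = list of layers of equally plausible worlds, best first.
A0 = [layer for layer in
      ([w for w in WORLDS if rank(w) == k] for k in range(6)) if layer]

def revise(layers, alpha):
    # Lexicographic revision: every alpha-world becomes more plausible than
    # every non-alpha-world; relative order inside each group is preserved.
    sat = [[w for w in layer if alpha(w)] for layer in layers]
    non = [[w for w in layer if not alpha(w)] for layer in layers]
    return [layer for layer in sat + non if layer]

def believes(layers, psi):
    # psi is in the belief set iff it holds in every most plausible world.
    return all(psi(w) for w in layers[0])

def conditional(layers, gamma, psi):
    # Ramsey test for the doxastic conditional gamma > psi.
    return believes(revise(layers, gamma), psi)

theta1 = lambda w: not w["p"] or not w["q"]     # Bob: "no red forehead"
theta2 = lambda w: not w["p"] or not w["r"]     # Bob: "no red rump"
gamma2 = lambda w: theta1(w) and theta2(w)      # conjunction of both rebuttals
psi    = lambda w: w["p"] and not w["q"] and not w["r"]

# Since gamma2 |= theta1, (IR1) predicts that both checks agree; both print True.
print(conditional(A0, gamma2, psi))
print(conditional(revise(A0, theta1), gamma2, psi))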
Our framework allows for several methods for
constructing a formula ψ satisfying this require-
ment. Some methods are discussed in the full paper
(Nykänen et al., 2011).
4 A CONVERSATION PROTOCOL
We will now give a conversation protocol in which
agent B uses interpolation to create rebuttals. In our
protocol, agent A starts with an assertion ϕ, which
also fixes the topic of the conversation. When agent B
receives an assertion which conflicts with C(B), he
answers with an interpolant θ. In the protocol, ψ con-
tains the most recent assertion made by agent A, and γ
is the conjunction of all the rebuttals made by agent B
to her previous assertions, as above. The protocol is
depicted as follows:
CONVERSATION PROTOCOL
1  ψ ← ϕ; γ ← ⊤
2  A asserts ψ
3  while C(B) |= ¬ψ with some interpolant θ
4     do B replies that he is convinced of θ
5        γ ← γ ∧ θ
6        if C(A) |= ¬γ
7           then A says that their convictions
                conflict with each other
8                return FAIL
9        ψ ← some formula chosen by A such
             that A |= γ > ψ
10       A asserts ψ
11 B replies that he too considers this ψ believable
12 return SUCCESS with ψ
Selecting the next assertion ψ on line 9 is possi-
ble if and only if C(A) ∪ {γ} is consistent, and this
is guaranteed by line 6. If our conversation protocol
terminates successfully (on line 12) then we do have
an agreement: both agents could believe the final as-
sertion ψ. For agent B, this follows by line 3. For
agent A, this is an invariant of the while loop: it holds
before the loop by line 1 and the maxim of Quality (i),
and it continues to hold after each execution of the
loop body by line 9.
Notice that the algorithm uses the current epistemic
states of the agents. As to the convictions the agents
have, this causes no problems, because the agents do
not give up their convictions. The beliefs of agent B
are not used in the protocol. As to the beliefs of
agent A, whether she has actually revised her epis-
temic state with γ or not does not affect the truth value
of the conditional, as discussed in section 3.
The protocol could generate a conversation like
the following:
Conversation 2.
Amy. I saw a bird with a red forehead and a red rump.
It was a three-toed woodpecker!
Bob. A three-toed woodpecker does not have a red
forehead.
Amy (thinking to herself). In that case I would have
to give up believing either that I saw a three-toed
woodpecker or a red forehead. I prefer to keep
believing the former. Hence I can take Bob’s re-
buttal into account by giving up the latter. I can
also keep believing in the red rump, since he did
not challenge that part.
(aloud to Bob). In that case I think I saw a three-
toed woodpecker, which had a red rump but no
red forehead.
Bob. A three-toed woodpecker does not have a red
rump either.
Amy (after similar thinking). Well, in that case I
think I saw a three-toed woodpecker, but it had
neither a red rump nor a red forehead.
Bob. Now that is something I could believe!
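The same exchange can be traced mechanically. The following sketch (ours, for illustration only) renders the protocol's control flow in Python: the strategy functions and the hard-coded tables that reproduce Conversation 2 are our own names and data, and in a real agent they would be implemented with interpolation and the revision-based choice of ψ from Section 3.

def conversation(phi, B_rebuttal, A_conflicts, A_next_assertion):
    # Executable rendering of the protocol (lines 1-12). The strategies are
    # parameters of this sketch, not part of the paper:
    #   B_rebuttal(psi)         -> an interpolant theta if C(B) |= ~psi, else None
    #   A_conflicts(gamma)      -> True iff C(A) |= ~gamma
    #   A_next_assertion(gamma) -> some psi with A |= gamma > psi
    # gamma is kept as a list and read as the conjunction of its members.
    psi, gamma = phi, []                     # line 1
    theta = B_rebuttal(psi)                  # lines 2-3: A asserts psi, B checks
    while theta is not None:
        gamma.append(theta)                  # line 5: gamma := gamma & theta
        if A_conflicts(gamma):               # line 6
            return ("FAIL", gamma)           # lines 7-8: convictions conflict
        psi = A_next_assertion(gamma)        # line 9
        theta = B_rebuttal(psi)              # line 10, then re-test line 3
    return ("SUCCESS", psi)                  # lines 11-12

# A hard-coded run mirroring Conversation 2 (formulas as strings for readability):
rebuttals = {"p&q&r": "p -> ~q", "p&~q&r": "p -> ~r", "p&~q&~r": None}
answers   = {("p -> ~q",): "p&~q&r", ("p -> ~q", "p -> ~r"): "p&~q&~r"}
print(conversation("p&q&r",
                   lambda psi: rebuttals[psi],
                   lambda gamma: False,
                   lambda gamma: answers[tuple(gamma)]))
# -> ('SUCCESS', 'p&~q&~r')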
If our protocol terminates with failure instead (on
line 8) then this outcome is warranted as well: On the
one hand, C(B) |= γ by property (i) of Theorem 1 and
lines 3 and 5. On the other hand, C(A) |= ¬γ by line 6.
Agent A can even explain this conflict between their
convictions with an interpolant corresponding to the
entailment on line 6, thereby expressing her own con-
victions about the topic. This could be useful if the
agents attempt to reconcile their convictions some-
how, but we do not consider such attempts here.
If we assume that agent A keeps her assertions ψ
relevant to the topic in our chosen letter-sharing sense,
then our protocol becomes finite, albeit O(2^|Voc(ϕ)|) in
the worst case.
Theorem 2 (Finiteness). Assume that the assertions
ψ by agent A in our conversation protocol satisfy
Voc(ψ) ⊆ Voc(ϕ). Then the maximum number of
times its while loop can be executed is
|{w ↾ Voc(ϕ) : w ∈ Mod(C(A)) \ Mod(C(B))}|.²
Proof. See the full paper (Nykänen et al., 2011).
²Here f ↾ D = {⟨x, f(x)⟩ : x ∈ D} denotes the restriction
of the function f to the domain D.
However, we expect actual conversations to terminate
in far fewer rounds than the pessimal upper
bound of Theorem 2, since the more precise the explanations
θ that agent B gives for his rejections, the fewer
assertions agent A needs. Indeed, the maxim of Quantity
(i) directs agent B towards such precise explanations
θ. We leave constructing precise interpolants
θ for later study.
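As a small illustration (with convictions chosen by us for the purpose, not taken from the paper), the following snippet computes the quantity bounding the loop in Theorem 2 for the woodpecker vocabulary: with C(A) = {p} and C(B) = {(p ∨ s) → (¬q ∧ ¬r)} it evaluates to 3, and indeed Conversation 2 needed only two rejected assertions.

from itertools import product

VOC_ALL   = ["p", "q", "r", "s"]
VOC_TOPIC = ["p", "q", "r"]                      # Voc(phi) for phi = p & q & r

conv_A = lambda w: w["p"]                        # toy conviction for Amy
conv_B = lambda w: (not (w["p"] or w["s"])) or (not w["q"] and not w["r"])

restricted = {tuple(w[x] for x in VOC_TOPIC)     # w restricted to Voc(phi)
              for vals in product([False, True], repeat=len(VOC_ALL))
              for w in [dict(zip(VOC_ALL, vals))]
              if conv_A(w) and not conv_B(w)}    # w in Mod(C(A)) \ Mod(C(B))
print(len(restricted))                           # prints 3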
5 CONCLUSIONS AND FUTURE
WORK
We considered dialogues as a preparatory phase for
belief revision and we presented a dialog protocol for
resolving conflicts resulting from unbelievable asser-
tions. Depending on the result of the dialogue, the
agents either have found out that their convictions are
in conflict with each other, or they have found a for-
mula that neither of them finds unbelievable. During
the dialogue, the unbelievable assertions do not cause
the rebutting agent to change his epistemic state, but
the asserting agent might (or might not) change her
epistemic state due to the rebuttals. In general, we al-
low the agents to change their epistemic states during
the dialog.
Our protocol can terminate successfully even
when there is a conflict, since it may never surface
during the protocol. Suppose for instance that A asserts
some a ∨ b, where A can believe the first disjunct
but not the second disjunct, and vice versa for B. We
leave such pseudoagreements to further study, since
avoiding them would require continuing the conver-
sation further even after finding this first mutually be-
lievable formula.
The work can be extended in several directions,
such as adding the possibility to extend the topic
with new literals, adding the possibility to agree to
restrict the topic, adding new utterance types to the
agents (for instance, for making the protocol symmet-
ric), and considering more expressive languages as is
done e.g. in cooperative query answering (Gaaster-
land et al., 1992).
REFERENCES
Alchourrón, C. E., Gärdenfors, P., and Makinson, D.
(1985). On the logic of theory change: Partial meet
contraction and revision functions. The Journal of
Symbolic Logic, 50(2):510–530.
Booth, R. (2006). Social contraction and belief negotiation.
Information Fusion, 7:19–34.
Darwiche, A. and Pearl, J. (1997). On the logic of iterated
belief revision. Artificial Intelligence, 89(1-2):1–29.
Eloranta, S., Hakli, R., Niinivaara, O., and Nykänen,
M. (2008). Accommodative belief revision. In
Hölldobler, S., Lutz, C., and Wansing, H., editors,
11th European Conference on Logics in Artificial In-
telligence (JELIA 2008), number 5293 in Lecture
Notes in Artificial Intelligence (LNAI), pages 180–
191. Springer.
Gaasterland, T., Godfrey, P., and Minker, J. (1992). An
overview of cooperative answering. Journal of Intel-
ligent Information Systems, 1:123–157.
Grice, P. (1989). Studies in the Way of Words. Harvard
University Press.
Hansson, S. O. (1991). Belief contraction without recovery.
Studia Logica, 50:251–260.
Hansson, S. O. (1999). A survey of non-prioritized belief
revision. Erkenntnis, 50(2-3):413–427.
Hintikka, J. (1962). Knowledge and Belief. Cornell Univer-
sity Press.
Hintikka, J. and Halonen, I. (1999). Interpola-
tion as explanation. Philosophy of Science, 66
(Proceedings):S414–S423.
Jin, Y., Thielscher, M., and Zhang, D. (2007). Mutual belief
revision: semantics and computation. In Proceedings
of the 22nd national conference on Artificial Intelli-
gence (AAAI’07), pages 440–445. AAAI Press.
Kourosias, G. and Makinson, D. (2007). Parallel interpo-
lation, splitting, and relevance in belief change. The
Journal of Symbolic Logic, 72(3):994–1002.
Makinson, D. (2009). Propositional relevance through
letter-sharing. Journal of Applied Logic, pages 377–
387.
Negri, S. and von Plato, J. (2001). Structural Proof Theory.
Cambridge University Press.
Nykänen, M., Eloranta, S., Niinivaara, O., and Hakli, R.
(2011). Cooperative replies to unbelievable assertions:
A dialogue protocol based on logical interpolation.
Technical Report C-2011-1, Department Of Computer
Science, University of Helsinki, Finland. Available at
http://www.cs.helsinki.fi/group/protean/crua/.
Parikh, R. (1999). Beliefs, belief revision, and splitting
languages. In Moss, L., Ginzburg, J., and de Rijke,
M., editors, Logic, Language, and Computation, vol-
ume 2, pages 266–278. CSLI Publications.
Parsons, S., Wooldridge, M., and Amgoud, L. (2003). Prop-
erties and complexity of some formal inter-agent dia-
logues. Journal of Logic and Computation, 13:347–
376.
Reiter, R. (1988). On integrity constraints. In 2nd Con-
ference on Theoretical Aspects of Reasoning about
Knowledge, pages 97–111. Morgan Kaufmann.
Sperber, D. and Wilson, D. (2004). Relevance theory. In
The Handbook of Pragmatics, pages 607–632. Black-
well, Oxford.
van Veenen, J. and Prakken, H. (2006). A protocol for
arguing about rejections in negotiation. In Parsons,
S., Maudet, N., Moraitis, P., and Rahwan, I., editors,
Second International Workshop on Argumentation in
Multi-Agent Systems (ArgMAS 2005), number 4049 in
Lecture Notes in Artificial Intelligence (LNAI), pages
138–153. Springer.
Walton, D. N. and Krabbe, E. C. W. (1995). Commitment in
Dialogue: Basic Commitments in Interpersonal Dia-
logue. SUNY series in logic and language. State Uni-
versity of New York Press.