Modelling Debates on the Computer
Mare Koit¹ and Haldur Õim²
¹Institute of Computer Science, University of Tartu, Liivi 2, Tartu, Estonia
²Institute of Estonian and General Linguistics, University of Tartu, Ülikooli 18, Tartu, Estonia
Keywords: Debate, Argument, Dialogue Model, Knowledge Representation, Human-Human Dialogue.
Abstract: In the paper, a model of debate is developed which includes a model of argument. When starting the interaction, the participants have opposite communicative goals. They exchange arguments and counter-arguments, and one of them finally has to abandon his or her initial communicative goal, i.e. one participant ‘wins’ and the other ‘loses’. An analysis of a human-human dialogue corpus is carried out in order to evaluate the suitability of the model for describing natural debates. A computer implementation is introduced. The notions of negotiation, debate and argument are discussed.
1 INTRODUCTION
Negotiation is a process where each party tries to gain an advantage for themselves by the end of the process (Čulo and Skendrović, 2012). The aim of negotiation is to reach a compromise. Debate is a negotiation between teams or individuals in which they express different opinions about something. The participants of a debate have conflicting interests and exchange arguments in order to influence the partners’ mental states. An argument consists of two or more assertions where the last one is an inference (claim) and the preceding assertions are presuppositions (Martinich, 1995).
Many researchers have modelled negotiation on the computer and investigated the formalization of argument (see the overviews by Chesñevar et al., 2000; Amgoud and Cayrol, 2002; Besnard and Hunter, 2008; Scheuer et al., 2010). Logical models of argument support decision making by participants, guide negotiation and allow agreements to be reached.
We study interactions where the participants can have conflicting interests. One participant, A, has the communicative goal that his partner, B, will decide to perform an action D. The goal of the partner B is the opposite: not to do D. In the course of the interaction, A tries to influence B’s reasoning processes in such a way that B nevertheless decides to do D (doing D can be a part of the object of negotiation). One way for A to do so is to propose arguments that show B which pleasant, useful, and other positive aspects D or its consequences will have for B. We have worked out a dialogue model which includes a reasoning model as its part (Koit and Õim, 2014). Our reasoning model is based on studies of the common-sense conception of how the human mind works in such situations; we suppose that in communication people start, as a rule, from this conception, not from any consciously chosen scientific one. The reasoning model includes principles which represent the interaction relations between different aspects of the action under consideration and the causal connections between those aspects and the decision taken.
In the current paper, we further develop the model and present a formal model of debate where the participants exchange arguments for and against doing D. They can also ask and answer questions in order to choose arguments for averting the partner’s counter-arguments.
The rest of the paper is structured as follows. Section 2 gives an overview of related work. Section 3 introduces our model of debate. Section 4 presents the results of an analysis of human-human debates, carried out in order to justify the model. Section 5 discusses some questions related to the concepts of debate, negotiation and argument in human-human interaction and in our computer model from a somewhat more general point of view. Conclusions are drawn in Section 6.
2 RELATED WORK
The main sources of inspiration for this paper have been the following studies.
Dung (1995) introduces an argumentation system as a pair <A, R> where A is a (finite) set of arguments and R is an attack relation between arguments (R ⊆ A × A).
Wagner (1998) discusses the basic concepts of argumentation and defines an argument in favour of an assertion G in the form

<{f1, …, fm}, <r1, …, rn>>

consisting of a set of strict and defeasible assertions {f1, …, fm}, also called base assertions or facts, and a sequence of instantiated rules r1, …, rn, where rn is called the top rule and the conclusion of rn is G. He considers artificial agents as transition systems participating in disputes and negotiations.
Karacapilidis and Papadias (2001) implement an
argumentation system on the computer for
collaborative decision making through debates and
negotiations.
Rahwan et al. (2004) discuss three approaches to automated negotiation: game-theoretic, heuristic-based and argumentation-based. A dialogue game is
a rule-based structure for conversation where
arguments are exchanged between two participants
reasoning together on a turn-taking basis aimed at a
collective goal (Yuan et al., 2008). Heuristic
methods offer approximations to the decisions made
by participants. Agents exchange proposals (i.e.
potential agreements or potential deals). Both game-
theoretic and heuristic approaches assume that
agents’ utilities or preferences are fixed. One agent
cannot directly influence another agent’s preference
model, or any of its internal mental attitudes (e.g.,
beliefs, desires, goals, etc.) that generate its
preference model. A rational agent only modifies its
preferences if it receives new information.
Argumentation-based approaches to negotiation
allow agents to ‘argue’ about their beliefs and other
mental attitudes during the negotiation process. In
negotiation, argument can be considered as a piece
of information that may allow an agent to: (a) justify
its negotiation state; or (b) influence another agent’s
negotiation state (Jennings et al., 1998). Thus, in
addition to accepting or rejecting a proposal, an
agent can offer a critique of it.
Scheuer et al. (2010) consider how argumentation has been taught to students using computer-based systems. Argumentation systems can be beneficial for students. Still, both on the technology side and on the educational psychology side, a number of research challenges remain that need to be addressed in order to make real progress in understanding how to design, implement, and use educational argumentation software.
Besnard and Hunter (2008) formalize
argumentation by using classical logic and define an
argument as a pair <, > where is a set of
formulas (a subset of the knowledge base) and is a
formula such that
1. is not contradictory
2. implies
3. is a minimal subset of the knowledge base
which satisfies 2.
If <, > is an argument, it is said that it is an
argument for and it is also said that is a support
for . Here is called the claim of the argument.
Rahwan and Larson (2011) explore the
relationships between mechanism design and formal
logic, particularly in the design of logical inference
procedures when knowledge is shared among
multiple participants.
Hadjinikolis et al. (2012) provide an argumentation-based framework for the persuasion dialogues that an agent may undertake in a dialogue game, using a logical conception of arguments and the agent’s model of its opponents.
Our main aim is to model argumentation in agreement negotiation processes. Because of this, as said above, we consider it a critically important subtask to model the reasoning processes that people supposedly go through when working out a decision on whether to perform an action (Koit and Õim, 2014). People construct folk theories, or naïve theories, for the important fields of their experience, and they rely on these theories when acting within these domains. The theories include knowledge, belief and image structures concerning the corresponding domains, but also certain principles and rules that form the basis of operating with these mental structures.
3 COMPONENTS OF DEBATE
Here we introduce our dialogue model and apply it to debates where exchanging arguments and counter-arguments is an important part. Compared to the models reviewed in the preceding section, two distinctive features of our dialogue model should be pointed out. First, it includes a model of human reasoning. Second, the concepts of communicative strategies and tactics are introduced.
KEOD2014-InternationalConferenceonKnowledgeEngineeringandOntologyDevelopment
362
3.1 Dialogue Model
Let us consider a dialogue in a natural language between two participants (humans or artificial agents), A and B (see Koit and Õim, 2014). Let the communicative goal of A be “B makes a decision to do an action D”. A has a partner model – an image of B which gives him reason to believe that B will agree to do the action. In constructing his first turn, A must plan the dialogue acts (e.g. proposal, request, question, proposal together with an argument, etc., depending on his picture of B) and determine their verbal form (i.e. utterances). The partner B interprets A’s turn and, before generating her response, triggers a reasoning procedure in her mind in order to make a decision – to do D or not. In the reasoning process, B weighs her resources for doing D as well as the positive and negative aspects of doing D and its consequences, and finally makes a decision. Then she in her turn plans the dialogue acts (e.g. agreement, refusal, refusal with argument, etc.) and their verbal form in order to inform A about her decision. If B agrees to do D, the dialogue finishes (A has reached his communicative goal). If B’s response is a refusal, then A must change his partner model (it did not correspond to reality, because A supposed that B would agree to do D) and find new arguments in order to convince B to make a positive decision.
B’s refusal can be supported with arguments. These (counter-)arguments are used by A as information about the reasoning process that brought B to the (negative) decision.
3.1.1 Reasoning Model
Our reasoning model is presented in (Koit and Õim, 2014). In general, it follows the ideas realized in the well-known BDI (belief-desire-intention) model, but it has a particular feature – we want to model a ‘naïve’ theory of reasoning that people use when they interact with other people, trying to predict and influence their decisions.
The reasoning model consists of two parts: (1) a model of the human motivational sphere; (2) reasoning procedures. In the motivational sphere, three basic factors are differentiated that regulate the reasoning of a subject concerning an action D. First, a subject may wish to do D if the pleasant aspects of D for him/her outweigh the unpleasant ones; secondly, a subject may find it reasonable to do D if D is needed to reach some higher goal and the useful aspects of D outweigh the harmful ones; and thirdly, a subject can be in a situation where s/he must (is obliged to) do D – if not doing D will lead to some kind of punishment. We call these factors the WISH-, NEEDED- and MUST-determinants, respectively.
We represent the model of the motivational sphere of a subject by the following vector of ‘weights’ (with numerical values of its components): w = (w(resources), w(pleasant), w(unpleasant), w(useful), w(harmful), w(obligatory), w(prohibited), w(punishment-do), w(punishment-not)). In this description, w(pleasant), etc. denote the weights of the pleasant, etc. aspects of D; w(punishment-do) is the weight of the punishment for doing D if it is prohibited, and w(punishment-not) the weight of the punishment for not doing D if it is obligatory. Here w(resources) = 1 if the subject has the resources necessary to do D (otherwise 0); w(obligatory) = 1 if D is obligatory for the reasoning subject (otherwise 0); and w(prohibited) = 1 if D is prohibited (otherwise 0). The values of the other weights can be non-negative natural numbers.
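For concreteness, the weight vector can be rendered as the following data structure; this is a minimal Python sketch of ours, not the implementation described in Section 3.2, and the default values are arbitrary:

from dataclasses import dataclass

@dataclass
class WeightVector:
    """Model of a subject's motivational sphere with respect to an action D.
    resources, obligatory and prohibited are 0/1 flags; the remaining
    weights are non-negative integers, as described in the text."""
    resources: int = 0       # w(resources): 1 if the subject can do D
    pleasant: int = 0        # w(pleasant): weight of the pleasant aspects of D
    unpleasant: int = 0      # w(unpleasant)
    useful: int = 0          # w(useful)
    harmful: int = 0         # w(harmful)
    obligatory: int = 0      # w(obligatory): 1 if D is obligatory
    prohibited: int = 0      # w(prohibited): 1 if D is prohibited
    punishment_do: int = 0   # w(punishment-do): punishment for doing a prohibited D
    punishment_not: int = 0  # w(punishment-not): punishment for not doing an obligatory D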
The second part of the reasoning model consists of reasoning procedures that supposedly regulate human action-oriented reasoning. Every reasoning procedure represents the steps that the subject goes through in his/her reasoning process; these consist in computing and comparing the weights of different aspects of D, and the result is the decision to do D or not.
The reasoning procedure depends on the determinant which triggers it (in our model, WISH, NEEDED or MUST). As an example, consider the reasoning procedure triggered by the NEEDED-determinant, that is, when the subject believes that it would be useful to do D (the decision tree in Fig. 1). The NEEDED-determinant gets activated when a reasoning subject finds that the action D itself or some of its consequences would be useful to him/her, i.e. w(useful) > w(harmful).
Figure 1: Reasoning procedure NEEDED.
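Since the decision tree of Figure 1 appears only as a figure, the following Python sketch (reusing the WeightVector above) should be read as one plausible reconstruction of a NEEDED-triggered procedure from the description in the text; the branching of the published procedure may differ:

def reasoning_needed(w: WeightVector) -> bool:
    """A plausible NEEDED-triggered procedure (reconstruction, not the
    published decision tree): triggered when w.useful > w.harmful; the
    subject then checks resources and prohibition before deciding."""
    if w.useful <= w.harmful:        # determinant not activated
        return False
    if not w.resources:              # no resources for doing D
        return False
    if w.prohibited:                 # weigh the punishment for doing a prohibited D
        return w.pleasant + w.useful > w.unpleasant + w.harmful + w.punishment_do
    return w.pleasant + w.useful > w.unpleasant + w.harmful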
The vector wAB (A’s beliefs concerning B’s evaluations, where B denotes the communication partner) is used as the partner model, while the vector wB – the model of B herself – represents B’s actual evaluations of D’s aspects (whose exact values A does not know).
ModellingDebatesontheComputer
363
3.1.2 Communicative Strategies and Tactics
A communicative strategy is an algorithm used by a participant for achieving his/her goal in the interaction. The initiator (participant A) can realize his communicative strategy in different ways: stress the pleasant aspects of D (i.e. entice the partner B), stress the usefulness of D for B (i.e. persuade B), stress the punishment for not doing D if it is obligatory (threaten B), etc. These concrete ways of realizing a communicative strategy we call communicative tactics. A, trying to direct B’s reasoning to the positive decision (to do D), proposes arguments for doing D, while B, when opposing, proposes counter-arguments, i.e. arguments for not doing D (Koit and Õim, 2014).
The simplest tactics which A can use is so-called defence. In this case, A does not especially stress any positive aspects of D but only averts (downgrades) the counter-arguments presented by B. For example, in the following dialogue excerpt, B repeatedly points to missing resources while A tries to indicate that the resources can be obtained (Koit et al., 2009):
A: Please prepare a potato salad for the party.
B: I do not have enough time.
A: I will help you.
B: My mother is waiting for me.
A: Call home.
Every tactics implemented by A has its ‘title’ aspect: pleasantness for enticing, usefulness for persuading, and punishment for not doing D for threatening. A attempts to bring out arguments stressing the title aspect of the chosen tactics (which corresponds to the reasoning procedure that A is trying to trigger in B).
The simplest tactics for B is refusal without any argument.
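The correspondence between tactics, title aspects and triggered reasoning procedures can be summarized as a small table; the following sketch (with names of our choosing, not the authors’ code) records it:

# Illustrative mapping: each communicative tactics of A stresses one
# 'title' aspect and tries to trigger one reasoning procedure in B.
TACTICS = {
    "entice":   {"title_aspect": "increase_pleasantness", "procedure": "WISH"},
    "persuade": {"title_aspect": "increase_usefulness",   "procedure": "NEEDED"},
    "threaten": {"title_aspect": "increase_punishment_of_not_doing_D",
                 "procedure": "MUST"},
}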
3.1.3 Knowledge Base
The knowledge base of the interaction participant (agent) A includes (1) reasoning algorithms, (2) communicative strategies and tactics, (3) the partner model wAB, (4) a list of dialogue acts which A can use (proposal, question, assertion, etc.), and (5) a list of utterances which he can use for verbalizing the dialogue acts. We suppose here that there is a list of ready-made utterances (sentences in natural language) which can be used in the interaction. No morphological or syntactic analysis or generation of texts is performed by an agent. Semantic analysis and generation are simplified by classifying all the utterances. For example, there are utterances informing the partner about the communicative goal, i.e. expressing such dialogue acts as proposal, request, etc. (Please prepare a potato salad), sentences stressing/downgrading the pleasant/unpleasant/useful etc. aspects of the action (I will help you; Cutting potato is pleasant with my good knife, etc.), affirming sentences (OK; I agree), etc.
It is important to mention that every utterance has its own (in our model, numerical) weight – some of them ‘weigh’ more than others. The weights depend on the interaction participants (A, B) and also on the action D. For example, the sentence More than ten guests will participate in the party, used for stressing the pleasantness of D, can have the weight 1 for one partner B1 and 10 for another B2.
The knowledge base of B includes similar knowledge; the only difference is that it contains wB (the model of B herself) instead of the partner model wAB.
We suppose that every utterance can be used by a participant only once in an interaction. Therefore, if no utterances remain for A to stress, e.g., the pleasantness of D (the title aspect of enticing), then A has to choose new tactics instead of enticing, or abandon his communicative goal.
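As an illustration of such a knowledge base, here is a minimal sketch (our assumption about a possible layout, reusing the WeightVector above and utterances from the examples); every utterance carries a participant-specific weight and may be used only once:

# Sketch of A's knowledge base: the partner model plus semantically
# classified utterance sets with numerical weights (illustrative values).
knowledge_base_A = {
    "partner_model": WeightVector(pleasant=2, useful=1),
    "utterances": {
        "increase_resources": {"I will help you.": 2, "Call home.": 1},
        "increase_pleasantness": {
            "More than ten guests will participate in the party.": 10,
            "Cutting potato is pleasant with my good knife.": 1,
        },
    },
}

def pop_utterance(kb: dict, aspect: str):
    """Pick an unused utterance stressing the given aspect. Each utterance
    may be used only once, so it is removed from the pool; returns
    (None, 0) when the pool for this aspect is exhausted."""
    pool = kb["utterances"].get(aspect, {})
    if not pool:
        return None, 0
    text = next(iter(pool))
    return text, pool.pop(text)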
3.1.4 Argument Structure
When negotiating, A and B exchange arguments.
The general structure of A’s argument is as follows
(cf. Besnard and Hunter, 2008):
<{R, T, wAB^i, propositionA}, claimA>,

where
R is the reasoning algorithm which A is trying to trigger in B,
T is the communicative tactics used,
wAB^i = (wAB^i(resources), wAB^i(pleasant), wAB^i(unpleasant), wAB^i(useful), wAB^i(harmful), wAB^i(obligatory), wAB^i(prohibited), wAB^i(punishment-do), wAB^i(punishment-not)) is the current partner model (at time i),
propositionA denotes the utterance chosen by A in order to influence one of the weights in the partner model, after which R will supposedly yield B’s positive decision (do D) on the changed model; its weight is w(propositionA),
claimA = “B decides to do D“.
Many different propositions can be used in an
argument (not only a single one).
The propositionA chosen by A in the interaction yields a new partner model wAB^(i+1) (at time i+1):

if propositionA ∈ P_increase_resources, then wAB^(i+1)(resources) := 1;
if propositionA ∈ P_increase_pleasantness, then wAB^(i+1)(pleasant) := wAB^i(pleasant) + w(propositionA);
if propositionA ∈ P_increase_usefulness, then wAB^(i+1)(useful) := wAB^i(useful) + w(propositionA);
if propositionA ∈ P_decrease_unpleasantness, then wAB^(i+1)(unpleasant) := wAB^i(unpleasant) - w(propositionA);
if propositionA ∈ P_decrease_harmfulness, then wAB^(i+1)(harmful) := wAB^i(harmful) - w(propositionA);
if D is obligatory for B and propositionA ∈ P_increase_punishment_of_not_doing_D, then wAB^(i+1)(punishment-not) := wAB^i(punishment-not) + w(propositionA);
if D is prohibited for B and propositionA ∈ P_decrease_punishment_of_doing_D, then wAB^(i+1)(punishment-do) := wAB^i(punishment-do) - w(propositionA).
Here P_increase_resources denotes the set of propositions (utterances) that can be used by A for indicating to B that there exist resources for doing D; P_increase_pleasantness denotes the set of utterances for increasing the pleasantness of D, etc.
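These update rules translate directly into code; the following sketch reuses the WeightVector above, with the aspect labels standing in for our names of the proposition sets P:

def update_partner_model(w: WeightVector, aspect: str, weight: int) -> WeightVector:
    """Apply the update rules above: the chosen propositionA, classified by
    its aspect, changes one component of wAB^i to obtain wAB^(i+1)."""
    if aspect == "increase_resources":
        w.resources = 1
    elif aspect == "increase_pleasantness":
        w.pleasant += weight
    elif aspect == "increase_usefulness":
        w.useful += weight
    elif aspect == "decrease_unpleasantness":
        w.unpleasant -= weight
    elif aspect == "decrease_harmfulness":
        w.harmful -= weight
    elif aspect == "increase_punishment_of_not_doing_D" and w.obligatory:
        w.punishment_not += weight
    elif aspect == "decrease_punishment_of_doing_D" and w.prohibited:
        w.punishment_do -= weight
    return w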
The structure of B’s (counter-)argument is similar:

<{RB, TB, wB, propositionB}, claimB>,

where the reasoning algorithm RB gives the decision “do not do D” (claimB) on the model wB, propositionB indicates the aspect of D whose (too small or too big) value causes this decision, and TB is the current communicative tactics of B.
B’s propositionB is used by A as information for choosing his next proposition in the interaction. For example, if propositionB ∈ P_missing_resources, then the actual value of wAB^i(resources) is 0; the next utterance will be chosen by A from the set P_increase_resources (supposedly, after that wAB^(i+1)(resources) = 1 will hold), and another proposition will be chosen from the set of propositions which correspond to the title aspect of the reasoning algorithm R which A is trying to trigger in B using the communicative tactics T. In other words, A responds to the counter-argument set up by B (rebutting it), but in any case he continues his chosen tactics T by presenting the next proposition in order to stress the title aspect of T.
How will B choose her next proposition? She triggers her current reasoning procedure RB on her model wB. (Both the reasoning procedure and B’s model of herself can differ from the reasoning procedure R and the partner model wAB fixed by A.) B implements her reasoning procedure, and at the end of the procedure she is able to determine the aspect of D which brought her to the negative decision. For example, she can choose an utterance indicating the harmfulness of D, e.g. I’m afraid I can scratch my finger when cutting potato, but she can also simply say I will not do it. In the latter case, A cannot avert any counter-argument but simply has to make a choice among the utterances for stressing the title aspect of the reasoning algorithm R.
If A does not have any more utterances for increasing the value of the title aspect, then he will either (1) choose another reasoning algorithm and the corresponding communicative tactics (if any remain) or (2) give up.
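A’s behaviour in this exchange – avert the counter-argument, then continue stressing the title aspect of the current tactics – can be sketched as follows, reusing the helpers above; the control flow is our simplification, and only the missing-resources counter-argument is handled:

def next_move_A(kb: dict, counter_aspect, title_aspect: str):
    """A's next move: first avert B's counter-argument if one was given,
    then stress the title aspect of the chosen tactics. Returns the
    utterances to say, or None when the title aspect is exhausted and A
    must switch tactics or give up."""
    move = []
    if counter_aspect == "missing_resources":
        text, weight = pop_utterance(kb, "increase_resources")
        if text is not None:
            update_partner_model(kb["partner_model"], "increase_resources", weight)
            move.append(text)
    text, weight = pop_utterance(kb, title_aspect)
    if text is None:
        return None
    update_partner_model(kb["partner_model"], title_aspect, weight)
    move.append(text)
    return move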
3.2 The Structure of Debate
Let us suppose that the participants A and B have contradictory goals when starting the interaction (debate). A’s communicative goal is “B does D”; B’s goal is “B does not do D”. We suppose that both A and B have a common set of reasoning algorithms. We also suppose that both A and B can use fixed sets of dialogue acts and corresponding utterances which are classified semantically, e.g. for A: P_increase_resources for indicating that there exist resources for doing D, P_increase_pleasantness for stressing the pleasantness of D, etc., and for B: P_missing_resources, P_decrease_pleasantness, etc.
Starting a debate, A fixes (or generates) a partner model wAB and determines the communicative tactics T which he will use, i.e. he accordingly fixes a reasoning algorithm R which he will try to trigger in B’s mind. B has her own model wB. She determines a reasoning procedure RB which she will use in order to make a decision about doing D.
The general structure of a debate is as follows (the dialogue acts in parentheses can be omitted):
A: proposal (argument)
REPEAT
(
B: question
A: answer/giving information
)
B: refusal (counter-argument)
(
A: question
B: answer/giving information
)
A: argument
UNTIL finishing conditions are fulfilled
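A minimal executable rendering of this loop is sketched below, reusing the helpers defined earlier, under two simplifying assumptions of ours: A’s partner model coincides with B’s actual model wB (so A’s updates directly shift B’s evaluations), and B’s only counter-argument concerns missing resources:

def debate(kb_A: dict, title_aspect: str) -> str:
    """Top-level debate loop (illustrative sketch). A argues until B's
    reasoning procedure yields 'do D' (A wins) or A runs out of
    utterances for his tactics (A gives up, B wins)."""
    w_B = kb_A["partner_model"]            # simplifying assumption: wAB = wB
    counter = None
    while True:
        move = next_move_A(kb_A, counter, title_aspect)
        if move is None:
            return "A gives up: B 'wins'"  # finishing conditions (2)/(3)
        if reasoning_needed(w_B):          # B re-runs her reasoning procedure
            return "B decides to do D: A 'wins'"
        counter = "missing_resources" if not w_B.resources else None

# Example run (with the sketch knowledge base above extended by an
# "increase_usefulness" pool): debate(knowledge_base_A, "increase_usefulness")
# terminates with one of the two outcomes.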
Either A or B can bring about fulfilment of the finishing conditions: (1) a participant gives up despite still having utterances for expressing new arguments; (2) there are no utterances left to continue the fixed tactics and no new tactics is chosen, even though some tactics have not been implemented so far; (3) all the tactics have already been implemented and all the utterances used without achieving the communicative goal.
ModellingDebatesontheComputer
365
If B gives up, then she makes the decision to do D and A has achieved his communicative goal (A ‘wins’ and B ‘loses’). If A gives up, then he does not achieve his communicative goal and B will not do D (A ‘loses’ and B ‘wins’).
Questions are asked by the participants in order to make a choice between different utterances which can be used in the argumentation.
The described model is implemented in an experimental dialogue system (DS). The DS can play either of two roles: (1) the participant A, who influences the reasoning of the user B in order to achieve B’s decision to do an action D, or (2) the participant B, who rejects the arguments for doing the action D proposed by the user A. In the first case, the DS does not deviate from the fixed communicative tactics but follows them in a systematic way. In the second case, the DS does not deviate from the selected reasoning procedure. Ready-made Estonian sentences are used both by the DS and by the user.
4 HUMAN-HUMAN DEBATES
In order to perform a preliminary evaluation of our model on natural dialogues, we carried out an analysis of human-human debates. Let us consider two examples from the Estonian dialogue corpus. The first example is a call by a salesman (A) of the magazine Food to a potential subscriber (B). The second example is a call by a sales clerk (A) of an educational company who is proposing training courses to a customer (B). The transcription conventions of Conversation Analysis are used in the examples.
Example 1. A presents different arguments in one turn, attempting to indicate that the magazine is interesting/useful for the customer. B asks a question in order to make a decision about subscription (which is here the action D).
A: /---/
ta on selline ´elu´stiili ´ajakiri.
(it is such a lifestyle magazine) – propositionA1
et ei ole ´ainult need ret´septid, vaid seal on ka igasugust ´muud lugemist.
(not only recipes are presented but also various other reading) – propositionA2
.hhhhhhh uued ´tooted mis tulevad ´müüki, (0.6) siis ´hoiate ´ültse jah ja noh ´kursis uute ´trendidega söögi ja köögi ´maailmas.
(new products come on sale, and you keep abreast of new trends in the world of food and kitchen) – propositionA3
(0.5) ´kõik nagu ikka puudutab ´kööki seal.
(it is all related to the kitchen) – propositionA4
/---/
B: kas see on enamvähem ´samasugune ajakiri nagu see ´Oma Maitse=vä.
(is it more or less the same kind of magazine as Own Taste) – question
/---/
Example 2. Here the customer B presents several counter-arguments against the proposed training courses (asserting that the educational company is not able to teach what the customer needs). The sales clerk A asks a question, and thanks to B’s answer he succeeds in choosing a new argument – he indicates that the company does have the competence the customer supposed to be missing.
/---/
B: aga jah ei mul on see läbi ´vaadatud=ja (.) ´kahjuks ma pean ütlema=et (.) et ´teie (.) seda meile (.) ´ei suuda ´õpetada (.) mida (.) ´mina: (.) tahan.
(but I have looked through your catalogue and, unfortunately, I have to say that you can’t teach what is needed for us) – propositionB
/---/
A: .h ja mida kon´kreetselt=ee ´teie tahate.
(and what exactly do you want) – question
(0.8) mida te ´silmas ´peate.
(what do you have in mind) – question
B: noo (0.2) ´meie (.) äri´tegevus on (.) ´ehitamine.
(well, our business is house-building) – answer
/---/
A: nüüd kas (0.2) näiteks (0.5) ´lepingute ´saamisel (0.5) mt ee ´tegelete te ka: läbi´rääkimistega.
(do you need to carry out negotiations in order to get agreements) – question
B: noo ikka.
(yes, sure) – answer
(0.8)
A: mt et see=on ka üks ´valdkond mida me: (0.2) ´käsitleme.
(but that is also one of the fields which we cover) – propositionA
/---/
The corpus analysis confirms our opinion that the introduced model is, in general lines, suitable for describing debate; no further formalisation of the human-human dialogues has been carried out so far.
5 DISCUSSION
As said in the Introduction, we are interested in dialogues where the participants have conflicting goals and exchange arguments to further or defend their standpoints during the interaction. We consider this type of interaction a kind of debate. In Section 3 a formal model of debate was presented. Here we would like to place this treatment in a more general context by explaining our understanding of the relationships between the concepts of negotiation, debate and argumentation (and some other concepts) as used in the paper.
Of the three types of (verbal) interaction named above, argumentation – as a process of exchanging certain types of assertions for or against some standpoint, decision, etc. – is surely the most neutral one. To introduce a still more general concept: a simple discussion of some topic can also take the form of exchanging arguments. Participants of a discussion hold and defend their views but are open to learning and accepting alternative views; in a prototypical discussion there are no winners and losers. At the same time, in a discussion, as in every argumentative communication event, the participants must reason, i.e. make use of their reasoning model, have and monitor model(s) of the partner(s), use certain communicative strategies and tactics based on these models, etc.
In the same sense, argumentation constitutes a necessary part of negotiations and debates. But there is a critical difference compared with discussions in the above sense. The origin of this difference lies in the motivational sphere of the participants and their communicative goals: these dictate the ways in which the reasoning processes in every participant are directed to construct suitable arguments, communicative strategies and tactics.
Both in negotiation and in debate there are clearly fixed ‘sides’ with different goals regarding the outcome of the communicative event. But negotiation covers much more divergent possible variants than debate. The main uniting feature of all variants of negotiation is that the participants start the communicative event with the ultimate aim of reaching an agreement which (at least in theory) is seen as a compromise, that is, all sides are ready to accept some losses. However, the ways of reaching this aim (strategies, tactics) can be quite different for different types of negotiation.
Debates, on the other hand, are adversarial events from the start: the participants have conflicting goals, and the aim of each participant is to promote his or her goal only. It is first of all this feature of debates because of which we chose ‘debate’ as the cover term for the type of communicative events we were analyzing.
Let us stress that this characteristic of debate does not free its participants from the need to carry out active reasoning and to ‘work’ with the partner model during the event. This is well illustrated by Example 2 in the previous section. But since these processes are focused on promoting the participant’s own goals, without the additional task of reaching a compromise, the choices between different strategies, tactics and even concrete dialogue acts are less restricted; in this sense, the task of building a computer model of debate is easier than doing it for negotiations in general.
At the same time, proceeding from such a model of debate to a general model of negotiation requires only the elaboration of the acceptable communicative strategies and tactics, and of the underlying reasoning procedures used by the participants. Of course, the ontological, domain-specific aspects of treating the corresponding problems – e.g. what can be considered a compromise in a concrete situation – become more important accordingly.
6 CONCLUSION AND FUTURE WORK
We study interactions where one participant, A, has the communicative goal that his partner B will make the decision “do an action D”. B’s goal, on the contrary, is “do not do D”. When debating, A tries to influence the partner’s reasoning processes in such a way that B will abandon her initial goal and decide to do D.
We introduced a model of debate which includes the exchange of arguments and counter-arguments. A model of an argument (counter-argument) is presented which consists of a partner model (or, respectively, B’s model of herself), a reasoning procedure which A tries to trigger in B (or which B is implementing), communicative tactics, and (a set of) proposition(s) (utterances) which together would bring B to the conclusion “do D” (or, for B, respectively, “do not do D”). The conclusion (the decision about doing D) is interpreted as the claim in the structure of an argument (counter-argument).
We evaluated our model on actual human-human debates taken from a dialogue corpus. The corpus study gives us reason to believe that the introduced model can be used for the analysis and modelling of human-human dialogues.
The natural way to proceed in developing the conceptual abilities of our model is to elaborate it – for certain ontological domains – to also cover negotiation dialogues where participants try to reach a compromise between their initially opposite communicative goals.
ACKNOWLEDGEMENTS
This work was supported by the European Regional
Development Fund through the Estonian Centre of
Excellence in Computer Science (EXCS) and the
Estonian Research Council (grants IUT20-56,
ETF9124, and ETF8558).
REFERENCES
Amgoud, L., Cayrol, C., 2002. A Reasoning Model Based
on the Production of Acceptable Arguments. In Ann.
Math. Artif. Intell. 34(1-3): 197–215.
Besnard, P., Hunter, A., 2008. Elements of Argumentation,
MIT Press, Cambridge, MA.
Chesñevar, C., Maguitman, A., Loui, R., 2000. Logical
Models of Argument. In ACM Computing Surveys,
32(4), 337–383.
Čulo, K., Skendrović, V., 2012. Communication in the
Process of Negotiation. In INFORMATOL, 45(4),
323–327.
Dung, P.M., 1995. On the Acceptability of Arguments and
its Fundamental Role in Nonmonotonic Reasoning,
Logic Programming and n-Person Games. In Artif.
Intell. 77(2): 321–358.
Hadjinikolis, C., Modgil, S., Black, E., McBurney, P.,
Luck, M., 2012. Investigating Strategic Considerations
in Persuasion Dialogue Games. In STAIRS, 137–148.
Jennings, N. R., Parsons, S., Noriega, P., Sierra, C., 1998.
On Argumentation-Based Negotiation. In Proc. of the
International Workshop on Multi-Agent Systems,
Boston, 1–7.
Karacapilidis, N., Papadias, D., 2001. Computer
Supported Argumentation and Collaborative Decision
Making: the Hermes System. In Information Systems,
26(4), 259–277.
Koit, M., Õim, H., 2014. A Computational Model of
Argumentation in Agreement Negotiation Processes.
In Argument & Computation, 5 (2-3), 209–236, Taylor
& Francis Online.
Koit, M., Roosmaa, T., Õim, H., 2009. Knowledge
Representation for Human-Machine Interaction. In
Proc. of the International Conference on Knowledge
Engineering and Ontology Development. Jan L.G.
Dietz (Ed.), Portugal, INSTICC, 396–399.
Martinich, A. P., 1995. A Hobbes Dictionary. Blackwell.
Rahwan, I., Larson, K., 2011. Logical Mechanism Design.
In The Knowledge Engineering Review. 26(1), 61–69.
Rahwan, I., Ramchurn, S.D., Jennings, N.R., McBurney, P., Parsons, S., Sonenberg, L., 2004. Argumentation-
Based Negotiation. In The Knowledge Engineering
Review, 18(4), 343–375. Cambridge University Press.
DOI: 10.1017/S0269888904000098
Scheuer, O., Loll, F., Pinkwart, N., McLaren, B.M., 2010.
Computer-Supported Argumentation: A Review of the
State of the Art. In Computer-Supported Collaborative
Learning. DOI 10.1007/s11412-009-9080-x
Yuan, T., Moore, D., Grierson, A., 2008. A Human-Computer Dialogue System for Educational Debate: A Computational Dialectics Approach. In International Journal of Artificial Intelligence in Education, 18(1), 3–26.
Wagner, G., 1998. Foundations of Knowledge Systems
with Applications to Databases and Agents. Kluwer
Academic Publishers.
KEOD2014-InternationalConferenceonKnowledgeEngineeringandOntologyDevelopment
368