AGENT PROGRAMMING LANGUAGE WITH INCOMPLETE
KNOWLEDGE - AGENTSPEAK(I)
Duc Vo and Aditya Ghose
Decision Systems Laboratory
School of Information Technology and Computer Science
University of Wollongong
NSW 2522, Australia
Keywords:
Agent programming language, AgentSpeak, BDI agent, default theory, incomplete knowledge, replanning
Abstract:
This paper proposes an agent programming language called AgentSpeak(I). The new language allows agent
programs (1) to perform effectively while having incomplete knowledge of the environment, (2) to detect
goals that are no longer achievable and to re-plan for them accordingly, and (3) to react to changes in the
environment. Specifically, AgentSpeak(I) uses a default theory as the agent's belief theory; the agent always
acts on its preferred default extension at the current time point (i.e. the preference may change over time). A
belief change operator for default theories is also provided to allow an agent program to update its belief
theory. Like other BDI agent programming languages, AgentSpeak(I) is given a transition-system semantics.
The language appears well suited for intelligent applications and high-level robot control, which are required
to perform in highly dynamic environments.
1 INTRODUCTION
In modelling rational agents, modelling an agent's attitudes as beliefs, desires, and intentions (the BDI model) has been the best-known approach. The model was first introduced by the philosopher Michael Bratman (Bratman, 1987) in 1987 and 1988. There have been a number of formalizations and implementations of BDI agents, such as (Rao, 1996; Rao and Georgeff, 1995; Rao and Georgeff, 1991; Hindriks et al., 1999; Riemsdijk et al., 2003; Dastani et al., 2003; Wooldridge, 2000). The belief, desire, and intention attitudes of a rational agent represent the information the agent has about the environment, the motivation for what it wants to do, and, finally, the plans the agent intends to execute to achieve its desired goals. These mental attitudes are critical for achieving optimal performance when deliberation is subject to resource bounds (Bratman, 1987).
Although researchers have tackled most issues of BDI agents, from logical systems (Rao and Georgeff, 1991; Wobcke, 2002; Wooldridge, 2000) to logic programming (D'Inverno and Luck, 1998; Rao, 1996), the issue of acting with incomplete knowledge about the environment has not been addressed in BDI agent programming languages. A rational agent is expected to work in a highly dynamic environment while having incomplete knowledge about that environment. In the literature, there has been much work addressing the incompleteness of belief theories, such as (Reiter, 1980; Delgrande et al., 1994; Brewka and Eiter, 2000; Alferes et al., 1996; Meyer et al., 2001; Ghose and Goebel, 1998; MaynardReidII and Shoham, 1998; Alchourrón et al., 1985). Another problem commonly faced in open, dynamic environments is adapting to changes of the environment, both by reacting to the changes and by adjusting strategies to achieve previously adopted goals. Again, this problem has not been the focus of existing BDI agent programming languages.
In this paper, we propose a new agent programming language called AgentSpeak(I) which allows agent programs to perform effectively with incomplete knowledge about the environment, and to dynamically adapt to changes of the environment while persistently committing to their goals.
This paper has six sections. The next section describes a rescue-robot scenario in which an agent (RBot) needs to reason, act, and react with incomplete knowledge about its highly dynamic environment. This example is used throughout the subsequent sections to illustrate the presented theory. Section three discusses the definitions, syntax, and properties of agent programs in AgentSpeak(I): the agent belief theory; goals and triggers; plans; intentions; and events. Section four defines operational semantics for
AgentSpeak(I). Section five compares AgentSpeak(I) with existing agent programming languages: AgentSpeak(L), 3APL, and ConGolog. Finally, the conclusion and future research section summarizes what has been done in the paper and proposes research directions arising from this work.
2 EXAMPLE
Let us consider a rescue-robot scenario. The robot, named RBot, works in a disaster site. RBot's duty is to rescue a trapped person from some node and bring them to the safe node A. The condition of the site is dynamic and uncertain. The only definite knowledge that RBot has about the area is the map. RBot can only travel from one node to another if the path between the two nodes is clear. Because of the limitations of its sensors, RBot can only sense the conditions between its current node and adjacent nodes.
RBot has knowledge of where it is located, where there is a trapped human, the paths between nodes, whether a node is on fire, whether it is carrying a person, and finally its goal to rescue a trapped human. The basic actions of RBot are moving between nodes, picking a person up, and releasing a person. RBot can only move from one node to another node if there is a direct path between the two nodes and this path is clear. RBot can only pick a person up if it is located at the node where the person is trapped. RBot can only release a carried person at node A. The table below defines the predicate and action symbols for RBot.
at(x): The robot is at node x
clear(x, y): Path between nodes x and y is clear
path(x, y): There is a path between nodes x and y
trapped(p, x): There is a person p trapped at node x
carry(p): The robot is carrying person p
on_fire(x): Node x is on fire (i.e. dangerous for RBot)
rescue(p, x): Goal to rescue person p at node x
move(x, y): Move from node x to an adjacent node
y on an available path(x, y)
pick(p): Pick person p up
release(p): Release carried person p
In such a highly dynamic environment, to accomplish the rescue task, RBot should be able to make default assumptions when reasoning during its execution, and it should also be able to adapt to changes of the environment by modifying its plans and intentions.
3 AGENT PROGRAMS
Agent Belief Theory
In this section, we introduce the agent belief theory. Agents usually have incomplete knowledge about the world, which requires a suitably expressive belief representation language. We take the usual non-monotonic reasoning stance, insisting on the ability to represent defaults as a means for dealing with this incompleteness. In the rest of this paper we explore the consequences of using default logic (Reiter, 1980) as the belief representation language. However, most of our results would still hold if some other non-monotonic reasoning formalism were used instead.
We augment the belief representation language of AgentSpeak(L) with default rules. If p is a predicate symbol and t_1, ..., t_n are terms, then an atom is of the form p(t_1, ..., t_n), denoted p(~t) (e.g. at(x), rescue(p, x), clear(A, B)). If b_1(~t_1) and b_2(~t_2) are belief atoms, then b_1(~t_1) ∧ b_2(~t_2) and ¬b_1(~t_1) are beliefs. A set S of beliefs is said to be consistent iff S does not contain both a belief b and its negation ¬b. A belief is called ground iff all terms in the belief are ground terms (e.g. at(A), trapped(P, F), path(C, F) ∧ clear(C, F)).
Let α(~x), β(~x), and ω(~x) be beliefs. A default is of the form α(~x) : β(~x) / ω(~x), where α(~x) is called the prerequisite, β(~x) the justification, and ω(~x) the consequent of the default (Reiter, 1980).
The interpretation of a default depends on the variant of default logic being used (note that several variants exist (Reiter, 1980; Delgrande et al., 1994; Brewka and Eiter, 2000; Giordano and Martelli, 1994), exploring different intuitions). Informally, a default is interpreted as follows: "If, for some set of ground terms ~c, α(~c) is provable from what is known and β(~c) is consistent with it, then conclude by default that ω(~c)".
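For illustration only, the following Python sketch (the class and function names are ours, not part of AgentSpeak(I)) represents a ground default over string literals, with negation written as a leading "~", and checks whether the default is applicable to a given set of beliefs.

from dataclasses import dataclass

def neg(lit: str) -> str:
    # Complement of a ground literal written as a string, e.g. "clear(A,B)" <-> "~clear(A,B)".
    return lit[1:] if lit.startswith("~") else "~" + lit

@dataclass(frozen=True)
class Default:
    prerequisite: frozenset   # alpha: literals that must be provable from what is known
    justification: frozenset  # beta: literals that must be consistent with what is known
    consequent: str           # omega: the literal concluded by default

def applicable(d: Default, beliefs: set) -> bool:
    # A crude reading: alpha is "provable" if contained in the belief set, and beta is
    # consistent if none of its complements appear in the belief set.
    return d.prerequisite <= beliefs and all(neg(b) not in beliefs for b in d.justification)

# The ground instance of path(x,y) : clear(x,y) / clear(x,y) for nodes A and B:
d = Default(frozenset({"path(A,B)"}), frozenset({"clear(A,B)"}), "clear(A,B)")
print(applicable(d, {"path(A,B)"}))                   # True: clear(A,B) may be assumed
print(applicable(d, {"path(A,B)", "~clear(A,B)"}))    # False: the justification is contradicted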
A default theory is a pair (W, D), where W is a consistent set of beliefs and D is a set of default rules. The initial default theory of RBot is presented in Example 1.
Example 1 Initial default theory Δ_RBot = (W_RBot, D_RBot) of RBot:
W_RBot = { path(A, B), path(B, C), path(C, D), path(C, E), path(C, F), path(D, F), path(E, F), at(x) ∧ at(y) → x = y }
D_RBot = { :at(A) / at(A), :clear(A, B) / clear(A, B), :¬carry(p) / ¬carry(p), :trapped(P, F) / trapped(P, F), path(x, y) : clear(x, y) / clear(x, y), :¬trapped(p, y) / ¬trapped(p, y), :¬path(x, y) / ¬path(x, y), :¬clear(x, y) / ¬clear(x, y) }
Much of our discussion is independent of the specific variant being used. We only require that at least one extension exists. This is not generally true for Reiter's default logic (Reiter, 1980). However, if one restricts attention to semi-normal default theories, there is a guarantee that an extension will always
exist. We make this assumption here, i.e. belief theories must be semi-normal default theories.
Example 2 With Reiter's semantics, the default extensions of Δ_RBot would be
E_RBot1 = Cn({ at(A), path(A, B), clear(A, B), path(B, C), path(C, F), ¬carry(p), trapped(P, F), clear(B, C), ¬clear(B, D), ¬clear(D, F), clear(C, F), ... })
E_RBot2 = Cn({ at(A), path(A, B), clear(A, B), path(B, C), path(C, F), ¬carry(p), trapped(P, F), clear(B, D), clear(D, F), ¬clear(C, F), ... })
etc.
To operate, an agent program needs to commit to one extension of its default theory. An extension selection function allows the agent to select the most preferred extension, among the extensions of its default theory, for further execution. Let S_E be an extension selection function; if B = S_E(Δ), then (1) B is a default extension of Δ and (2) B is the most preferred extension of the agent at the time at which S_E is applied. In the rest of this paper, the current agent belief set will be denoted by B = S_E(Δ), given an agent belief theory B = ⟨Δ, S_E⟩.
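As a rough illustration of S_E (our own sketch, not prescribed by the language), extensions can be represented as sets of ground literals and the selection function as a maximization over a time-varying preference score:

def select_extension(extensions, preference):
    # S_E: return the most preferred extension, or None if the theory has no extension.
    # 'preference' maps an extension to a comparable score and may change between
    # deliberation cycles, so S_E is re-applied whenever the current belief set is needed.
    if not extensions:
        return None
    return max(extensions, key=preference)

# Two of the extensions from Example 2, abbreviated to a few literals each.
E1 = frozenset({"at(A)", "clear(B,C)", "clear(C,F)"})
E2 = frozenset({"at(A)", "clear(B,D)", "clear(D,F)"})
B = select_extension([E1, E2], lambda e: 1 if "clear(C,F)" in e else 0)   # prefers E1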
Belief Change Operators
Belief change is a complicated issue. There have been several well-known works on belief change, such as (Alchourrón et al., 1985; Ghose et al., 1998; Meyer et al., 2001; Darwiche and Pearl, 1997; Ghose and Goebel, 1998). In this paper, we do not discuss this issue in detail. However, for the completeness of our system, we adopt the belief change framework of (Ghose et al., 1998). We denote by ∗_g (respectively −_g) Ghose's revision (respectively contraction) operator.
When updating the agent belief theory, we assume that (1) the belief to be revised must be a consistent belief, (2) the belief to be revised must be consistent with the set of base facts of the belief theory, (3) the belief to be contracted must not be a tautology, and (4) the belief to be contracted must not be entailed by the base facts.
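These preconditions can be enforced by a thin guard placed in front of whatever revision and contraction operators are plugged in. The sketch below is only an illustration, under the simplifying assumption that beliefs are ground literals (so conditions (1) and (3) hold trivially) and that the base facts are the W component of the default theory; revise and contract stand for the underlying operators (e.g. ∗_g and −_g), which are not implemented here.

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def guarded_revise(delta, belief, revise):
    # delta = (W, D). Condition (2): the new belief must be consistent with the base facts W.
    W, _ = delta
    if neg(belief) in W:
        raise ValueError("belief to be revised contradicts the base facts")
    return revise(delta, belief)

def guarded_contract(delta, belief, contract):
    # Condition (4): a belief entailed by the base facts cannot be contracted.
    W, _ = delta
    if belief in W:
        raise ValueError("belief to be contracted is entailed by the base facts")
    return contract(delta, belief)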
Goals, Triggers, Plans and Intentions
We follow the original definitions in (Rao, 1996) to define goals and triggering events. Two types of goals are of interest: achievement goals and test goals. An achievement goal, denoted !g(~t), indicates an agent's desire to achieve a state of affairs in which g(~t) is true. A test goal, denoted ?g(~t), indicates an agent's desire to determine whether g(~t) is true relative to its current beliefs. Test goals are typically used to identify unifiers that make the test goal true, which are then used to instantiate the rest of the plan. If b(~t) is a belief and !g(~t) is an achievement goal, then +b(~t) (add a belief b(~t)), −b(~t) (remove a belief b(~t)), +!g(~t) (add an achievement goal !g(~t)), and −!g(~t) (remove the achievement goal !g(~t)) are triggering events.
An agent program includes a plan library. The original AgentSpeak (Rao, 1996) definition views a plan as a triple consisting of a trigger, a context (a set of pre-conditions that must be entailed by the current set of beliefs) and a body (consisting of a sequence of atomic actions and sub-goals). We extend this notion to distinguish between an invocation context (the pre-conditions that must hold at the time the plan is invoked) and an invariant context (conditions that must hold both at the time of plan invocation and at the invocation of every plan adopted to achieve sub-goals in the body of the plan and their sub-goals). We view both kinds of contexts as involving both hard pre-conditions (sentences that must be true relative to the current set of beliefs) and soft pre-conditions (sentences which must be consistent with the current set of beliefs). Soft pre-conditions are akin to assumptions, justifications in default rules (Reiter, 1980) or constraints in hypothetical reasoning systems (Poole, 1988).
Definition 1 A plan is a 4-tuple ⟨τ, χ, χ′, π⟩ where τ is a trigger, χ is the invocation context, χ′ is the invariant context and π is the body of the plan. Both χ and χ′ are pairs of the form (β, α), where β denotes the set of hard pre-conditions and α denotes the set of soft pre-conditions. A plan p is written as ⟨τ, χ, χ′, π⟩ where χ = (β, α) (also referred to as InvocationContext(p)), χ′ = (β′, α′) (also referred to as InvariantContext(p)), π = ⟨h_1, ..., h_n⟩ (also referred to as Body(p)) and each h_i is either an atomic action or a goal. We will also use Trigger(p) to refer to the trigger τ of plan p.
Example 3 RBot's plan library:
p_1 = ⟨+!at(y), ({at(x)}, {∅}), ({∅}, {clear(x, y)}), ⟨move(x, y)⟩⟩
p_2 = ⟨+!at(y), ({at(x), path(x, y)}, {∅}), ({∅}, {clear(x, y)}), ⟨!at(x), ?clear(x, y), move(x, y)⟩⟩
p_3 = ⟨+!rescue(p, x), ({∅}, {∅}), ({∅}, {trapped(p, x) ∨ carry(p)}), ⟨!at(x), pick(p), !at(A), release(p)⟩⟩
p_4 = ⟨+on_fire(x), ({at(x), ¬on_fire(y), path(x, y)}, {clear(x, y)}), ({∅}, {∅}), ⟨move(x, y)⟩⟩
p_5 = ⟨+trapped(p, x), ({∅}, {∅}), ({∅}, {∅}), ⟨!rescue(p, x)⟩⟩
P_RBot = {p_1, p_2, p_3, p_4, p_5}
In Example 3, plans p_1 and p_2 are RBot's strategies for getting to a specific node on the map. Plan p_3 is the strategy that lets RBot decide how to rescue a person trapped at a node. Plan p_4 is a reactive plan for RBot to get out of a node that is on fire. Plan p_5 is another reactive plan for RBot to try to rescue a person when RBot adds a new belief that a person is trapped at some node.
As in (Rao, 1996), a plan p is deemed to be a relevant plan relative to a triggering event τ_0 if and only if there exists a most general unifier σ such that τ_0 = τσ, where Trigger(p) = τ. σ is referred to as the relevant unifier for p given τ_0.
Definition 2 A plan p of the form ⟨τ, χ, χ′, π⟩ is deemed to be an applicable plan relative to a triggering event τ_0 and a current belief set B iff:
(1) There exists a relevant unifier σ for p given τ_0.
(2) There exists a substitution θ such that βσθ ∪ β′σθ ⊆ Th(B).
(3) ασθ ∪ α′σθ ∪ B is satisfiable.
σθ is referred to as the applicable unifier for τ_0, and θ is called its correct answer substitution.
Thus, a relevant plan is applicable if its hard pre-conditions (both in the invocation and invariant contexts) are entailed by the current set of beliefs and its soft pre-conditions (both in the invocation and invariant contexts) are consistent with the current set of beliefs.
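Under the simplifying assumptions that pre-conditions are ground literals, that Th(B) is approximated by membership in B, and that consistency is approximated by the absence of complementary literals, these tests can be sketched as follows (illustrative Python, not part of the language definition):

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def entailed(conds, B):
    # Hard pre-conditions: every condition must already be in the current belief set.
    return all(c in B for c in conds)

def consistent(conds, B):
    # Soft pre-conditions: no condition may be contradicted by the current belief set.
    return all(neg(c) not in B for c in conds)

def is_applicable(plan, B):
    # plan = (trigger, (beta, alpha), (beta_inv, alpha_inv), body), already instantiated.
    _, (beta, alpha), (beta_inv, alpha_inv), _ = plan
    return entailed(beta | beta_inv, B) and consistent(alpha | alpha_inv, B)

def is_executable(plan, B):
    # Definition 3: only the invariant context is re-checked once the plan has been adopted.
    _, _, (beta_inv, alpha_inv), _ = plan
    return entailed(beta_inv, B) and consistent(alpha_inv, B)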
Example 4 Plan p_3 is intended for trigger +!rescue(P, F) and agent belief set E_1, with correct answer substitution {p/P, x/F}:
(p_3)σ{p/P, x/F} = ⟨+!rescue(P, F), ({∅}, {∅}), ({∅}, {trapped(P, F) ∨ carry(P)}), ⟨!at(F), pick(P), !at(A), release(P)⟩⟩
Definition 3 An intended (partially instantiated) plan ⟨τ, χ, χ′, π⟩σθ is deemed to be executable with respect to belief set B iff
(1) β′σθ ⊆ Th(B), and
(2) α′σθ ∪ B is satisfiable.
Example 5 Plan (p_3)σ{p/P, x/F} is executable with respect to belief set E_2.
Syntactically, our plans are not much different from those in (Rao, 1996); however, the partition of the context into four parts gives the agent more flexibility when applying and executing a plan. This way of presenting a plan also gives the agent the ability to discover when things go wrong or turn out other than expected (i.e. when the invariant context is violated).
An intention is a state of intending to act, something a rational agent is going to do to achieve its goal (Bratman, 1987). Formally, an intention is defined as follows.
Definition 4 Let p_1, ..., p_n be partially instantiated plans (i.e. instantiated by some applicable substitution); an intention ι is a pre-ordered tuple ⟨p_1, ..., p_n⟩, where
(1) Trg_P(p_1) is called the original trigger of ι, denoted Trg_I(ι) = Trg_P(p_1).
(2) An intention ι = ⟨p_1, ..., p_n⟩ is said to be valid with respect to the current belief set B iff, for all i, p_i is executable with respect to B (Definition 3).
(3) An intention is said to be a true intention if it is of the form ⟨⟩ (empty). A true intention is always valid.
(4) An intention ι = ⟨p_1, ..., p_n⟩ is said to be invalid with respect to the current belief set B if it is not valid.
(5) An intention ι can also be written as ⟨ι′, p_n⟩, where ι′ = ⟨p_1, ..., p_{n−1}⟩ is also an intention.
Example 6 At node A, RBot may have an intention to rescue a trapped person at node F:
ι_1 = ⟨p_3 σ{p/P, x/F}⟩
Events
We adopt the notion of agent events from AgentSpeak(L) (Rao, 1996). An event is a special attribute of the agent's internal state; an event can be either external (i.e. originated by the environment, e.g. by users or other agents) or internal (i.e. originated by internal processes of the agent).
Definition 5 Let τ be a ground trigger and let ι be an intention; an event is a pair ⟨τ, ι⟩.
(1) An event is called an external event if its intention is a true intention; otherwise it is called an internal event.
(2) An event is valid with respect to the agent's current belief set B if its intention is valid with respect to B; otherwise it is an invalid event.
(3) Let e = ⟨τ, ι⟩ be an event; then Trg_E(e) = τ if e is external, and Trg_E(e) = Trg_I(ι) if e is internal.
Example 7 External event:
e_1 = ⟨+!rescue(P, F), ⟨⟩⟩
Internal event:
e_2 = ⟨+!at(F), ⟨p′_3⟩⟩
where
p′_3 = ⟨+!rescue(P, F), ({∅}, {∅}), ({∅}, {trapped(P, F) ∨ carry(P)}), ⟨pick(P), !at(A), release(P)⟩⟩
Corollary 1 All external events of an agent are valid with respect to its current belief set.
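The structures of Definitions 4 and 5 can be mirrored directly in code. In the sketch below (our own representation, with plan instances as tuples whose first component is the trigger), an intention is a tuple of plan instances, an event is a pair (trigger, intention), and executability is supplied as a parameter, for example the is_executable check sketched earlier.

def trg_P(plan):
    # A plan instance is a tuple (trigger, chi, chi_inv, body); its trigger is the first component.
    return plan[0]

def is_external(event):
    # Definition 5(1): an event whose intention is the true (empty) intention is external.
    _, intention = event
    return len(intention) == 0

def trg_E(event):
    # Definition 5(3): external events carry their own trigger; internal events inherit
    # the original trigger Trg_I of their intention, i.e. the trigger of its first plan.
    trigger, intention = event
    return trigger if is_external(event) else trg_P(intention[0])

def intention_valid(intention, B, executable):
    # Definition 4(2): every plan instance on the intention must still be executable w.r.t. B.
    return all(executable(p, B) for p in intention)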
4 OPERATIONAL SEMANTICS
Like other BDI agent programming languages (Rao, 1996; Hindriks et al., 1999; Levesque et al., 1997), we use a transition semantics for our system.
Informally, an agent program in AgentSpeak(I) consists of a belief theory B, a set of events E, a set of intentions I, a plan library P, and three selection functions S_E, S_P, S_I to select an event, a plan, and an intention (respectively) to process.
Definition 6 An agent program is a tuple ⟨B, P, E, I, S_P, S_E, S_I⟩ where
(1) B = ⟨Δ, S_E⟩ is the belief theory of the agent.
(2) At any time, B = S_E(Δ) denotes the current belief set of the agent.
(3) E is the set of events (including external events and internal events).
(4) P is the agent's plan repository, a library of agent plans.
(5) I is a set of intentions.
(6) S_E is a selection function which selects an event to process from the set E of events.
(7) S_P is a selection function which selects a plan applicable to a trigger τ from the set P of plans.
(8) S_I is a selection function which selects an intention to execute from the set I of intentions.
(9) S_E/S_P/S_I returns a null value if it fails to select an event/plan/intention.
Example 8 The initial agent program of RBot at node A would be
⟨B_RBot, P_RBot, {⟨+!rescue(P, F), ⟨⟩⟩}, ∅, S_E, S_P, S_I⟩
where S_E, S_P, S_I are some valid selection functions (e.g. select the first valid option).
There are two types of transitions in AgentSpeak(I): transitions that process events and transitions that execute intentions. These transitions may run in sequence or in parallel; the choice between the two depends on the specific domain.
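One possible (sequential) interleaving of the two transition types is sketched below; the configuration mirrors the tuple of Definition 6, and the two transition procedures stand for the algorithms given later in this section. This is an illustration of the design choice, not a prescribed interpreter.

from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    # The tuple <B, P, E, I, S_P, S_E, S_I> of Definition 6; the selection functions are
    # passed to the transition procedures rather than stored in the configuration.
    delta: tuple                                       # default theory (W, D)
    plans: list                                        # plan library P
    events: list = field(default_factory=list)         # event set E
    intentions: list = field(default_factory=list)     # intention set I

def agent_cycle(cfg, process_one_event, execute_one_intention):
    # A sequential deliberation cycle: one event-processing transition followed by one
    # intention-execution transition. A concurrent interpreter could run them in parallel.
    process_one_event(cfg)
    execute_one_intention(cfg)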
As Definitions 4 and 5 show, an agent intention or an event can sometimes be invalid. A good agent program should respond appropriately to such cases. We propose two functions, Rep_I and Val_E, for repairing invalid intentions and validating events when executing AgentSpeak(I) agent programs. The Rep_I function takes an invalid intention as its input and outputs an event which is valid with respect to the agent's current belief set. The Val_E function is slightly different: it takes an event as its input and outputs a valid event.
Definition 7 Let I be the set of all intentions, E the set of all events, and B the set of all belief sets. Let Rep_I : I × B → E be a repairing function which modifies an intention ι that is invalid with respect to belief set B so as to return an event ε that is valid with respect to B. The function Rep_I must satisfy the following conditions:
RI-1. If ⟨τ, ι′⟩ = Rep_I(ι, B), then (1) ι′ and ι are of the forms ⟨p_1, ..., p_k⟩ and ⟨p_1, ..., p_k, p_{k+1}, ..., p_n⟩ respectively, where k < n, and (2) ι′ is valid with respect to belief set B.
RI-2. If ⟨τ, ι′⟩ = Rep_I(ι, B), where ι′ = ⟨p_1, ..., p_k⟩ and ι = ⟨p_1, ..., p_k, p_{k+1}, ..., p_n⟩, then τ = Trg_P(p_{k+1}).
Definition 8 Let E be the set of all events and B the set of all belief sets. If B is the current agent belief set and e = ⟨τ, ι⟩ is an event, then the function Val_E : E × B → E is defined as
Val_E(e, B) = e if e is valid with respect to B, and Val_E(e, B) = Rep_I(ι, B) if e is invalid with respect to B.
When an event e = ⟨τ, ι⟩ is selected by the event selection function S_E, the Val_E function ensures that e is valid with respect to the agent's current belief set B and is ready to be processed.
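A minimal sketch of Rep_I and Val_E, under the representation used in the earlier sketches (events as (trigger, intention) pairs, plan instances as tuples whose first component is the trigger, and an executability predicate supplied by the caller):

def rep_I(intention, B, executable):
    # Rep_I (Definition 7): assuming the input intention is invalid w.r.t. B, drop plan
    # instances from the top until the remaining prefix is valid (RI-1), and return an
    # event whose trigger is that of the first dropped plan p_{k+1} (RI-2).
    plans = list(intention)
    dropped = None
    while plans and not all(executable(p, B) for p in plans):
        dropped = plans.pop()                     # remove the current top-most plan
    return (dropped[0], tuple(plans))             # (tau, iota') with tau = Trg_P(p_{k+1})

def val_E(event, B, executable):
    # Val_E (Definition 8): pass valid events through unchanged, repair invalid ones.
    _, intention = event
    if all(executable(p, B) for p in intention):
        return event
    return rep_I(intention, B, executable)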
Event Processing
The transition of an agent program to process an event depends on the event's trigger. An event trigger can be
1. Add an achievement goal +!g(~t)
2. Remove an achievement goal −!g(~t)
3. Add a belief +b(~t)
4. Remove a belief −b(~t)
In the case of adding an achievement goal +!g(~t), the agent (1) selects an applicable plan p from the plan library, (2) partially instantiates p with applicable substitution θ via unifier σ, (3) appends pσθ to the intention part of the event, and (4) adds that intention to the agent's intention set.
Definition 9 Let Val_E(S_E(E)) = e = ⟨+!g(~t), ι⟩ and let S_P(P) = p, where the partially instantiated plan p is an applicable plan relative to the triggering event +!g(~t) and the current belief set B. The event e is said to be processed iff ⟨ι, p⟩ ∈ I.
In the case of removing an achievement goal −!g(~t) (recall that g(~t) is ground), the agent (1) removes from its sets of intentions and events any plans which are triggered by the ground trigger +!g(~t), and (2) removes any sub-plans of those plans (i.e. plans which have higher priorities in the same intention).
Definition 10 Let Val_E(S_E(E)) = e where e = ⟨−!g(~t), ι⟩. e is said to be processed iff for any intention
ι ∈ I ∪ {ι′ | ⟨τ, ι′⟩ ∈ E},
if (1) ι is of the form ⟨p_1, ..., p_k, ..., p_n⟩, (2) p_k = ⟨+!g(~t), χ, χ′, π⟩ and (3) for any i < k, p_i = ⟨τ_i, χ_i, χ′_i, π_i⟩ and τ_i does not unify with +!g(~t), then ι is cut to ⟨p_1, ..., p_{k−1}⟩ (note that ι may become an empty intention).
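A sketch of the cut performed by Definition 10 on a single intention, under the simplifying assumption that the goal is ground and triggers are compared by equality rather than unification:

def cut_intention(intention, goal_trigger):
    # Drop the plan triggered by +!g(t) together with every plan adopted above it;
    # an intention may become empty as a result.
    for k, plan in enumerate(intention):
        if plan[0] == goal_trigger:               # plan[0] is the plan's trigger, e.g. "+!at(F)"
            return intention[:k]
    return intention                              # no plan for this goal: leave the intention alone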
In the case of adding a belief, the agent revises its belief theory with this belief and adopts an applicable plan from its plan library to react to this change of its beliefs (reactive characteristic).
Definition 11 Let Val_E(S_E(E)) = e where e = ⟨+b(~t), ι⟩, and let S_P(P) = p where the partially instantiated p is an applicable plan relative to the triggering event +b(~t) and current belief set B. e is said to be processed iff (1) B = B ∗_g b(~t) and (2) ⟨ι, p⟩ ∈ I.
Finally, in the case of removing a belief, the agent contracts that belief from its belief theory and adopts an applicable plan from its plan library to react to this change of its beliefs (reactive characteristic).
Definition 12 Let Val_E(S_E(E)) = e where e = ⟨−b(~t), ι⟩, and let S_P(P) = p where the partially instantiated p is an applicable plan relative to the triggering event −b(~t) and current belief set B. e is said to be processed iff (1) B = B −_g b(~t) and (2) ⟨ι, p⟩ ∈ I.
We have the following algorithm for event processing:
e = S_E(E)
if e is not null then
    E = E \ {e}
    B = S_E(Δ)
    ⟨τ, ι⟩ = Val_E(e, B)
    if τ is of the form +!g(~t) then
        p = S_P(τ, B, P)
        if p is null then
            E = E ∪ {⟨τ, ι⟩}
        else
            I = I ∪ {⟨ι, p⟩}
    else if τ is of the form −!g(~t) then
        for all ι ∈ I do
            if ι = ⟨p_1, ..., p_k, ⟨+!g(~t), ...⟩, ..., p_n⟩ then
                ι = ⟨p_1, ..., p_k⟩
        for all ⟨τ, ι⟩ ∈ E do
            if ι = ⟨p_1, ..., p_k, ⟨+!g(~t), ...⟩, ..., p_n⟩ then
                ι′ = ⟨p_1, ..., p_{k−1}⟩
                E = E \ {⟨τ, ι⟩} ∪ {⟨Trigger(p_k), ι′⟩}
    else if τ is of the form +b(~t) then
        B = B ∗_g b(~t)
        p = S_P(τ, B, P)
        if p is null then
            I = I ∪ {ι}
        else
            I = I ∪ {⟨ι, p⟩}
    else if τ is of the form −b(~t) then
        B = B −_g b(~t)
        p = S_P(τ, B, P)
        if p is null then
            I = I ∪ {ι}
        else
            I = I ∪ {⟨ι, p⟩}
In our example, at node A there is only one external event for the robot to select, which is +!rescue(P, F), to rescue P at node F. There is also only one plan intended for this event and the selected belief set E_2, namely p_3 instantiated by substitution {p/P, x/F}. Hence, an intention is generated and put into the set of intentions, which now contains
⟨⟨+!rescue(P, F), ({∅}, {∅}), ({∅}, {trapped(P, F) ∨ carry(P)}), ⟨!at(F), pick(P), !at(A), release(P)⟩⟩⟩
After this step, the set of events is empty.
Intention Execution
Executing an intention is an important transition of an AgentSpeak(I) agent program; it is the process by which the agent acts pro-actively on its environment to achieve some environmental state. An intention is executed based on the first formula in the body of the highest-priority plan of the intention. This formula can be an achievement goal, a test goal, or an action. In the case of an achievement goal, an internal event is generated. In the case of a test goal, the agent tests whether there exists a set of ground terms ~c that makes the test goal an element of the current belief set, and uses this set for the subsequent execution of the intention. Finally, in the case of an action, the agent performs that action, which may result in a change of the environmental state. These executions are formalized below.
Definition 13 Let S_I(I) = ι, where ι = ⟨ι′, ⟨τ, χ, χ′, ⟨!g(~t), h_2, ..., h_n⟩⟩⟩. The intention ι is said to be executed iff
⟨+!g(~t), ⟨ι′, ⟨τ, χ, χ′, ⟨h_2, ..., h_n⟩⟩⟩⟩ ∈ E.
Definition 14 Let S_I(I) = ι, where ι = ⟨ι′, ⟨τ, χ, χ′, ⟨?g(~t), h_2, ..., h_n⟩⟩⟩. The intention ι is said to be executed iff
(1) there exists a substitution θ such that g(~t)θ ∈ B, and
(2) ι is replaced by ι = ⟨ι′, ⟨τ, χ, χ′, ⟨h_2θ, ..., h_nθ⟩⟩⟩.
Definition 15 Let S_I(I) = ι, where ι = ⟨ι′, ⟨τ, χ, χ′, ⟨a(~t), h_2, ..., h_n⟩⟩⟩. The intention ι is said to be executed iff (1) ι is replaced by ι = ⟨ι′, ⟨τ, χ, χ′, ⟨h_2, ..., h_n⟩⟩⟩ and (2) a(~t) is sent to the agent's action processor.
Definition 16 Let S_I(I) = ι, where ι = ⟨ι′, ⟨τ, χ, χ′, ⟨⟩⟩⟩. The intention ι is said to be executed iff ι is replaced by ι′.
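The four cases of Definitions 13-16 can be summarised in one step function. The sketch below is illustrative only: it reuses the AgentConfig of the earlier sketch, and find_substitution, apply_subst and perform are hypothetical helpers standing in for test-goal unification, substitution application and the action processor, none of which are defined here.

def execute_step(cfg, intention):
    # intention = (..., p) with p = (trigger, chi, chi_inv, body); body[0] drives the step.
    *rest, plan = intention
    trigger, chi, chi_inv, body = plan
    if not body:
        # Definition 16: nothing left in the top plan, so pop it and resume the rest.
        cfg.intentions.append(tuple(rest))
        return
    head, tail = body[0], body[1:]
    reduced = tuple(rest) + ((trigger, chi, chi_inv, tail),)
    if head.startswith("!"):
        # Definition 13: achievement goal -> post an internal event carrying the reduced intention.
        cfg.events.append(("+" + head, reduced))
    elif head.startswith("?"):
        # Definition 14: test goal -> look for a substitution making it an element of the belief set.
        theta = find_substitution(head[1:], cfg)                 # hypothetical helper
        if theta is None:
            cfg.intentions.append(intention)                     # no answer: put the intention back
        else:
            cfg.intentions.append(apply_subst(reduced, theta))   # hypothetical helper
    else:
        # Definition 15: atomic action -> send it to the action processor, keep the reduced intention.
        perform(head)                                            # hypothetical action processor
        cfg.intentions.append(reduced)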
As with processing an event, before executing any intention the agent needs to verify whether the intention is still valid with respect to its current belief set B, and to repair that intention if necessary.
We have the following algorithm for intention execution:
ι = S_I(I)
if ι is not null then
    I = I \ {ι}
    B = S_E(Δ)
    if ι is invalid with respect to B then
        e = Rep_I(ι, B)
        E = E ∪ {e}
    else
        present ι as ⟨ι′, p⟩
        if Body(p) is empty then
            I = I ∪ {ι′}
        else
            present p as ⟨τ, χ, χ′, ⟨h, h_2, ..., h_n⟩⟩
            if h is an action then
                perform h
                p′ = ⟨τ, χ, χ′, ⟨h_2, ..., h_n⟩⟩
                I = I ∪ {⟨ι′, p′⟩}
            else if h is of the form !g(~t) then
                p′ = ⟨τ, χ, χ′, ⟨h_2, ..., h_n⟩⟩
                e = ⟨+h, ⟨ι′, p′⟩⟩
                E = E ∪ {e}
            else if h is of the form ?g(~t) then
                find a substitution θ such that g(~t)θ ∈ B
                if no substitution is found then
                    I = I ∪ {ι}
                else
                    p′ = ⟨τ, χ, χ′, ⟨h_2θ, ..., h_nθ⟩⟩
                    I = I ∪ {⟨ι′, p′⟩}
Again, there is now only one intention in RBot's mind to execute, and this intention is currently valid. The first element of the body of its top plan is the goal !at(F). Hence, an internal event is generated:
⟨+!at(F), ⟨p′_3⟩⟩
where
p′_3 = ⟨+!rescue(P, F), ({∅}, {∅}), ({∅}, {trapped(P, F) ∨ carry(P)}), ⟨pick(P), !at(A), release(P)⟩⟩
This event is then added to the set of events, and the original intention is removed from the set of intentions.
5 COMPARISON WITH OTHER
LANGUAGES
There have been several well-known agent programming languages, from the very first, Agent0 (Shoham, 1993), to AgentSpeak(L) (Rao, 1996), Golog/ConGolog (Levesque et al., 1997; de Giacomo et al., 2000), and 3APL (Hindriks et al., 1999) in the late 1990s. In this section, we compare AgentSpeak(I) with the latter three agent programming languages (AgentSpeak(L), ConGolog, and 3APL). The principal advantage of AgentSpeak(I) over these languages is that AgentSpeak(I) allows agents to act with incomplete knowledge about the environment. Furthermore, the other agent programming languages leave a gap in how agents propagate their own beliefs over their lifetime, which AgentSpeak(I) reasonably covers.
AgentSpeak(L): AgentSpeak(L) aims to implement BDI agents in a logic programming style. It is an attempt to bridge the gap between logical theories of BDI and implementations. Syntactically, AgentSpeak(L) and AgentSpeak(I) are similar. However, the differences in plans, belief theory, and semantics make AgentSpeak(I) programs considerably more capable than AgentSpeak(L) programs, especially when acting with incomplete knowledge about the environment.
3APL: 3APL is a rule-based language which is similar to AgentSpeak(I). A configuration of a 3APL agent consists of a belief base, a goal base, and a set of practical reasoning (PR) rules, which roughly correspond to the belief theory, the sets of events and intentions, and the plan library in AgentSpeak(I) respectively. The advantage that 3APL has over AgentSpeak(I) is its support for compound goals. The classification of PR rules in 3APL corresponds to a particular way of partitioning the AgentSpeak(I) plan library in which all plans are considered to have equal priority. Nevertheless, AgentSpeak(I) provides stronger support for an agent's ability to perform in a highly dynamic environment.
ConGolog: ConGolog is an extension of the situation calculus; it is a concurrent language for high-level agent programming. ConGolog has a formal semantics defined within the situation calculus and thus provides a logical perspective on agent programming, whereas AgentSpeak(I) provides an operational semantics that shows how an agent program propagates its internal states of beliefs, intentions, and events. Agent programs in ConGolog plan their actions from the initial point of execution to achieve a goal. An assumption in ConGolog is that the environment changes only if an agent performs an action. This strong assumption is not required in AgentSpeak(I). Hence, ConGolog provides weaker support for agent performance, in comparison with AgentSpeak(I), when dealing with incomplete knowledge about the environment.
6 CONCLUSION & FUTURE
RESEARCH
We have introduced a new agent programming language to deal with incomplete knowledge about the
environment. The syntax and operational semantics of the language have been presented, and AgentSpeak(I) has been compared with existing agent programming languages. In short, agent programs in AgentSpeak(I) can perform effectively in a highly dynamic environment by making assumptions at two levels (when computing the belief set and when planning or re-planning); by detecting planning problems raised by changes of the environment and re-planning when necessary at execution time; and finally by propagating internal beliefs during execution time.
There are several directions in which this work could be extended. First, the background belief theory could be extended with belief change operators and temporal reasoning about beliefs and actions. Second, the framework could be extended to multi-agent environments, where an agent's intentions are influenced by its beliefs about others' beliefs and intentions. Third, declarative goals could be investigated as an extension of AgentSpeak(I). Finally, an extension could use an action theory to update agent beliefs during execution time.
REFERENCES
Alchourrón, C. E., Gärdenfors, P., and Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510–530.
Alferes, J. J., Pereira, L. M., and Przymusinski, T. C.
(1996). Belief revision in non-monotonic reasoning
and logic programming. Fundamenta Informaticae,
28(1-2):1–22.
Bratman, M. E. (1987). Intentions, Plans, and Practical
Reason. Harvard University Press, Cambridge, MA.
Brewka, G. and Eiter, T. (2000). Prioritizing default
logic. Intellectics and Computational Logic, Papers
in Honor of Wolfgang Bibel, Kluwer Academic Pub-
lishers, Applied Logic Series, 19:27–45.
Darwiche, A. and Pearl, J. (1997). On the logic of iterated
belief revision. Artificial Intelligence, 97(1-2):45–82.
Dastani, M., Boer, F., Dignum, F., and Meyer, J. (2003).
Programming agent deliberation. In Proceedings of
the Autonomous Agents and Multi Agent Systems Con-
ference 2003, pages 97–104.
de Giacomo, G., Lespérance, Y., and Levesque, H. (2000). ConGolog, a concurrent programming language based on the situation calculus. Artificial Intelligence, 121:109–169.
Delgrande, J., Schaub, T., and Jackson, W. (1994). Alterna-
tive approaches to default logic. Artificial Intelligence,
70:167–237.
D’Inverno, M. and Luck, M. (1998). Engineering agents-
peak(l): A formal computational model. Journal of
Logic and Computation, 8(3):233–260.
Ghose, A. K. and Goebel, R. G. (1998). Belief states
as default theories: Studies in non-prioritized belief
change. In proceedings of the 13th European Con-
ference on Artificial Intelligence (ECAI98), Brighton,
UK.
Ghose, A. K., Hadjinian, P. O., Sattar, A., You, J., and
Goebel, R. G. (1998). Iterated belief change. Compu-
tational Intelligence. Conditionally accepted for pub-
lication.
Giordano, L. and Martelli, A. (1994). On cumulative default
reasoning. Artificial Intelligence Journal, 66:161–
180.
Hindriks, K., de Boer, F., van der Hoek, W., and Meyer, J.-J.
(1999). Agent programming in 3apl. In Proceedings
of the Autonomous Agents and Multi-Agent Systems
Conference 1999, pages 357–401.
Levesque, H., Reiter, R., Lespérance, Y., Lin, F., and Scherl, R. (1997). GOLOG: A logic programming language for dynamic domains. Journal of Logic Programming, 31:59–84.
MaynardReidII, P. and Shoham, Y. (1998). From belief re-
vision to belief fusion. In Proceedings of the Third
Conference on Logic and the Foundations of Game
and Decision Theory (LOFT3).
Meyer, T., Ghose, A., and Chopra, S. (2001). Non-
prioritized ranked belief change. In Proceedings of
the Eighth Conference on Theoretical Aspects of Ra-
tionality and Knowledge (TARK2001), Italy.
Poole, D. (1988). A logical framework for default reason-
ing. Artificial Intelligence, 36:27–47.
Rao, A. S. (1996). Agentspeak(l): Bdi agents speak out in a
logical computable language. Agents Breaking Away,
Lecture Notes in Artificial Intelligence.
Rao, A. S. and Georgeff, M. P. (1991). Modeling rational
agents within a bdi-architecture. In Proceedings of
the Second International Conference on Principles of
Knowledge Representation and Reasoning (KR'91),
pages 473–484.
Rao, A. S. and Georgeff, M. P. (1995). Bdi agents: From
theory to practice. In Proceedings of the First Inter-
national Conference on Multi-Agent Systems (ICMAS-
95), San Francisco, USA.
Reiter, R. (1980). A logic for default reasoning. Artificial
Intelligence, 13(1-2):81–132.
Riemsdijk, B., Hoek, W., and Meyer, J. (2003). Agent
programming in dribble: from beliefs to goals using
plans. In Proceedings of the Autonomous Agents and
Multi Agent Systems Conference 2003, pages 393–
400.
Shoham, Y. (1993). Agent-oriented programming. Artificial
Intelligence, 60:51–93.
Wobcke, W. (2002). Intention and rationality for prs-like
agents. In Proceedings of the 15 Australian Joint Con-
ference on Artificial Intelligence (AJAI02).
Wooldridge, M. (2000). Reasoning about Rational Agents. The MIT Press, London, England.