
 
architecture and evaluation phase. A first version of an electronic informer has already been studied and developed (Trabelsi, 2006).
However, it is not generic: it is a specific tool built to evaluate one particular agent-oriented application, a system intended to supervise passenger information on a public transport network. It cannot be used to evaluate other agent-oriented systems because it is tied to the number of agents and to the structure and contents of that particular system.
Furthermore, it shows several shortcomings. We address these problems with a generic and configurable model of an “electronic informer”, made up of seven main modules (Fig. 2).
Module 1 (M1): collecting user interface and service-level events from all agents and users of the concerned interactive system.
M2: associating intermediate-level events (user interface and service events) with each application task. Several intermediate-level events may be needed to accomplish a single application task. For example, three user interface events (TabDriver_click, TextBoxMessage_OnChange and buttonOK_Click) and two services, both named “Send a message to the driver” (one belonging to the interface agent Vehicule and one to the application agent Vehicule), are associated with the application task “Send a message to the driver” in a system intended to supervise passenger information on a public transport network.
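A sketch of how such an association might be stored, directly encoding the example above; the table structure and lookup function are assumptions.

```python
# Mapping from one application task to the intermediate-level events
# that realize it; entries taken from the example in the text.
TASK_EVENTS = {
    "Send a message to the driver": {
        "ui": ["TabDriver_click", "TextBoxMessage_OnChange", "buttonOK_Click"],
        "service": [
            "Send a message to the driver",  # service of interface agent Vehicule
            "Send a message to the driver",  # service of application agent Vehicule
        ],
    },
}

def events_for_task(task: str) -> list[str]:
    """Return every intermediate-level event associated with a task."""
    entry = TASK_EVENTS[task]
    return entry["ui"] + entry["service"]
```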
M3: processing the data collected for a chosen agent over a given period of time and presenting the results in comprehensible forms. Examples of calculations and statistics include: the response time of interactions between services; the time taken by a given user interface event (loading an interface agent, typing in a text box, etc.); the time needed to complete a service and, beyond that, an application task; the time spent consulting help, or more generally the unproductive time (help time + snag time + search time (Bevan and Macleod, 1994; Chang and Dillon, 2006)) that the user takes to complete a given application task; the percentage of services and, beyond that, of application tasks accomplished; the error percentage; the frequency of help use; the percentage of services and of application tasks achieved per unit of time; the ratio of failure or success for each interaction between services; the ratio of appearance of each user interface event of a given interface agent; the percentage of use of each service of a given agent; the average number of user interface events per unit of time; and so on.
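As a sketch, two of the computations listed above: the unproductive time, whose decomposition follows Bevan and Macleod (1994), and the percentage of application tasks accomplished. The function names and inputs are assumptions.

```python
def unproductive_time(help_time: float, snag_time: float, search_time: float) -> float:
    """Unproductive time = help time + snag time + search time
    (Bevan and Macleod, 1994)."""
    return help_time + snag_time + search_time

def task_completion_rate(tasks_attempted: int, tasks_accomplished: int) -> float:
    """Percentage of application tasks accomplished."""
    if tasks_attempted == 0:
        return 0.0
    return 100.0 * tasks_accomplished / tasks_attempted

# Example: 2.5 s of help, 1.0 s of snags, 0.5 s of search -> 4.0 s unproductive.
assert unproductive_time(2.5, 1.0, 0.5) == 4.0
```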
M4: generating Petri Nets (PN) that describe the activity of agents and users in the system, from the collected data and the BSA (Specification Base of Agents). These nets describe both the interactions between services of different agents and the activity of the user while completing application tasks. We call them “observed” PN. Generating these PN helps the evaluators because it provides them with visual views of all the activities of the user and of the concerned system.
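One simple way an “observed” PN could be derived from the event log is sketched below: each observed event becomes a transition and consecutive events are linked by places. This linear construction is an assumption for illustration; the actual generation also exploits the BSA.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: set = field(default_factory=set)  # (source, target) pairs

def observed_pn(event_names: list[str]) -> PetriNet:
    """Build a linear 'observed' PN from a chronological list of event names."""
    pn = PetriNet()
    prev_place = "p0"
    pn.places.add(prev_place)
    for i, name in enumerate(event_names):
        t = f"t{i}:{name}"            # unique transition per observed event
        next_place = f"p{i + 1}"
        pn.transitions.add(t)
        pn.places.add(next_place)
        pn.arcs.add((prev_place, t))
        pn.arcs.add((t, next_place))
        prev_place = next_place
    return pn
```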
M5: comparing the observed PN created above with the PN that the system designer intended for completing the application tasks. This comparison assists the evaluators in detecting use errors; for example, the evaluator can perceive that the user has passed through redundant states, has performed useless manipulations, or has taken more time than the designer predicted to complete an application task. M5 can also be used to assist the evaluator in comparing the ability of different users to use the system.
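The comparison M5 performs could, at its simplest, report the transitions the user fired that the intended PN does not contain, and vice versa. The sketch below (reusing PetriNet from the previous sketch, and comparing transitions by event name) is an illustrative assumption; detecting redundant states or timing deviations would need a richer comparison.

```python
def compare_pn(observed: PetriNet, intended: PetriNet) -> dict:
    """Differences between the observed PN and the designer's intended PN,
    compared on the event names of their transitions."""
    def names(pn: PetriNet) -> set:
        # Strip the "t<i>:" prefix added by observed_pn, if present.
        return {t.split(":", 1)[-1] for t in pn.transitions}
    return {
        # manipulations the user performed that the designer did not foresee
        "unexpected": names(observed) - names(intended),
        # foreseen steps that the user never reached
        "missing": names(intended) - names(observed),
    }
```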
M6: using as a basis the results of the processing from M3, the PN generated by M4, the comparison of the two PN from M5, and usability characteristics as well as ergonomic criteria, M6 is responsible for assisting the evaluator in criticising the concerned system and advising the designer on how to improve it. Although the term “usability” has not been defined homogeneously and several definitions exist (Dix et al., 1993; Nielsen, 1993; ISO/IEC 9126-1), it generally refers to a set of concepts taken together, such as execution time, performance, user satisfaction, ease of learning (“learnability”), effectiveness and efficiency (Abran et al., 2003). There are also several sources of criteria from different authors and organisations. M6 assists the evaluator in evaluating the concerned system on the basis of criteria from several such sources, for example the ergonomic criteria of Bastien and Scapin (1995), the quality attributes of Lee and Hwang (2004) and the characteristics of the consolidated usability model of Abran et al. (2003). The results of the processing (calculations and statistics) from M3 provide the measures necessary for the evaluation of these ergonomic criteria, quality attributes and characteristics. M6 has not yet been implemented.
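One plausible way for M6 to relate M3's measures to such criteria is a table of checks, as sketched below. The criterion names and numeric thresholds are purely illustrative assumptions, not the values of Bastien and Scapin, Lee and Hwang, or Abran et al.

```python
# Illustrative thresholds linking M3 measures to evaluation criteria.
CRITERIA = {
    "effectiveness": lambda m: m["task_completion_rate"] >= 90.0,  # in %
    "efficiency":    lambda m: m["tasks_per_minute"] >= 1.0,
    "learnability":  lambda m: m["help_use_frequency"] <= 0.2,
}

def critique(measures: dict) -> list[str]:
    """Return the criteria that the evaluated system fails to satisfy."""
    return [name for name, check in CRITERIA.items() if not check(measures)]
```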
M7: configuring the electronic informer to evaluate different agent-oriented systems. It allows entering the BSA (Specification Base of Agents) that describes the evaluated system, the PN intended by the system designer, and some configuration parameters of the evaluated system.
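A configuration for M7 might simply bundle the three inputs named above; the file names, agent names and parameter keys below are hypothetical.

```python
# Illustrative configuration handed to M7 for one evaluated system.
informer_config = {
    "bsa_file": "supervision_system.bsa",       # Specification Base of Agents
    "intended_pn_file": "designer_tasks.pnml",  # PN intended by the designer
    "parameters": {
        "agents": ["Vehicule", "Supervisor"],   # hypothetical agent names
        "sampling_period_s": 1.0,
    },
}
```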
5  CONCLUSION - PERSPECTIVE 
We have presented a brief state of the art concerning interactive system architectures, and proposed a mixed architecture as well as a generic and configurable model for assisting the evaluation of agent-based interactive systems.