
 
proposed to resolve specific situations in online 
commerce, the ratings are stored centrally and the 
reputation value is computed as the sum of those 
ratings over six months. Thus, reputation in these models is a single global value. However, these models are too simple (in terms of their trust values and the way they are aggregated) to be applied in
open multi-agent systems. For instance, in (Zacharia, 
1999) the authors present the Sporas model, a 
reputation mechanism for loosely connected online 
communities where, among other features, new 
users start with a minimum reputation value, the reputation value of a user never falls below that of a new user, and users with very high reputation values experience much smaller rating changes after each update. The problem with this approach is that once somebody has a high reputation value it is difficult to change that reputation, or the system requires a large number of interactions to do so. A further approach by the Sporas authors is Histos, which is a more personalized system than Sporas and is orientated towards highly connected online communities. In (Sabater, 2002)
the authors present another reputation model called 
REGRET in which the reputation values depend on 
time: the most recent ratings are more important than older ones. (Carbó, 2003) presents the AFRAS model, which is based on Sporas but uses fuzzy logic. The authors present a complex reputation-computing mechanism that handles reputation as a fuzzy set, while decision making is inspired by a cognitive human-like approach. In (Abdul-Rahman, 2000) the authors propose a model which allows agents to decide which other agents' opinions they trust more, together with a protocol based on
recommendations. This model is based on a 
reputation or word-of-mouth mechanism. The main 
problem with this approach is that every agent must 
keep rather complex data structures which represent 
a kind of global knowledge about the whole 
network.  
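For illustration only, the following Python sketch shows the two aggregation styles mentioned above: a Sporas-like damped update, in which highly reputed users see smaller changes and reputation never drops below the newcomer value, and a REGRET-like time-weighted average, in which recent ratings count more than older ones. The constants and function names are placeholders and do not reproduce the exact formulas of the original models.

import math

MIN_REP, MAX_REP = 0.0, 3000.0   # placeholder scale, not the original Sporas constants

def sporas_like_update(rep, rating, theta=10.0, sigma=300.0):
    # Damped update: the higher the current reputation, the smaller the change.
    # Reputation never falls below the newcomer value MIN_REP.
    damping = 1.0 - 1.0 / (1.0 + math.exp(-(rep - MAX_REP) / sigma))
    new_rep = rep + (1.0 / theta) * damping * (MAX_REP * rating - rep)
    return max(MIN_REP, new_rep)

def regret_like_reputation(timed_ratings, now, decay=0.1):
    # Time-weighted average: recent ratings weigh more than older ones.
    weights = [math.exp(-decay * (now - t)) for t, _ in timed_ratings]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * r for w, (_, r) in zip(weights, timed_ratings)) / total

# A newcomer rated very well (1.0) and then poorly (0.2): the damping and the
# floor keep the reputation value inside the allowed range.
rep = MIN_REP
for rating in (1.0, 0.2):
    rep = sporas_like_update(rep, rating)
print(round(rep, 1))

# An old poor rating (t=0) counts less than a recent good one (t=10).
print(round(regret_like_reputation([(0, 0.2), (10, 0.9)], now=10), 2))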
Barber and Kim present a multi-agent belief  
revision algorithm based on belief networks (Barber, 
2004). In their model the agent is able to evaluate 
incoming information, to generate a consistent 
knowledge base, and to avoid fraudulent information 
from unreliable or deceptive information sources or 
agents. This work has a similar goal to ours. 
However, the means of attaining it are different. Barber and Kim define reputation as a probability measure: the information source is assigned a reputation value between 0 and 1. Moreover, every time a source sends knowledge it must indicate the certainty factor it has in that knowledge. In our case the focus is very different, since it is the receiver who
evaluates the relevance of a piece of knowledge 
rather than the provider as in Barber and Kim’s 
proposal.  
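The distinction can be made concrete with a small Python sketch using hypothetical names: in a Barber-and-Kim style scheme the provider attaches a certainty factor and the receiver discounts it by the source's reputation in [0, 1], whereas in our approach the receiver itself scores the relevance of the incoming piece of knowledge.

from dataclasses import dataclass

@dataclass
class Message:
    content: str
    certainty: float   # stated by the provider, as in Barber and Kim's proposal

def provider_side_weight(msg, source_reputation):
    # Barber-and-Kim style: the source's reputation (a probability in [0, 1])
    # discounts the certainty factor stated by the provider.
    return source_reputation * msg.certainty

def receiver_side_weight(msg, relevance_for_me):
    # Our style: the receiver evaluates how relevant the knowledge is to it,
    # independently of any certainty stated by the provider.
    return relevance_for_me(msg.content)

msg = Message("design pattern for caching", certainty=0.9)
print(round(provider_side_weight(msg, source_reputation=0.6), 2))            # 0.54
print(receiver_side_weight(msg, lambda c: 0.8 if "caching" in c else 0.2))   # 0.8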
 In (Huynh, 2004) the authors present a trust and 
reputation model which integrates a number of 
information sources in order to produce a 
comprehensive assessment of an agent’s likely 
performance. In this case the model uses four 
parameters to calculate trust values: interaction trust, 
role-based trust, witness reputation and certified 
reputation. We also use a certified reputation when an agent wants to join a new community and brings a trust value obtained in other communities, but in our case this certified reputation is composed of the four previously explained factors rather than being a single factor.
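As an illustration of how such a composite value might be formed, the sketch below combines the four components mentioned above by a weighted mean. The weights and the [-1, 1] range are illustrative assumptions; the actual model in (Huynh, 2004) also weighs each component by its reliability, which is omitted here.

def composite_trust(interaction, role_based, witness, certified, weights=None):
    # Weighted mean of the four trust components; None means "no information".
    # The weights are illustrative placeholders, not those used in the original model.
    components = [interaction, role_based, witness, certified]
    weights = weights or [2.0, 2.0, 1.0, 0.5]
    available = [(w, c) for w, c in zip(weights, components) if c is not None]
    if not available:
        return 0.0   # nothing is known about the agent yet
    return sum(w * c for w, c in available) / sum(w for w, _ in available)

# A newcomer that only brings a certified reputation from another community.
print(round(composite_trust(None, None, None, certified=0.7), 2))   # 0.7
# An agent with a full interaction history in the current community.
print(round(composite_trust(0.9, 0.4, 0.6, 0.7), 2))                # 0.65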
The main differences between these reputation models and our approach are that these models need an initial number of interactions to obtain a good reputation value, and that they cannot be used to discover whether or not a new user can be trusted. A further difference is that our approach is orientated towards collaboration between users in CoPs, whereas other approaches are more orientated towards competition, and most of them are tested in auctions.
6 CONCLUSIONS AND FUTURE 
WORK 
This paper describes a trust model which can be 
used in CoPs. The goal of this model is to help 
members to estimate how trustworthy a person or a knowledge source is, since when a community is geographically dispersed the advantages of face-to-face communication often disappear, and other techniques, such as our trust model, must therefore be used to obtain information about other members.
One contribution of our model is that it takes 
into account objective and subjective parameters, since the degree of trust that one person has in another is frequently influenced by both types of
parameters. We therefore try to emulate social 
behaviour in CoPs.  
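A minimal sketch of the kind of combination we have in mind is given below; the parameter names and weights are hypothetical placeholders rather than the exact ones used in our model, and every value is assumed to be normalised to [0, 1].

def trust_in_member(objective, subjective, w_obj=0.5, w_sub=0.5):
    # Combine objective parameters (e.g. a member's level of expertise) with
    # subjective ones (e.g. the rater's previous experience with that member).
    obj = sum(objective.values()) / len(objective) if objective else 0.0
    sub = sum(subjective.values()) / len(subjective) if subjective else 0.0
    return w_obj * obj + w_sub * sub

print(round(trust_in_member(
    objective={"expertise": 0.8, "position": 0.6},
    subjective={"previous_experience": 0.9, "intuition": 0.5},
), 2))   # 0.7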
We are testing our model in a prototype into which CoP members can introduce documents; software agents then decide how trustworthy these documents are for the users they represent.
    As future work, we are planning to add new functions to the prototype such as, for instance, expert detection and recognition of fraudulent