INTERPRETATION OF COLLABORATIVE DECISIONS
BY META-METRICS
Norbert Gronau, Edzard Weber and Priscilla Heinze
Potsdam University, August-Bebel-Straße 89, 14482, Potsdam, Germany
Keywords: Evaluation, Metrics, Meta-metrics, Analytic hierarchy process, Group decision.
Abstract: Knowledge is bound to person. It originates in persons and is used by persons. Knowledge can be based on
data and information. It also represents a combination of classified experiences, values, context and
expertise, which provides a framework for the evaluation of these experiences and information.
Consolidated knowledge from multiple persons can, however, result in false outcomes, especially when
values are transformed into metrics. Due to the occurring aggregation, particular information about person-
specific differences in determining the overall assessment of a community is lost. Two similar assessments
can be based on entirely different single evaluations, expertises or totalities. Hence, the assessment
regarding their quality, balance and stability should be performed differently. Metrics about the initial data
basis are necessary in order to provide interpretation aid. This paper introduces the meta-metrics for the
interpretation of collaborative decision makings in communities of practice.
1 INTRODUCTION
Nowadays, collaborative decision making has
become a common practice. Organizations are
incorporating opinions from employees, customers,
partners and other external actors in order to make
the best possible decision. The concept of open
innovation (Chesbrough, 2003) changes
organizations’ mindset by integrating impulses
coming from the environment external to the
organizations. Solutions to specific problems can
now be found by means of the “wisdom of the
crowd” (Surowiecki, 2004), which is mainly pushed
by Web 2.0 sites such as wikipedia.org, ebay or
amazon.com. At the stage of team building and
development within an organization, new collective
knowledge emerges from team discussions. This
knowledge is what Konda et al. (1992) call "shared memory".
In making individual decisions, a person connects the decision situation with his personal experiences, values and abilities (what he has learned), forms a momentary assumption and constructs decision premises under the influence of his personality (Kirsch, 1997). The resulting decision is therefore subjective, shaped by the knowledge the individual already owns.
In collaborative decision making many other aspects have to be considered. The difficulty lies, on the one hand, in the "stickiness" of the exchanged knowledge to its bearers (Von Hippel, 1994). The knowledge exchange process is often impeded by the fact that the participants have varying knowledge constructs due to their fields of expertise, working and private experiences, personal perceptions, cultural backgrounds, and many other factors. This disparity influences the way the exchanged knowledge is interpreted, how problems are perceived, how motivations and interests are formed and also how decisions are made. In other words, it affects the process of problem evaluation as well as solution assessment in collaborative decision making.
On the other hand, the challenge lies in the allocation of priorities, i.e. the weighting of single sub-aspects. How do changes of this prioritisation impact the entire evaluation? For example, individuals evaluate processes differently. Some focus on rapid implementation, while others prefer proper after-sale care.
Based on this phenomenon, it is also of interest to examine the compensation between single assessment aspects. What happens when one tries to compare two different evaluation aspects? How would other aspects and prioritisations change in response to this?
This paper introduces the meta-evaluation approach. We begin by describing the environment where this approach can be applied, namely communities of practice (CoP). The paper then gives an overview of the approach's conceptual fundamentals and CoP-based requirements, followed by a brief description of a meta-model for evaluation and the evolvable meta-metrics. We conclude by introducing a tool-based meta-evaluation and recommendations for its application in CoPs.
2 COMMUNITIES OF PRACTICE
This section describes the characteristics of
communities of practice (CoP). After a general
understanding of the importance and relevance of
CoP is obtained, we discuss the knowledge
exchange and decision making process within the
CoP.
2.1 Characteristics
We can find communities of practice (CoP)
everywhere. Wenger et al. (2002, p. 4) define them
as “groups of people who share a concern, a set of
problems, or a passion about a topic, and who
deepen their knowledge and expertise in this area by
interacting on an ongoing basis”. They can be
students forming a rock band, members of a cultural
society or online game players.
Members of CoPs share and accumulate
knowledge. They seek for and provide solutions. In
organizations, CoPs can be triggered (and even
institutionalized) by the management or they can be
independent. There is no boundary that describes the affiliation of a CoP; it can exist within or across business units as well as organizations. What it does have is a structural model (Wenger et al., 2002), divided into domain, community and practice.
A domain defines the shared understanding of
the community’s goal. It sets the foundation of all
the activities performed within the community. A
domain is the reason why the community is built in the first place.
The exchange of knowledge is the core element in a CoP. It is realized through regular interactions between its members, also called the community. The community needs a common repertoire of terms and objects, which can include cases, theories, frameworks, principles, lessons learned, etc. A practice can thus be defined as the guideline for doing specific things in a specific domain. It includes not only tacit knowledge but also shareable explicit knowledge.
One can say that a CoP's virtue and flaw at the
same time is its voluntary nature. On one hand, the
strongest and most robust motivation that a person
could have is his own interest. On the other hand, it
is uncontrollable and can fade with time. A self-
functioning CoP should therefore be supported and
nourished to sustain its lifetime.
2.2 Knowledge Exchange and Decision
Making
Hara (2009) categorizes three types of knowledge
being shared in the CoP. Cultural knowledge is the
kind of tacit knowledge being adopted from the
community environment. The other two types, practical and book knowledge, are grouped together as subject-matter knowledge.
Book knowledge refers to explicit knowledge
provided by written artefacts, while practical
knowledge refers to “real-world application of book
knowledge” (Hara 2009, p. 114). In this case,
aspects of the practical knowledge are also tacit,
since “the best way to learn […] practical
knowledge is to observe others” (p. 116). In other
words, practical knowledge is best transferred using
socialization, which is the transfer of tacit knowledge from one person to another (Nonaka
and Takeuchi, 1995).
To exchange also means to share, and it always
takes at least two to share. Lesser and Fontaine
(2004) differentiate the actors as knowledge seekers,
which are people who are looking for knowledge,
and knowledge sources, which are people who
provide either the sought knowledge or the direction
to another knowledge source.
An aspect that has been ignored by research in
this field is the fact that knowledge seekers and
knowledge sources do not only exchange new
knowledge. Knowledge seekers can only come up
with a subject-related question when they already
possess the ground knowledge needed to construct
the question. Knowledge sources have to relate the
question to their own tacit knowledge in order to
understand and provide its solution. Although these
individuals share a common interest, this does not
guarantee that their perception and interpretation of
all matters is also shared.
3 MEANING OF EVALUATION
This section defines the terms of evaluation and
explains the meaning of evaluation and evaluation
system in regard to knowledge management.
Subsequently, it introduces the requirements and
challenges of collective evaluation systems.
3.1 Terms and Definition
An evaluation procedure is a systematic process of
classifying the value judgement of an evaluating
system and a system to be evaluated (Bechmann,
1991, Bechmann, 1998). The evaluated system can
be represented as a model or it can arise as a value
system of the evaluating system (the evaluating
subject). In this case there is no restriction on whether only experts or also ordinary persons are allowed to participate in the evaluation process.
The terms evaluation and assessment of objects should be distinguished from related terms. The description of objects is always based on informative, factual, cognitive or indicative statements. These are objectively comprehensible and claim to describe reality (Iwin, 1975).
Evaluations indicate what a particular person counts as valuable, bad or indifferent. They express convictions. Thus, every evaluation should be put into perspective by indicating the evaluating person (Iwin, 1975).
Assessment or calculation denotes the description of an item that captures a purely quantitative relationship of a measurement entity. There exists a clearly defined, mono-causal relation between the objective of the description and the recognition of the concrete characteristics for the actors. In this case no value is determined but a magnitude. An evaluation that aims to approximate the result of an assessment or a calculation is an estimation (Keilhau, 1923).
Already the selection of the comparison or
evaluation criteria represents a subjective activity
and expresses the value judgment of the particular
evaluating person. Thus, evaluation and the evaluation system are a focus of operational knowledge management.
3.2 Knowledge Management
and Evaluations
Operative knowledge management can be characterized by the motivation to perform a sustainable and efficient transformation of knowledge, by a focus on the company's and process objectives, and by the dissemination of information through access to knowledge (Gronau, 2009; Gronau, 2010).
Knowledge is bound to persons. It originates in
persons and is used by persons. Knowledge can be
based on data and information. It also represents a
combination of classified experiences, values,
context and expertise, which provides a framework
for the evaluation of these experiences and
information (Davenport and Prusak, 1998, p. 5).
Knowledge management should not be limited to the content of knowledge. Every single actor or group of knowledge workers considers, more or less consciously, the contribution of knowledge to the completion of their tasks. Due to the high complexity and dynamics of the application context, this consideration does not emerge as an objective assessment, calculation or estimation. It is always an evaluation based on individual experiences, insights and value judgements. The same applies to performing the tasks of knowledge management.
Knowledge management tasks include:
knowledge acquisition, knowledge preservation,
knowledge transfer, knowledge processing,
knowledge identification, knowledge evaluation,
knowledge sorting, making knowledge transparent
for others, supporting knowledge application,
determining knowledge needs and assignment of
knowledge strategies (Gronau, 2009).
Although the assessment of the result may not always be a subjectively performed evaluation, it is one for the underlying evaluation system and for the implementation decision.
Objective assessments and calculations can be
helpful for certain fields of application in order to
preserve or reach competitive advantages. More
important are the many minor knowledge-based
value judgements in the day-to-day work and the
knowledge-based evaluation of complex situations.
In order to understand the decisions, we need not
only to process the knowledge content and
infrastructure but also the performed evaluation and
the applied evaluation systems. Subjective
evaluations are crucial, especially in decision
situations without a sufficient knowledge basis. They are also available for documentation.
Collective evaluations are a similar case. The variety of information that results from a group evaluation is even greater, as shown in the next sections of this paper. This information is available for controlling the evaluation process as well as for describing a collective knowledge basis.
3.3 Common Challenges
of Evaluation Systems
The evaluation describes a system and its characteristics
according to certain criteria. A meta-evaluation, which is the subject of this paper, describes the results of a system assessment and the characteristics of the data basis according to certain criteria and metrics.
The evaluation systems used in practice are
usually accepted as a given. Concerns of stability
and interpretability of the results are rarely
expressed. This is also due to the lack of a
systematic approach.
A single value for the specification of a system
characteristic or decision making in a community is
not enough. The characteristics of both the
evaluating and the evaluated system are too strongly
aggregated. For example, two students of different courses have both received a grade of C in mathematics. While one grade could have emerged from an A-grade essay and a D-grade oral test, the other grade could have been composed of two C-grade achievements (an essay and an oral test). Whether both students had to take the same number of tests, how competent the lecturers are in pedagogical and technical respects, whether both courses have the same size, how each student performed in comparison to his own course, or whether both assessments were done in the same year, etc., remains concealed.
All this extra information puts the aggregated final grade in a different light. It can explain why the students, despite their identical mathematics grades, differ strongly from each other. For pragmatic reasons, strongly aggregated assessment systems, e.g. university grades, are often used. For far-reaching decisions based on an evaluation, however, the structure of the evaluation system itself can be incorporated into the evaluation.
Evaluation standards can be very subjective, indirect and comprehensive. The informative value of the obtained results is influenced by the prioritized evaluation aspects. However, these standards should remain flexible for many different conditional frameworks, since evaluation points of view can also vary.
The subjective differences between single evaluating individuals also affect the quality of the overall evaluation. Varying levels of knowledge about the evaluated object impede a qualitatively fair allocation of the gathered data. Some evaluation aspects are more crucial for some individuals than for others, who focus on other, minor aspects. However, the evaluation itself is not quantitative. Rather, all evaluations are to be considered and dealt with qualitatively. It is not our purpose to create an application that deals with simple questionnaires. It is rather an attempt to create a tool to address research questions.
It is therefore important to weigh evaluations and
prioritize the importance of certain indicators.
Furthermore, indicators that generally make an
evaluation possible should be developed. Various
metrics can be used to evaluate quality
characteristics. Metrics are functions that assign
numerical values to the particular characteristics of
the assessed object (Globke, 2005). Meta-metrics
describe the metrics characteristics. They do not
directly describe the real evaluated system since they
only serve to interpret the values delivered by
metrics.
These requirements are set for the following
types of system assessments:
The evaluation has to be realizable by a certain number of actors without losing the user-specific details. These details have to be kept in single, cumulative and aggregated forms.
The evaluation system has to be dynamically
extensible. Actors should be able to add more
evaluation criteria and classify system elements
into sub-elements.
Evolvable metrics have to be generated from the database and assigned (visually) to the affected areas of the system in a comprehensible way.
An important non-functional requirement is that the
interface has to be web-based and user-friendly.
These requirements form the basis of the conceptual assessment model and its technical realization as a metric cockpit.
4 METRICS
The following section gives a short introduction to
the conceptual meta-model of evaluation. It clarifies
where the data used for deriving metrics is located. It also defines which data can be used to obtain metrics.
4.1 Evaluation Tree
The evaluation is based on a tree structure. Every branching of the evaluation tree corresponds to the logical anatomy of the evaluated object and the evaluation criteria. Sub-aspects can then be extracted rapidly and examined individually. Comparing single branches is made possible by a dynamic structure.
$$tree = \{t \ldots t\}, \quad \text{with } t = \{t \ldots t\} \;|\; \{\}$$
Regarding depth and width, only pragmatic and efficiency considerations apply. An evaluation tree T can be represented as a nested term with braces or as a branched term with braces, for example
$$T = \{T_1\}, \quad T_1 = \{T_{11}\, T_{12}\, T_{13} \ldots\}, \quad T_{11} = \{T_{111}\, T_{112} \ldots\}, \; T_{12} = \{T_{121}\, T_{122} \ldots\}, \; T_{13} = \{T_{131}\, T_{132} \ldots\}$$
Every evaluating person creates an evaluation
q and a weighting p for the given tree. Only leaves
of the tree get an explicit evaluation. The values of
branchings are calculated from those leaves.
$$q^{T}_{x_1 \ldots x_n} = \begin{cases} a, \ \text{with } a \in Z & \text{if } T_{x_1 \ldots x_n} = \{\} \\ \text{null} & \text{else} \end{cases}$$
The weighting is optionally determined analogously to the AHP (Analytic Hierarchy Process) (Saaty, 2005; Meixner and Haas, 2008). There is a pairwise comparison of all adjacent branches. By calculating the eigenvector, the sum of all weighting values becomes 1. Furthermore, it is possible to integrate recommendations for additional branches. These are initially not known by other users and automatically get a minimal weight.
$$p^{T}_{x_1 \ldots x_n} = \begin{cases} a, \ \text{with } a \in [0 \ldots 1] \text{ and } \sum_{i=1}^{\#T_{x_1 \ldots x_{n-1}}} p^{T}_{x_1 \ldots x_{n-1}\,i} = 1 & \text{if } T_{x_1 \ldots x_n} \neq \text{null} \\ \text{null} & \text{else} \end{cases}$$
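To make the weighting step more concrete, the following Python fragment is a minimal sketch (our own illustration, not part of the paper's tooling; the function name and the example comparison matrix are hypothetical) that approximates the AHP weights of adjacent branches as the normalised principal eigenvector of a pairwise comparison matrix, obtained by power iteration.

    import numpy as np

    def ahp_weights(pairwise: np.ndarray, iterations: int = 100) -> np.ndarray:
        """Approximate the principal eigenvector of a pairwise comparison
        matrix by power iteration and normalise it so the weights sum to 1."""
        n = pairwise.shape[0]
        w = np.ones(n) / n                # start with uniform weights
        for _ in range(iterations):
            w = pairwise @ w              # one power-iteration step
            w = w / w.sum()               # renormalise to sum 1
        return w

    # Hypothetical comparison of three adjacent branches on Saaty's 1-9 scale:
    # branch 1 is judged more important than branch 2 (3) and branch 3 (5).
    M = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(M))                 # roughly [0.65, 0.23, 0.12]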
The total evaluation V of a user is defined by a recursive formula. The value V is calculated as the sum of the weighted values of the adjacent branches.
$$V^{T}(S) = \begin{cases} q^{T}_{x_1 \ldots x_n} & \text{if } T_{x_1 \ldots x_n} = \{\} \\ \sum_{i=1}^{\#T_{S}} p^{T}_{S \cup \{i\}} \cdot V^{T}(S \cup \{i\}) & \text{else} \end{cases}, \quad S = [x_1 \ldots x_n]$$
The variable S is a vector which stores the path to the particular branching. The request parameter is the path to the branching whose value is to be calculated, e.g. $V^{T}(\{1\})$ for the root node.
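As a minimal sketch of this recursive total evaluation, the following Python fragment assumes a simple nested node structure; the class, field names and example values are our own illustration, not prescribed by the paper.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        """One node of the evaluation tree: a leaf carries an explicit
        evaluation q, a branching carries weighted children (weights sum to 1)."""
        q: Optional[float] = None            # explicit evaluation, leaves only
        p: float = 1.0                       # weight relative to siblings
        children: List["Node"] = field(default_factory=list)

    def evaluate(node: Node) -> float:
        """Total evaluation V: leaves return q, branchings return the
        weighted sum of their children's values."""
        if not node.children:                # leaf: T_{x1...xn} = {}
            return node.q
        return sum(child.p * evaluate(child) for child in node.children)

    # Hypothetical tree: two weighted sub-aspects, one of them refined further.
    root = Node(children=[
        Node(p=0.7, q=4.0),                                  # leaf evaluation
        Node(p=0.3, children=[Node(p=0.5, q=2.0),
                              Node(p=0.5, q=3.0)]),
    ])
    print(evaluate(root))     # 0.7*4.0 + 0.3*(0.5*2.0 + 0.5*3.0) = 3.55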
4.2 Meta-metrics
The meta-metrics concept was introduced to investigate the variation of the (partial) tree under certain points of view. Since evaluations of an object are performed collectively, the particular weighting of the evaluation aspects also differs between participants. The data basis for a collective evaluation consists of the following elements:
A set of users A = [a1 . . . an] who can be
assigned to different teams of competence or
groups.
A decision tree T
The evaluation $Q^{T}_{a}$ by the user a concerning the decision tree T
The weighting $P^{T}_{a}$ by the user a concerning the decision tree T
The times $D^{TQ}_{a}$ and $D^{TP}_{a}$ at which the user a performed the particular evaluation or weighting
Although the variety of this data basis is low, a number of metrics emerge for interpreting the actual target value V and its reliability. Generally, for every aspect it has to be distinguished whether its calculation is based only on the data of one user or on the data of all participating users. A further option is the specification of costs.
On this basis, further data can be constituted: the tree structure, weighting, evaluation and creation date, both as the set of individual data (data compiled in bulk) and as aggregated individual data (data aggregated into one value).
These data can be expanded by indicating costs. The allocation of costs should correspond to the allocation of the weighting. Aspects with a higher weight get a higher cost value than others.
Metrics for Evaluation $Q^{T}$
Here we consider the relations between the particular evaluations of individual elements or fragment trees by the evaluating users. E.g. a total evaluation of grade C which consists of the values A and D features a higher variance than an evaluation composed of the fractional values C and C. The meta-variance, in turn, considers these variance values. An evaluation with a high variance of the particular evaluations can still result in a low meta-variance. Such considerations of variances are necessary for integrating the existence of superior or substandard evaluated fragment branches into the interpretation of the total evaluation, as the following sketch illustrates.
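A minimal numerical sketch of variance and meta-variance, assuming hypothetical leaf evaluations per user; it mirrors the grade example above (A and D versus C and C yield the same mean but very different variances).

    import numpy as np

    # Hypothetical leaf evaluations for one branching, per user (rows = users),
    # on a numeric scale where A = 1 ... D = 4.
    evaluations = np.array([
        [1.0, 4.0],   # user 1: grade A and grade D
        [3.0, 3.0],   # user 2: two C grades -> same mean, much lower variance
        [2.0, 4.0],   # user 3
    ])

    per_user_variance = evaluations.var(axis=1)   # variance of each user's partial values
    meta_variance = per_user_variance.var()       # variance of those variances across users

    print(per_user_variance)   # [2.25, 0.0, 1.0]
    print(meta_variance)       # roughly 0.85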
Variance on the level of a single user. As explained above, a collective evaluation can consist of different partial values. The variance shows the distribution of these differences for a single user.
Meta-variance for a single user. The meta-variance provides the variance of the variances of the single evaluations for a single user.
Variance on the level of user groups. Variances often occur on the user group level and can be shown through this metric.
Meta-variance for user groups. The distribution of the variance in a user group is shown through this metric.
Variance in branchings for user groups. Variances cannot be directly evaluated. The calculation occurs on the basis of the gathered data and the weighting. The variance of the calculation is based on the variance on the user level.
Meta-variance in branchings for user groups. A variance of the variances in the calculated evaluation for sub-systems can be displayed through this metric.
Homogeneity test
Quantile comparison
The homogeneity test, also called U-test or Mann-Whitney test (Mann and Whitney, 1947), detects significant differences in the evaluations of two groups of users. For example, do practitioners and scientists evaluate a process model or particular aspects of it similarly, or is one of the groups more optimistic? The quantile comparison, in turn, shows the position of particular users within a set of users. This provides individual user profiles with data about how the user typically evaluates, e.g. belonging to the 10%-quantile of positive or negative evaluators.
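A small sketch of the homogeneity test using the existing SciPy implementation of the Mann-Whitney U test; the group data are hypothetical.

    from scipy.stats import mannwhitneyu

    # Hypothetical evaluations of the same aspect by two groups of users.
    practitioners = [3.0, 4.0, 4.0, 5.0, 3.5]
    scientists    = [2.0, 2.5, 3.0, 2.0, 3.5]

    # Two-sided U test: is one group systematically more optimistic?
    statistic, p_value = mannwhitneyu(practitioners, scientists, alternative="two-sided")
    print(statistic, p_value)   # a small p-value would indicate a significant difference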
Metrics for Weighting $P^{T}$
The allocation of the weighting differs according to the examination aspect. Meta-metrics describe in this case the variance of the weightings. These are analogous to the meta-metrics of the evaluation $Q^{T}$.
Metrics for Weighted Evaluation $Q^{T} P^{T}$
The variance of the evaluations and weightings can also be assessed using meta-metrics. These are analogous to the meta-metrics of the evaluation $Q^{T}$.
Weighted Evaluation of Orthogonal Sets of Nodes
Meta-metrics do not have to align with the structure given by the evaluation tree. Particular elements and their evaluations can be arbitrarily recombined. Thus, other classification characteristics can be considered which cannot be represented in the given structure of the tree.
Variance in a group of elements for a single user
Variance in a group of elements for a user group
Homogeneity test
Quantile comparison
The non-weighted evaluations and weightings
can be considered separately as well.
Metrics for Sensitivity
With a sensitivity analysis it is possible to examine the stability of a total evaluation based on its weighted particular evaluations. How large may fluctuations of the values be before they affect the total evaluation? A minimal numerical sketch follows the list below.
Sensitivity of individual evaluations
Sensitivity of aggregated evaluations
Sensitivity of individual weightings
Sensitivity of aggregated weightings
Sensitivity of individual weighted evaluations
Sensitivity of aggregated weighted evaluations
Variance and meta-variance can also be
considered.
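The sketch below illustrates the sensitivity of an individual evaluation under the simplifying assumption that the total is a weighted sum: the total shifts linearly with the product of the weights along the path to the leaf, so the tolerated fluctuation is the remaining distance to a decision threshold divided by that path weight. All numbers and names are hypothetical.

    def allowed_fluctuation(total: float, threshold: float, path_weight: float) -> float:
        """How far a single leaf evaluation may move before the weighted total
        crosses a given threshold: the total shifts by path_weight * delta,
        so the tolerated delta is the remaining distance divided by that weight."""
        return (threshold - total) / path_weight

    # Hypothetical example: total evaluation 3.55, decision threshold 4.0,
    # leaf reached via weights 0.3 and 0.5 (combined path weight 0.15).
    print(allowed_fluctuation(3.55, 4.0, 0.3 * 0.5))   # 3.0 -> the leaf may rise by 3 units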
Metrics for Consistency
The pairwise comparisons of elements, from which the weighting values are calculated, are checked for consistency. Complete consistency exists when there are no conflicts between the individual comparisons. A conflict occurs, for example, when a > b and b > c, but also c > a is evaluated (see the sketch after the following list).
Consistency of individual weighting of
convergent branches
Consistency of aggregated weighting of
convergent branches
Variance of individual consistency values
Meta-variance of the individual consistency
values
Variance of aggregated consistency values
Meta-variance of aggregated consistency values
Variance in the set of individual consistency
values
Meta-variance in the set of individual consistency
values
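One common way to quantify such conflicts is Saaty's consistency ratio. The following sketch (our illustration, with a hypothetical comparison matrix containing exactly the cyclic conflict described above) computes CI = (lambda_max - n) / (n - 1) and divides it by the random index for the matrix size.

    import numpy as np

    def consistency_ratio(pairwise: np.ndarray) -> float:
        """Saaty's consistency ratio: CI = (lambda_max - n) / (n - 1),
        divided by the random index RI for matrices of the same size."""
        n = pairwise.shape[0]
        if n < 3:
            return 0.0   # 1x1 and 2x2 comparison matrices are always consistent
        random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
        lambda_max = max(np.linalg.eigvals(pairwise).real)
        ci = (lambda_max - n) / (n - 1)
        return ci / random_index

    # Conflicting judgements: a > b and b > c, but also c > a.
    M = np.array([[1.0, 3.0, 1/3],
                  [1/3, 1.0, 3.0],
                  [3.0, 1/3, 1.0]])
    print(consistency_ratio(M))   # well above the 0.1 rule of thumb -> inconsistent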
Metrics for Data Quality
The quality of data can be evaluated from different aspects.
Data has a maturity. With its help one can assume that newer data is more reliable than older data.
The level of detail in the depth of the evaluation tree is another aspect. It is interesting to know not only the total size but also its vertical balance.
Analogously, the level of detail in width shows the horizontal balance.
The number of participating users whose evaluations are aggregated can differ between particular branches of the tree. Absolute values and balance values (variances) can be calculated.
The competence of users can differ in certain evaluation branches. Decisions made by experts have a different meaning than those made by ordinary people.
All of these aspects can be described by
minimum, maximum and average values as well as
by variance and meta-variance.
Metrics for Aggregation Paths
If an evaluation tree is evaluated by multiple individuals, it is important to observe the level in the tree at which the aggregation of the single evaluations takes place. It is possible that the evaluations at the leaves are aggregated first; the total value is then established from the weighted and aggregated leaf values. It is also possible that the individual trees are evaluated first; the total value is then established from the individual total evaluations.
In an n-layered evaluation tree there are n
different approaches from which the aggregation of
individual evaluations can be chosen (aggregation
paths).
Various aggregation paths can deliver different total values, despite their identical data basis. This can happen through rounding errors, or because individual evaluations are weighted differently after the aggregation than during the evaluation in the individual trees (see the sketch below).
Metrics on aggregation paths can be applied to evaluations and weightings.
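A tiny worked example (hypothetical numbers) of two aggregation paths over the same data basis: aggregating the leaf evaluations and weights first yields a different total than aggregating the users' individual weighted totals, because the individual weightings differ.

    # Two users evaluate the same two leaves but weight them differently.
    q = {"user1": [5.0, 1.0], "user2": [1.0, 5.0]}
    p = {"user1": [0.9, 0.1], "user2": [0.1, 0.9]}

    # Path 1: aggregate the leaf evaluations first, then apply averaged weights.
    mean_q = [(q["user1"][i] + q["user2"][i]) / 2 for i in range(2)]   # [3.0, 3.0]
    mean_p = [(p["user1"][i] + p["user2"][i]) / 2 for i in range(2)]   # [0.5, 0.5]
    total_leaf_first = sum(w * v for w, v in zip(mean_p, mean_q))      # 3.0

    # Path 2: compute each user's weighted total first, then aggregate.
    totals = [sum(w * v for w, v in zip(p[u], q[u])) for u in q]       # [4.6, 4.6]
    total_user_first = sum(totals) / len(totals)                       # 4.6

    print(total_leaf_first, total_user_first)   # same data basis, different totals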
Metrics for Profitability
In particular, the costs for preserving an as-is state, for an upgrade or for a downgrade can be shown. The standard AHP already includes the calculation of profitability. Through collaborative evaluation and the explicit consideration of subsystems, further indicators become applicable in this context:
Individual approximation of costs
Collaborative approximation of costs
Variance of individual approximation of costs
Variance of aggregated approximation of costs
Variance in the set of individual approximation of
costs
Meta-variance of individual approximation of
costs
Meta-variance of aggregated approximation of
costs
Meta-variance in the set of individual
approximation of costs
Individual cost-weighted elasticity
Collaborative cost-weighted elasticity
Homogeneity test
Quantile comparison
Metrics for Temporal Change
Evaluated systems can change over time. The evaluation system can change. And the set of evaluators and their opinions can change over time as well. It is possible to represent the strength of this constancy with further indicators: change frequency and change regularity, the existence of tendencies, the range of fluctuation (min/max) and the variance. These six types of indicators are generally applicable to every indicator mentioned before. Variance and meta-variance can in turn be calculated for each of these six types of indicators.
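A minimal sketch of how such temporal indicators could be computed from a hypothetical history of one aggregated evaluation value; the chosen operationalisations (e.g. the slope of a linear fit as the tendency) are our own reading, not prescribed by the paper.

    import numpy as np

    # Hypothetical history of one aggregated evaluation value over six sessions.
    history = np.array([3.0, 3.2, 3.1, 3.6, 3.6, 4.0])

    changes = np.diff(history)                        # value change between sessions
    change_frequency = np.count_nonzero(changes) / len(changes)
    change_regularity = changes.std()                 # low spread -> regular changes
    tendency = np.polyfit(np.arange(len(history)), history, 1)[0]   # slope of a linear fit
    fluctuation_range = history.max() - history.min()
    variance = history.var()

    print(change_frequency, change_regularity, tendency, fluctuation_range, variance)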
5 THE METRIC COCKPIT
In the prior sections more than 100 different (meta-)
metrics were introduced. Every metric could be
extended by 18 further metrics if the temporal
progress is included. The complexity is huge and
cannot be handled pragmatically or intuitively in its
totality.
For example, what is the meta-variance of the variance of the temporal variance of the meta-variance of the variance of an individual weight? This bizarre and complicated phrase describes whether there are fluctuations of evaluation between fragments of branches of the evaluation tree and how much they differ within the evaluation system. In special situations this information may be helpful. Therefore a metric should not be dismissed as senseless as long as its senselessness has not been proven in general, which will hardly ever be possible. Thus, an approach has to be chosen for establishing those metrics whose changes in value have a significant and interpretable effect on the evaluation of an object. In order to identify those effects and the interdependences between them, it is indispensable to use a tool-based approach for real applications.
An appropriate metric-cockpit is currently being
developed. It is a web-based tool, which allows
distributed groups of experts to evaluate objects.
However, they evaluate only one system or state in
each particular session at a time. Results of
individual sessions will be merged. Collecting data
becomes a simple procedure that is done via the
intuitive user interface so that values and weights
can be easily assigned and metrics can be mapped.
The tool solves tasks like collaborative
evaluation and comprehensibly shows the
consequences of input values on new (meta-)
metrics. Interpretation of meta-metrics can possibly only be done context-specifically. Whether a high meta-variance is positive, negative or irrelevant should initially be interpreted by domain experts. A possibly existing systematics, depending on the intention of the evaluation, could then be derived.
evaluation system is based on a dynamic tree
structure. An important fundament is the
development of a dynamic data model which can
represent this tree structure. Additional experts can
create new branches for additional indicators or
system elements at any time. These dynamics become
more interesting when weights are redistributed
based on the already obtained values. Results can be
interpreted differently and feature a wide range of
calculation and analysis possibilities.
The set of evaluation patterns is very dynamic
and can be adjusted to several considerations.
Furthermore, the tool includes a management version for repeated evaluation sessions and allows observing evaluations over time. In addition, evaluation patterns once created can be reused and recombined. On the one hand, this tool addresses experts with the task of evaluating systems. On the other hand, academics have the possibility to test new indicators, to analyse the significance of a particular indicator with the help of its values, or to research the values themselves. Combined with the visual instruments of software maps (Lankes et al., 2005), the tool helps to ease the benchmarking of enterprise architectures or other visualised patterns. Interpretations are developed faster and can be illustrated objectively.
6 CONCLUSIONS
AND OUTLOOK
Aspects of collective evaluation are the complexity driver in the conceptual as well as the technical evaluation system. However, these aspects are becoming more and more important. Web-based communities are becoming ever more established, (distributed) group work has become common practice, the increasing complexity of decisions has become unmanageable for single users, the number of consensus-based decisions, for which no optimal solution exists, has been increasing, and electronic participation has been taking many robust forms.
Along with them, the responsibilities and tasks
that are to be collectively carried out also increase.
Collective evaluations belong to them. In order to
obtain trust in the evaluation data, the single metrics are needed. A single actor who does not know his collaboration partners or the overall characteristics of all participants can still gain an impression of the reliability of the result.
Meta-metrics are also an important instrument
for small communities in an organizational context.
They are useful for the documentation of a collective
expert decision and can disclose and quantify the
advantages and disadvantages in the evaluation.
They can also be used to identify improvement
potentials for future evaluation activities.
Evaluations are a form of specification of knowledge. They imply subjective value judgements based on expertise, insights and experiences. The interdependencies that exist during the evaluation and within evaluation systems are also a topic relevant for knowledge management.
REFERENCES
Bechmann A (1991) Bewertungsverfahren - der
handlungsbezogene Kern von Umweltverträglich-
keitsprüfungen. In: Hübler K-H, Otto-Zimmermann K
(Hrsg.) Bewertung der Umweltverträglichkeit -
Bewertungsmaßstäbe und Bewertungsverfahren für
die Umweltverträglichkeitsprüfung, 2. Auflage,
Eberhard Blottner Verlag, Taunusstein
Bechmann A (1998) Anforderungen an Bewertungs-
verfahren im Umweltmanagement -dargestellt am
Beispiel der Bewertung für die UVP. Bericht 20,
Institut für Synergetik und Ökologie (SYNÖK),
Barsinghausen
Chesbrough, H. W. (2003) Open Innovation: The new
imperative for creating and profiting from technology.
Boston: Harvard Business School Press.
Davenport, T., Prusak, L., 1998. Wenn Ihr Unternehmen
wüßte, was es alles weiß. Das Praxisbuch zum
Wissensmanagement. Moderne Industrie, Landsberg.
Globke W (2005) Software-Metriken. Moderne
Softwareentwicklung, Universität Karlsruhe,
Karlsruhe.
Gronau, N., (2009) Wissen prozessorientiert managen.
Methoden und Werkzeuge für die Nutzung des
Wettbewerbsfaktors Wissen in Unternehmen.
Oldenbourg, München.
Gronau, N. (2010) Potsdamer Wissensmanagement-
Modell. In Enzyklopädie der Wirtschaftsinformatik.
Oldenbourg, München, 4th edition. http://www.
enzyklopaedie-der-wirtschaftsinformatik.de (Abruf:
8.10.2010).
Hara, N. (2009) Communities of Practice. Fostering Peer-
to-Peer Learning and Informal Knowledge Sharing in
the Work Place. Springer Verlag, Berlin.
Iwin, A. A., (1975) Grundlage der Logik von Wertungen.
Akademie Verlag, Berlin.
Keilhau, W. (1923) Die Wertungslehre – Versuch einer
exakten Beschreibung der ökonomischen
Grundbeziehungen. Verlag Gustav Fischer, Jena.
Kirsch, W. (1997) Die Handhabung von
Entscheidungsproblemen : Einführung in d. Theorie
der Entscheidungsprozesse 5th ed., Herrsching.
Konda, S., Monarch, I., Sargent, P., Subrahmanian, E.
(1992) Shared memory in design: A unifying theme
for research and practice. Research in Engineering
Design, 4(1), p.23–42.
Lankes J, Matthes F, Wittenburg A (2005)
Softwarekartographie als Beitrag zum
Architekturmanagement. In: Aier S, Schönherr M:
Unternehmensarchitekturen und Systemintegration.
GITO-Verlag, Berlin
Lesser, E. L., Fontaine, M. A. (2004) Overcoming
Knowledge Barriers with Communities of Practice:
Lessons learned through practical experience. In:
Knowledge Networks – Innovation through
communities of practice, Idea Group Publishing,
London, pp. 14-23.
Mann H. B., Whitney D. R. (1947) On a Test of Whether
one of Two Random Variables is Stochastically Larger
than the Other, Ann. Math. Statist. Volume 18,
Number 1 1947, 50-60, Ohio State University,
Columbus
Meixner O, Haas R. (2008) Wissensmanagement und
Entscheidungsunterstützung. Eigenverlag Institut für
Marketing und Innovation, Univ. f. Bodenkultur Wien,
Wien
Nonaka, I., Takeuchi, H. (1995) The knowledge-creating
company: how Japanese companies create the
dynamics of innovation, Oxford University Press,
New York.
Saaty T. L. (2005) Theory and Applications of the
Analytic Network Process: Decision Making with
Benefits, Opportunities, Costs, and Risks. RWS
Publications, Pittsburgh
Surowiecki, J. (2004) The Wisdom of Crowds: Why the
Many Are Smarter Than the Few and How Collective
Wisdom Shapes Business, Economies, Societies and
Nations. Doubleday.
Von Hippel, E. (1994) “Sticky information” and the locus
of problem solving: Implications for innovation.
Management science, 40(4), p.429–439.
Wenger, E., McDermott, R., Snyder, W.M. (2002)
Cultivating Communities of Practice. Harvard
Business School Press. Boston, Massachusetts.