Decision Making with Clustered Majority Judgment
Emanuele D’ajello¹, Davide Formica², Elio Masciari¹, Gaia Mattia¹, Arianna Anniciello¹, Cristina Moscariello¹, Stefano Quintarelli¹ and Davide Zaccarella¹
¹University of Napoli Federico II, Napoli, Italy
²Copernicani, Milano, Italy
Keywords:
Decision Making, Social Choice, Cluster, Majority Judgement, K-Medoids.
Abstract:
In order to design a decision process that best represents the will of a group of people who express themselves about something, such as the election of a president or any other situation in which people judge more than two alternatives, this paper proposes the use of unsupervised learning techniques, in particular clustering, to extend the single-winner voting system Majority Judgement to a multi-winner system that aggregates the preferences of subsets of voters. After an introduction to Majority Judgement, the algorithm used for its clustered version is presented. Finally, a case study is reported to highlight the differences with respect to classic Majority Judgement, since the clustered variant can be preferable depending on the contingencies of a particular election, especially when there is a desire not to neglect minority groups sharing the same preferences.
1 INTRODUCTION
In this work we first describe the general behaviour and advantages of Majority Judgement.
The limits of this model are shown in the case of multi-winner elections, as it can lead to scenarios in which minorities, albeit numerous, are not adequately represented.
For this reason, our aim is to implement a clustered version of this algorithm in order to mitigate these disadvantages: it creates clusters taking into account the similarity between the expressed preferences and then, for each of the created groups, the Majority Judgement rule is applied to return a ranking over the set of candidates. These traits make the algorithm suitable for applications in the many areas of interest in which a decisional process is involved.
Different voting rules provide different results. Their use depends on the main characteristics we would like a decision process to have. For example, we could be more interested in avoiding tactical voting, while accepting some limits on how representative the judgements are. We want to explore an example of this trade-off and then describe a ’more inclusive’ strategy, using clustering applied to Majority Judgement. Consider three agents who express their binary judgement (”Yes” or ”No”) on four statements A, B, A ∧ B and A ↔ B, comparing the outcomes of two different rules. The premise-based rule first takes majority decisions on A and B and then infers conclusions on the other two propositions.
As shown in Table 1, the results are quite different depending on the rule used.
We now focus on the case of Agent 2: the outcome agrees with his judgement on only one of the propositions (A) and disagrees with it in all the other cases.
It becomes clear that Agent 2 could think about manipulating the outcome by pretending to disagree with A. As a consequence, the premise-based rule would return, as the final outcome of the three agents’ vote, a ”No” for both A ∧ B and A ↔ B, exactly as originally expressed by Agent 2.
Table 1: Three-agent case of voting.

              A     B     A ∧ B   A ↔ B
Agent 1       Yes   Yes   Yes     Yes
Agent 2       Yes   No    No      No
Agent 3       No    Yes   No      No
Premise rule  Yes   Yes   Yes     Yes
Majority      Yes   Yes   No      No
In this way, by voting strategically, Agent 2 could manipulate the final result. This is the major drawback of using the premise-based rule.
On the other hand, a paradoxical aspect emerges if we consider the majority rule: the outcomes on the last two propositions are inconsistent with the ”Yes” value assigned to both A and B.
This is known as the discursive dilemma, which deals
with the inconsistency problems arising in judgement aggregation based on the majority rule (G. Bellec, 2020).
Both the premise-based and the majority rule present drawbacks, but the latter has one important feature: it does not suffer from the deficiency shown by the former, so that, if an agent cares about the number of propositions agreeing with his own judgement, it is always in his best interest to report his true preference. For this reason we focus our attention on the majority rule as a transparent asset in decisional processes, while trying to deal with its intrinsic problems related to judgement aggregation (Kleinberg, 2002).
Our attempt is not aimed at solving the above-mentioned dilemma, but rather at joining a more refined majority rule (Majority Judgement) with the advantages of a clustering approach in aggregating similar patterns.
2 STRATEGIES OF DECISION
MAKING
2.1 Collective Decision Process and
Majority Judgement
Business meetings are often perceived as useless and unproductive. Moreover, strong difficulties arise when there is an important decision to take: as complexity and effort increase, it becomes more likely that ’clusters’ representing opposite opinions will form. More often than not, the final decision becomes the leader’s task. The biggest difficulty is deciding on the best alternative: in many contexts, more than maximizing the number of people who agree with the decision taken, it is about making all the different groups of people feel included in the decision process. This can be achieved both through leadership that makes subordinates feel important, and through an inclusive criterion, which we identify in this paper with the clustered version of Majority Judgement.
Social choice theory studies methods to consolidate the different views of many individuals into a single outcome. Its main applications are voting and jury decisions (Brandt et al., 2016).
During voting, electors in a democracy choose one candidate from a list of many, while in a jury decision the individual judges evaluate the competitors in a competition and rank them. Social choice theory’s fundamental problem is to find a social decision function that elaborates the preferences of judges or voters into a jury or electoral decision while adhering to the main principles of fair voting procedures, such as non-dictatorship, universality, and independence of irrelevant alternatives. Arrow’s impossibility theorem shows that the fundamental problem has no acceptable solution in the traditional model (Arrow, 2012). In (Serafini, 2019), the Condorcet and Borda methods and their limits, Arrow’s impossibility theorem and Majority Judgement are illustrated. The results of real elections have shown that voting systems can run into Arrow’s paradox. A famous example is the 2000 US presidential election. The presence of a minor candidate, Ralph Nader, who had no chance of winning, made Bush the winner in Gore’s place. Given the political positions of Nader and Gore, it is very likely that Nader’s votes would have gone to Gore had Nader not run, and it is also likely that Nader supporters preferred Gore to Bush. The American first-past-the-post electoral system, whereby only one vote can be cast and the candidate who gets the most votes wins, did not allow voters to fully express their preferences.
Majority Judgement (MJ) is a voting technique proposed in 2007 by two mathematicians, Michel Balinski and Rida Laraki, aiming to overcome the paradoxes and inconsistencies of traditional voting methods. In (Balinski and Laraki, 2007), Balinski and Laraki briefly describe MJ, moving from a social choice theory analysis which highlights the failures of traditional voting methods. Arrow’s impossibility theorem shows that the fundamental problem has no acceptable solution in the traditional model. Practice, however, suggests a different formulation of the input than the traditional preference ranking. Measuring combined with voting is used during sports competitions such as ice-skating or gymnastics, and in wine competitions.
Measuring occurs when a common language is defined, whether quantitative or qualitative. In this perspective, Arrow’s theorem can be interpreted as follows: in the absence of a common language, a coherent collective decision cannot be made. Hence the need for a voting method in which voters evaluate candidates in terms of a common language rather than simply ranking them. MJ makes this possible, since the method asks electors/judges to express a judgment on all the candidates/competitors using a known common language. Theorems and experiments confirm that, while no method can completely rule out strategic voting, Majority Judgement strongly resists manipulation. Balinski and Laraki present MJ as a method both for evaluating and for ranking competitors, candidates or alternatives. In (Balinski and Laraki, 2014), the authors underline that the assumption made by traditional methods, namely that electors actually form a personal ranking of the candidates, is false, and that this is the reason behind the inadequacy of traditional voting models. Forcing electors to rank candidates
leads to incoherence, impossibility and incompatibility. Balinski and Laraki (Balinski and Laraki, 2011) present the case of the French presidential elections of 2002 and the results of the MJ experiment conducted on the occasion of the French presidential elections of 2007. This case is a perfect example of Arrow’s paradox: the winner depends on the presence or absence of candidates, including those who have absolutely no chance of winning.
2.2 Social Theory’s Requirements
To introduce social choice theory formally, consider a simple decision problem: a collective choice between two alternatives. One classical approach involves imposing some ‘procedural’ requirements on the relationship between individual votes and social decisions, and showing that majority rule is the only aggregation rule satisfying them. May (May, 1952) (Caroprese and Zumpano, 2020) introduced four such requirements that the majority voting rule must satisfy:
Universal Domain: the domain of admissible inputs of the aggregation rule consists of all logically possible profiles of votes ⟨v_1, v_2, ..., v_n⟩, where each v_i ∈ {−1, 0, 1} (to cope with any level of ‘pluralism’ in its inputs);
Anonymity: applying any permutation to the individual votes does not affect the outcome (all voters are treated equally), i.e., for any admissible profile ⟨v_1, v_2, ..., v_n⟩ and any permutation ⟨w_1, w_2, ..., w_n⟩ of it,

f(v_1, v_2, ..., v_n) = f(w_1, w_2, ..., w_n)    (1)
Neutrality: each alternative has the same weight and, for any admissible profile ⟨v_1, v_2, ..., v_n⟩, if the votes for the two alternatives are reversed, the social decision is reversed too (all alternatives are treated equally), i.e.,

f(−v_1, −v_2, ..., −v_n) = −f(v_1, v_2, ..., v_n)    (2)
Positive Responsiveness: for any admissible profile ⟨v_1, v_2, ..., v_n⟩, if some voters change their votes in favour of one alternative (say the first) and all other votes remain the same, the social decision does not change in the opposite direction; if the social decision was a tie prior to the change, the tie is broken in the direction of the change, i.e., if w_i > v_i for some i and w_j = v_j for all other j, and f(v_1, v_2, ..., v_n) = 0 or 1, then f(w_1, w_2, ..., w_n) = 1.
A multi-winner election (V, C, F, k) is defined by a set of voters V expressing preferences over a set of candidates C, together with a voting rule F that returns a subset of k winning candidates. A voting rule can operate on different types of ordered preferences, even though the most common formulation refers to a prefixed linear order on the alternatives. In most cases, these are chosen a priori.
Formally, we denote the set of judgements expressed by the i-th voter as the preference profile P_i. Each profile contains information about the grades assigned by the voter to the candidates. The voting rule F associates with every profile P a non-empty subset of winning candidates.
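As a purely illustrative example (not taken from the paper), such a graded profile can be represented as a mapping from candidates to grades; the variable names and the 0-5 scale below are hypothetical.

# One voter's profile: a grade for every candidate, using a common language
# of grades shared by all voters (illustrative 0-5 scale, 5 = best grade).
profile_voter_1 = {"candidate_A": 5, "candidate_B": 2, "candidate_C": 3}
profile_voter_2 = {"candidate_A": 1, "candidate_B": 4, "candidate_C": 3}

# The input of the voting rule F is the list of all profiles P_1, ..., P_n;
# F maps it to a non-empty subset of winning candidates.
profiles = [profile_voter_1, profile_voter_2]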
In multi-winner elections, more precise requirements are needed than the ones stated in May’s theory (Fabre, 2018). Indeed:
Representation: for each subset of voters V_i ⊆ V with

|V_i| ≥ ⌊n/k⌋    (3)

at least one successful candidate is elected from that partition;
Proportionality: for each subset of voters V_i ⊆ V with

|V_i| ≥ ⌊n/k⌋    (4)

the number of elected candidates is proportional to the subset’s size.
An implicit assumption so far has been that preferences are ordinal and not interpersonally comparable: preference orderings contain no information about each individual’s strength of preference or about how to compare different individuals’ preferences with one another. Statements such as ‘Individual 1 prefers alternative x more than Individual 2 prefers alternative y’ or ‘Individual 1 prefers a switch from x to y more than Individual 2 prefers a switch from x* to y*’ are considered meaningless. In voting contexts, this assumption may be plausible, but in welfare-evaluation contexts, when a social planner seeks to rank different social alternatives in an order of social welfare, the use of richer information may be justified.
2.3 Single-winner Majority Judgement
In order to describe Majority Judgement, we need a table that collects the grades assigned to all the candidates C, one tuple of grades per candidate (Balinski, 2006). Supposing six possible grades, we may use the words: excellent, very good, good, discrete, bad, very bad. Each candidate is thus described by a bounded set of votes.
It is a single-winner system, whose winner is found by recursively comparing the median grades of the candidates: first, each candidate’s grades are ordered in columns from the highest to the lowest according to the order relation; then, in the middle column (the lower middle if the number of grades is even), the candidate whose row holds the highest grade is selected.
If there is a tie, the algorithm keeps discarding grades equal in value to the shared median until one of the tied candidates is found to have the highest median.
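A minimal sketch of this single-winner rule, under the assumption that grades are encoded as integers (higher is better), is shown below; the function name majority_judgement_winner and the example ballots are illustrative, not taken from the paper.

def majority_judgement_winner(ballots):
    """ballots: dict mapping each candidate to a list of integer grades
    (higher = better).  Returns the single MJ winner by repeatedly
    comparing (lower-)median grades and discarding shared medians on ties."""
    grades = {c: sorted(gs) for c, gs in ballots.items()}  # work on sorted copies
    candidates = list(grades)
    while True:
        # Lower-middle median of each candidate's remaining grades.
        medians = {c: grades[c][(len(grades[c]) - 1) // 2] for c in candidates}
        best = max(medians.values())
        tied = [c for c in candidates if medians[c] == best]
        if len(tied) == 1:
            return tied[0]
        # Tie: drop one occurrence of the shared median grade from each
        # tied candidate and compare the medians again.
        candidates = tied
        for c in candidates:
            grades[c].remove(best)
            if not grades[c]:           # no grades left: return arbitrarily
                return c

# Example on a 0-5 scale (5 = excellent ... 0 = very bad).
votes = {"Alice": [5, 3, 3, 2], "Bob": [4, 4, 3, 1]}
print(majority_judgement_winner(votes))   # -> "Bob"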
It is possible to generalize this system to a multi-winner strategy. The choice of a particular clustering technique in such a context is crucial, so we first describe the possible options and then motivate our choice, explaining how K-medoids works.
3 CLUSTERS
3.1 Categories of Clusters
We can state that the different types of clustering share the ability to divide data into groups with some common features. We can distinguish:
1. Connectivity Models: the distance between data points is computed and similarity is assessed according to it. Two approaches are equally valid: bottom-up, where each observation initially constitutes a group and pairs of clusters are then merged; and top-down, where all observations are initially included in one cluster, which is then split. This kind of model is not flexible, as there is no chance to modify a cluster once it has been created;
2. Distribution Models: in this case, probabilities are computed, referring to how likely it is that data belong to a particular distribution once the cluster is created. Applying distribution methods can be risky, as they are prone to overfit the data unless a precise constraint on complexity is given;
3. Density Models: areas of higher density are identified and local clusters are created there, while the remaining data can be grouped into arbitrarily shaped regions, with no assumption about the data distribution; thanks to their flexibility, these models handle noise better than approaches that organize data into a fixed, predefined structure.
Since we would like to build clusters that satisfy the requirements expressed before, based on a fairly fixed structure and with no assumption about the distribution followed by the data, it seems more appropriate to consider a different class of clustering algorithms, known as centroid models.
3.2 K-Medoids
For our goal, namely selecting winners from a group of candidates, K-medoids clustering is used, because the representative objects it considers are medoids, so the result always belongs to the group of candidates: each cluster is represented by its most centrally located object, which also makes the method less sensitive to outliers than K-means clustering. K-means is not the best model in our case, since, being an average-based rather than median-based method, it could return something that is not present in the candidate list. In fact, the medoid (unlike the centroid) is a data point, the one with the least total distance to the other members of its cluster (Fazzinga et al., 2013).
Another advantage of this choice is that the mean of the data points is a measure highly affected by extreme points; in the K-means algorithm, the centroid may therefore be shifted to a wrong position and result in incorrect clustering when the data contain outliers. On the contrary, the medoid used by K-medoids is the most central element of the cluster, the one whose total distance from the other points is minimal. Thus, the K-medoids algorithm is more robust to outliers and noise than the K-means algorithm (Ceci et al., 2015).
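As a toy numerical illustration of this robustness argument (illustrative values, not from the paper):

import numpy as np

points = np.array([1.0, 2.0, 3.0, 100.0])           # one extreme outlier
print(points.mean())                                 # 26.5: the mean is dragged towards the outlier
# The medoid is the actual data point with the least total distance to the others:
total_dist = np.abs(points[:, None] - points[None, :]).sum(axis=1)
print(points[total_dist.argmin()])                   # 2.0: stays among the typical points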
The K-medoids algorithm we used is part of the Python sklearn ecosystem (Pedregosa et al., 2011), which is oriented to machine learning. It supports partitioning around medoids (PAM) (Leonard Kaufman, 2015), proposed by Kaufman and Rousseeuw (1990). The workflow of PAM is described below (Hae-Sang Park, 2008).
The PAM procedure consists of two phases, BUILD and SWAP:
In the BUILD phase, a primary clustering is performed, during which k objects are successively selected as medoids.
The SWAP phase is an iterative process in which the algorithm attempts to improve some of the medoids. At each iteration, a pair (medoid, non-medoid) is selected such that replacing the medoid with the non-medoid object gives the best value of the objective function (the sum of the distances from each object to the nearest medoid). The procedure of changing the set of medoids is repeated as long as the value of the objective function can be improved.
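As a usage sketch, the snippet below clusters graded ballots with a PAM-style K-medoids estimator. It assumes the scikit-learn-extra package, which provides a KMedoids estimator following the scikit-learn API; the ballot matrix and parameter values are illustrative.

import numpy as np
from sklearn_extra.cluster import KMedoids   # assumed dependency (scikit-learn-extra)

# Each row is one voter's ballot: a numeric grade for every candidate.
ballots = np.array([
    [5, 1, 3],
    [4, 0, 3],
    [1, 5, 2],
    [0, 4, 2],
])

k = 2   # number of winners, hence the (maximum) number of clusters
km = KMedoids(n_clusters=k, metric="euclidean", method="pam", random_state=0)
labels = km.fit_predict(ballots)

print(labels)               # cluster assignment of each voter
print(km.medoid_indices_)   # indices of the ballots chosen as medoids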
Suppose that n objects, each having p variables, should be grouped into k (k < n) clusters, where k is known. Let us define the j-th variable of object i as X_ij (i = 1, ..., n; j = 1, ..., p). The Euclidean distance is used as the dissimilarity measure; between object i and object j it is defined by

d_ij = √( Σ_{a=1}^{p} (X_ia − X_ja)² )    (5)

where i and j range from 1 to n. The medoids are selected in the following way:
calculate the Euclidean distance between every
pair of all objects;
calculate v_j = Σ_{i=1}^{n} ( d_ij / Σ_{l=1}^{n} d_il ) for each object j;
sort all v_j, j = 1, ..., n, in ascending order and select the k objects with the smallest values as the initial medoids;
assign each object to the nearest medoid to obtain the initial clustering result;
calculate the sum of distances from all objects to
their medoids;
update the current medoid of each cluster, replacing it with the object of the cluster that minimizes the total distance to the other objects in its cluster;
assign each object to the nearest medoid and ob-
tain the cluster result;
calculate the sum of the distances from all objects to their medoids; if this sum is equal to the previous one, stop the algorithm; otherwise, go back to the update step.
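The steps above can be condensed into the following NumPy sketch; it is an illustrative implementation of the described procedure (function and variable names are hypothetical), not the library routine actually used.

import numpy as np

def park_jun_kmedoids(X, k, max_iter=100):
    """K-medoids clustering following the steps described above.
    X: (n, p) data matrix; k: number of clusters.
    Returns (medoid_indices, labels)."""
    n = X.shape[0]
    # Euclidean distance between every pair of objects (equation (5)).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    # v_j = sum_i d_ij / sum_l d_il; the k smallest v_j give the initial medoids.
    v = (d / d.sum(axis=1, keepdims=True)).sum(axis=0)
    medoids = np.argsort(v)[:k]
    prev_cost = np.inf
    for _ in range(max_iter):
        # Assign each object to its nearest medoid.
        labels = np.argmin(d[:, medoids], axis=1)
        # Update: in each cluster, pick the object minimizing the total distance
        # to the other objects of that cluster.
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                within = d[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[np.argmin(within)]
        labels = np.argmin(d[:, medoids], axis=1)
        cost = d[np.arange(n), medoids[labels]].sum()
        if cost == prev_cost:     # no further improvement: stop
            break
        prev_cost = cost
    return medoids, labels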
In our case, prior knowledge about the number of winners is required, and the identified clusters are constrained to a minimum size, namely the number of voters divided by the number of candidates to be elected (n/k).
3.3 Clustered Majority Judgement
For each cluster, Majority Judgement is applied and a final ranking of candidates is returned (Andrea Loreggia, 2020). Given k, the number of candidates to be elected, the algorithm seeks the optimal number of clusters to create.
This number ranges from 1 to k and has to satisfy an important additional requirement: once a number of clusters has been selected, if a tie occurs and therefore k' seats are left vacant, the algorithm is repeated k' times until the tie is broken. If the tie cannot be broken, the fixed number of clusters is changed.
In order to explain how the algorithm deals with the polarization problem, the most relevant steps are described in pseudocode and in annotated strides:
1. set the number of winners as the maximum number of clusters;
2. clusters are created, decreasing the maximum number of clusters until the optimal number is reached. This number is bounded by the size of the clusters, which must satisfy the following proportion: number of voters : number of winners = number of voters in one cluster : one winner;
3. the function winners calculates the median for every created cluster;
4. check that the winners coming from the clusters are all different from each other; if this is not the case (condition = "ko" in the pseudocode), the algorithm goes back to step 2 with a maximum number of clusters equal to the number of vacant seats, and the procedure is repeated until all seats have been filled.
Algorithm 1.
Require: k ≥ 0
Ensure: n_winners = (n_1, ..., n_k), k > 1
  k ← number_winners
  max_cluster ← k
  condition ← "ko"
  while condition = "ko" do
    cluster_list ← cluster(vote_list)
    for all cluster in cluster_list do
      winners_per_cluster ← compute_winners(cluster)
      all_winners ← list_of_all_winners(winners_per_cluster)
    end for
    list_winner_distinct ← list_of_all_distinct_winners(all_winners)
    option_remaining ← number_winners − len(list_winner_distinct)
    if option_remaining = 0 then
      condition ← "ok"
    else
      k ← option_remaining
      condition ← "ko"
    end if
  end while
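Putting the pieces together, the following Python sketch mirrors the pseudocode above, reusing the illustrative helpers park_jun_kmedoids and majority_judgement_winner defined in the earlier examples; it is a hedged reconstruction, not the authors' implementation.

import numpy as np

def clustered_majority_judgement(ballots, candidates, k):
    """ballots: (n_voters, n_candidates) matrix of integer grades.
    candidates: list of candidate names.  k: number of winners.
    Returns a set of (up to) k distinct winners, one per cluster when possible."""
    winners, seats_left = set(), k
    while seats_left > 0:
        _, labels = park_jun_kmedoids(ballots, seats_left)
        new_winners = set()
        for c in range(seats_left):
            cluster_ballots = ballots[labels == c]
            if cluster_ballots.size == 0:
                continue
            per_candidate = {cand: list(cluster_ballots[:, i])
                             for i, cand in enumerate(candidates)}
            new_winners.add(majority_judgement_winner(per_candidate))
        new_winners -= winners          # keep only candidates not already elected
        winners |= new_winners
        if not new_winners:             # no progress: stop instead of looping forever
            break
        seats_left = k - len(winners)   # re-cluster over the vacant seats, if any
    return winners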
3.4 Case Study: Using Clustered
Majority Judgement to Maximize
Agreement
In this section, we describe an interesting comparison between Majority Judgement (MJ) and Clustered Majority Judgement (CMJ).
In order to test our algorithm, we asked a group of people about their food preferences. If a presented dish is not considered at least acceptable by the participants, it is discarded, so the aim is to maximize the number of non-discarded dishes, with two common choices for all the participants. The input grades of the Clustered Majority Judgement test are Excellent, Very Good, Good, Acceptable, Poor, To Reject and No Opinion, and the number of winners is set a priori to 2. 63 voters took part in this experiment and the algorithm formed two clusters, exactly as many as the number of winners.
Table 2: CMJ results.

Cluster     Cluster size   Winner
Cluster 1   33             Bovine meat
Cluster 2   30             Tuna

Table 3: Top 2 of single-winner Majority Judgement applied to all voters.

MJ ranking   Candidate
1            Bovine meat
2            Chicken
We can compare the CMJ results with the single-winner MJ ranking by looking at Table 3 and Table 2.
The expressed judgements are very polarizing and the two formed clusters appear to be in opposition to each other, since the dishes most preferred by one are the most negatively judged by the other. For this reason we notice, both for Majority Judgement and for Clustered Majority Judgement, the tendency to avoid the favourite dishes and to focus on the moderate ones.
In the case of Majority Judgement, the solution is Bovine meat and Chicken, where both alternatives are considered not acceptable by cluster 2; for this reason, 29 dishes would be discarded. With the clustered approach, the solution takes cluster 2's preferences into account and yields a lower number of discarded dishes.
4 CONCLUSIONS
In Section 1, we dealt with the logical issues involved in voting rules and judgement aggregation, highlighting the majority rule's resistance to strategic voting.
In Section 2, a more refined model of the majority rule, Majority Judgement, has been presented as an option to better estimate the most shared candidate.
In Section 3, the possible categories of clustering approaches have been reported in order to choose the fittest one for our generalization of Majority Judgement as a multi-winner strategy. After that, a case study has been reported, with particular attention to the comparison between the MJ and CMJ results.
As shown, CMJ represents the optimal compromise in the case of polarized groups (clusters); in such situations, this method can be the preferable choice in order to take minority judgements into account.
In spite of the non-deterministic nature of K-medoids, Clustered Majority Judgement is intended to be used in highly populated disputes. For these reasons, we feel confident about clustering's ability to take into account all the different perspectives that may appear in such situations.
Moreover, our implementation is not strictly linked to the political field, as is clearly shown in the case studies (except the first one), mostly because it requires only a few fixed parameters: the number of winners, the number of grades and the grades themselves.
An important future challenge could be speeding up the algorithm or making its structure more flexible, even though all the constraints already explained in the previous sections still need to be satisfied.
REFERENCES
Andrea Loreggia, Nicholas Mattei, S. Q. (2020). Artificial
intelligence research for fighting. In Political Polari-
sation: A Research Agenda.
Arrow, K. J. (2012). Social choice and individual values.
Yale university press.
Balinski, M. (2006). Fair majority voting (or how to elimi-
nate gerrymandering). In The American Mathematical
Monthly, Vol. 115, No. 2. Mathematical Association of
America.
Balinski, M. and Laraki, R. (2007). A theory of measuring,
electing, and ranking. National Acad Sciences.
Balinski, M. and Laraki, R. (2011). Election by major-
ity judgment: experimental evidence. In In Situ and
Laboratory Experiments on Electoral Law Reform.
Springer.
Balinski, M. and Laraki, R. (2014). Judge: Don’t vote!
volume 62, pages 483–511. INFORMS.
Brandt, F., Conitzer, V., Endriss, U., Lang, J., and Procac-
cia, A. D. (2016). Handbook of computational social
choice. Cambridge University Press.
Caroprese, L. and Zumpano, E. (2020). Declarative seman-
tics for P2P data management system. J. Data Se-
mant., 9(4):101–122.
Ceci, M., Corizzo, R., Fumarola, F., Ianni, M., Malerba,
D., Maria, G., Masciari, E., Oliverio, M., and
Rashkovska, A. (2015). Big data techniques for
supporting accurate predictions of energy production
from renewable sources. In Desai, B. C. and Toyama,
M., editors, Proceedings of the 19th International
Database Engineering & Applications Symposium,
Yokohama, Japan, July 13-15, 2015, pages 62–71.
ACM.
Fabre, A. (2018). Tie-breaking the highest median: Alter-
natives to the majority judgment. In Paris School of
Economics.
Fazzinga, B., Flesca, S., Furfaro, F., and Masciari, E.
(2013). Rfid-data compression for supporting aggre-
gate queries. ACM Trans. Database Syst., 38(2):11.
G. Bellec, F. Scherr, A. S. (2020). A solution to the learning
dilemma for recurrent networks of spiking neurons. In
Nat Commun.
Hae-Sang Park, C.-H. J. (2008). A simple and fast algorithm
for k-medoids clustering. In POSTECH. Elsevier.
Kleinberg, J. (2002). An impossibility theorem for cluster-
ing. In Cornell University.
Leonard Kaufman, P. J. R. (2015). Partitioning around
medoids. In Finding Groups in Data: An Introduc-
tion to Cluster Analysis. John Wiley & Sons.
May, K. O. (1952). A set of independent necessary and sufficient conditions for simple majority decision. In Carleton College.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P.,
Weiss, R., Dubourg, V., Vanderplas, J., Passos, A.,
Cournapeau, D., Brucher, M., Perrot, M., and Duch-
esnay, E. (2011). Scikit-learn: Machine learning in
Python. In Journal of Machine Learning Research.
Serafini, P. (2019). La matematica in soccorso della democrazia: Cosa significa votare e come si può migliorare il voto. Independently Published.