Surveillance based Persuasion: The Good, the Bad and the Ugly
Sanju Ahuja (https://orcid.org/0000-0002-5178-5077) and Jyoti Kumar (https://orcid.org/0000-0002-3810-3262)
Department of Design, Indian Institute of Technology Delhi, Hauz Khas, New Delhi, India
Keywords: Persuasive Technology, Surveillance, Ethics, Freedom, Autonomy, Policy.
Abstract: Surveillance-based persuasive technologies have become ubiquitous in the form of fitness trackers,
advertisement engines, recommendation systems and birthday reminder applications. They are also being
integrated into socio-economic systems such as insurance, health and education. In reported literature,
surveillance has raised significant ethical concerns about privacy, and the persuasive intentions of technology
have come under scrutiny for undermining human autonomy. This paper discusses the ethical implications of
persuasive technologies from the perspective of human autonomy and freedom. It begins by acknowledging
the reported and possible future advantages of surveillance-based persuasive technologies, with an emphasis
on the conditions which make them beneficial (the good). It then discusses the ethical trade-offs involved and
the problems with how those trade-offs are designed and implemented in technology (the bad). Lastly, the
paper discusses severe ethical concerns which involve coercion or manipulation of users into being persuaded
for economic or even paternalistic needs of the technology (the ugly). This paper has argued for designers and
businesses to employ an ethical approach to persuasive technology design and has presented possible
suggestions for such an approach. These suggestions can help design technologies in a manner more
conducive to autonomous decision making and freedom of choice for the users.
1 INTRODUCTION
Persuasive technology refers to technologies
explicitly designed to influence attitudes or behaviour
of their users (Fogg, 2003). The role of computers as
persuasive agents is still in a nascent stage of
understanding, but certain differences between
humans and computers as persuaders are noteworthy,
especially for their ethical implications. Because of
their novelty, positive reputation and lack of emotion,
computers can be especially persuasive without
shouldering any responsibility (Fogg, 2003). At the
same time, they can be persistent in their attempts to
persuade and they can control the interactive
possibilities, making some actions easier and others
impossible. In recent decades, surveillance has
become an integral aspect of many persuasive
technologies (Desclaux et al., 2019; Hadjimatheou,
2017; Orji & Moffatt, 2018). Surveillance has been
defined as “any collection and processing of personal
data, whether identifiable or not, for the purposes of
influencing or managing those whose data have been
garnered” (Lyon, 2001). While surveillance by itself
raises significant individual and collective privacy
issues (Bernal, 2016), the use of surveillance systems
for influencing or managing people raises concerns
about human freedom and autonomy in the
information age (Nagenborg, 2014). In the last few
decades, surveillance technologies have become
cheaper and persuasive technologies have become
readily available in the market. Surveillance-based
persuasion has been normalized by being positioned
in the market as a solution to humanity’s mental
shortcomings, both for individual users who
voluntarily adopt these technologies as well as for
corporations and governments seeking compliance
from their users, employees, clients and citizens
(Lemieux, 2018; Zuboff, 2019).
This paper aims to argue that surveillance-based
persuasive technologies create serious ethical
concerns that have not been adequately addressed
through technology design or policy. The act of
persuasion, even with the purest of intentions, has
always been in tension with the values of autonomy
and freedom (Maclean, 2006; Spahn, 2012; Strauss,
2018). These concerns are aggravated when
persuasive technologies are designed not with the sole
intention of beneficence but are intermingled with
economic motives. This paper discusses the ethical
implications of surveillance-based persuasion for
human autonomy and freedom, and outlines the
aspects of persuasion that may not be ethically
acceptable because of their excessively coercive,
manipulative or paternalistic nature (Susser et al.,
2019). The paper begins by acknowledging the
reported and possible future advantages of
surveillance-based persuasive technologies, with an
emphasis on the conditions which make them
beneficial (the good). It then discusses the ethical
trade-offs involved and the problems with how those
trade-offs are designed and implemented in
technology (the bad). Lastly, the paper discusses
severe ethical concerns which involve coercion or
manipulation of users into being persuaded for
economic or even paternalistic needs of the
technology (the ugly). Through these arguments, this
paper aims to promote an ethical approach in the
design of persuasive technologies, especially those that rely on surveillance, which intrinsically violates users' privacy. Implications for economic
arrangements mediated by surveillance are also
discussed.
2 AUTONOMY AND FREEDOM
In philosophical literature, personal or individual
autonomy refers to an individual’s psychological
capacity to be self-governing (Buss & Westlund,
2018). Autonomy is thought to consist of two aspects:
authenticity and competence (Friedrich et al., 2018;
Levy, 2007). Authenticity roughly consists in one’s
actions being true to oneself, rather than being
influenced by social roles and conventional ways of
living (Levy, 2007). Competence refers to the mental
ability or the capacity to critically reflect on the
mental states that underlie one’s actions. Accounts of
personal autonomy in philosophy evoke competence
in multiple ways (Buss & Westlund, 2018): as the
ability to be sensitive to external reasons and as the
ability to reason about one’s beliefs and desires.
Friedrich et al. (2018) proposed a three-component
account of competence: 1) ability to use information
and knowledge to produce reasons, 2) ability to
ensure that intended actions are realized effectively
(control), and 3) ability to enact intentions within
concrete relationships and contexts.
Freedom consists in the availability of choices of
action that an individual has. It has the function of
keeping doors ‘open’ and it consists in having options
that the individual may not ultimately choose but
whose existence is still valued (Nagenborg, 2014).
3 THE GOOD
One argument in favour of persuasive technologies is that they can be designed with beneficial intentions, an argument frequently made in reference to nudges designed for the public good (Thaler & Sunstein, 2008). Persuasive technologies can be
valuable when intended to help overcome human
irrationality, which leads to systematic errors in
judgment (Tversky & Kahneman, 1974). People who
are cognizant of their own biases can choose to be
voluntarily persuaded, even when the persuasive
methods undermine their autonomy through
‘trickery’. For example, mobile applications that are
designed to help users quit smoking or alcohol track
their behaviour and utilize persuasive techniques
from psychology to modify it (Bascur et al., 2018;
Nagenborg, 2014). If the users endorse the methods
and outcomes of such technologies, their overall
autonomy to pursue their authentic goals is enhanced.
This is possible even when the technology functions
by reducing the freedom of users. For example,
certain smartphone applications can be designed to
lock users out of their own phones after a pre-
specified amount of phone usage (e.g., https://www.inc.com/jeremy-goldman/6-apps-to-stop-your-smartphone-addiction.html). This enables the
users to manage their device addiction, enhancing
their autonomy by restricting their freedom.
Another autonomy enhancing aspect of
surveillance technologies may lie in their ability to
provide more self-relevant information to users than
they would generally have access to (Friedrich et al.,
2018). For example, a fitness band may provide users
with information about their body and their
behaviour, like the number of steps they walk daily,
their heart rate, etc. The mere availability of such
information may enhance their autonomy by helping
them make better informed decisions about their
fitness. However, this argument is valid only when
the information is comprehensible, and it does not
confuse or debilitate the user, in which case it may
actually undermine their autonomy (Friedrich et al.,
2018).
Lastly, there may be situations or scenarios in
which even coercion may be deemed ethically
appropriate, subject to certain conditions. There are
vehicles which are programmed not to start, or to sound a loud, annoying alarm, unless the driver is wearing a seatbelt. In the wake of the recent spread of COVID-19,
cab companies have built facial monitoring systems
to ensure that drivers wear face masks (https://techcrunch.com/2020/05/07/uber-may-use-its-selfie-tech-to-verify-drivers-are-wearing-masks/), and the same could be monitored, say, at the beginning of every ride. In such contexts, these surveillance systems do not merely remain persuasive; they become coercive.
However, they are designed to function for the
collective good by restricting individual freedoms. It
is argued here that such designs need democratic
procedures of acceptance to prevent them from
becoming overly paternalistic or excessively punitive
(Verbeek, 2009).
4 THE BAD
In the last three decades, as the information economy has evolved, persuasive technologies have
become integrated into several economic products
and services, such as social media, e-commerce and
financial systems. There is an inherent conflict of
interest between monetization and beneficence. This
conflict has led to private companies making implicit
value and outcome trade-offs on behalf of the users,
without their awareness or informed consent. Even
when users endorse the intended functional outcome
of a technology, they may not be aware of the costs,
undermining their autonomy, or their decision to
adopt the technology may be reflective of a lack of
freedom because of a lack of acceptable alternatives
for the services that they seek.
Consider again the case of a fitness band, which can enhance users' autonomy by providing them access to information they would otherwise not have. A user
may voluntarily consent to the capture of data about
their walking steps or heart rate to persuade themselves
into following a fitness regimen. However, if a user is
not aware that the product aggregates their data to
target them with advertisements, or that the company
which makes the fitness band engages in the
commercial trade of their data, they lack the relevant
information to make an informed choice about the
adoption of the fitness band. Such deliberate or inadvertent omission of information about the value trade-offs involved in the design of persuasive
technologies undermines user autonomy.
Nagenborg (2014) discusses the trade-offs
between macrosuasion and microsuasion. He argues
that even when users consent to the ‘macro’ or the
functional aspects of persuasive technologies, their
autonomy may be undermined by the ‘micro’ aspects.
For example, a technology may be designed to make
users more dependent on the technology for achieving
their goal, rather than enabling them to achieve it by
themselves. This conflict between the endorsed
functionality and possibly unethical microsuasive
elements has also been demonstrated by Rughinis et al. (2015) for smoking cessation applications. In some
technologies, this conflict may be a side effect or an
unintended consequence of the functionality.
However, it may also be intentional on the part of the
persuading agents, because of the blurred boundaries
between economic motives and motives of
beneficence.
Consider the example of birthday reminder
applications. These applications collect the birthday information of users' contacts in order to send users reminders. Users may voluntarily adopt such a
technology to compensate for their forgetfulness or
their busy schedule. However, these applications do
not explicitly verbalize the value trade-offs involved.
In a social relationship, remembering a birthday is
normally considered a signifier of personal
importance or worth, not a task to be completed. The
effort that goes into remembering a birthday is what
might make the exercise valuable in the first place,
not the actual accomplishment. Therefore, even if this
form of surveillance-based persuasion can achieve a
desirable outcome, the outcome loses its intrinsic value because of the technology. The technology itself
might have persuasive functionalities other than the
birthday reminders. It may persuade users to purchase
a gift from a specific website in exchange for a
portion of the profits. This functionality may be
integral for the economic survival of the company
which makes the application, but it is not a
functionality that the user has consented to.
There are many such examples of technologies
which misrepresent or completely omit information
about the privacy, value and outcome trade-offs
involved in surveillance-based persuasion during the
process of technology adoption. The conflict between
the designers’ economic incentives and the users’
authentic goals has led to the creation of many
technologies which have been normalized into
acceptance without the informed consent of the user.
The evolving information economy has also made
these value trade-offs impossible to avoid. Most
technologies available in the market require users to
make value trade-offs even if they explicitly wish to
avoid them, restricting their freedom of choice.
5 THE UGLY
The ugly side of surveillance-based persuasion
involves technologies that the users are not aware of,
whose primary intentions they do not endorse, or that
they have been coerced into adopting. Severe
negative effects on individuals or society also
constitute the ugly side of these technologies. These
conditions can be illustrated with the following
examples, along with how they impact human
freedom, autonomy and dignity.
When news reports surfaced about the Facebook
advertisement engine being used to influence the
2016 presidential elections in the United States (https://www.theguardian.com/news/series/cambridge-analytica-files), they brought the harms of surveillance-based persuasion to the forefront of public discourse. The personal data of
voters was allegedly used to build their psychological
profiles and target their vulnerabilities to influence
their voting behaviour through advertisements
sabotaging a particular candidate. The users were
unaware of this intervention, and it is reasonable to
assume that they would not reflectively endorse its
intentions. This case became the focal point of public
discourse on surveillance-based persuasion, not least
because the effects of this technology were analysed
to be severely damaging for election integrity, which
lies at the heart of the democratic process. This kind
of stealth deployment of technology without user
awareness and endorsement seriously undermines
human autonomy (Burkell & Regan, 2019).
As discussed in Section 4, the boundaries between
motives of monetization and beneficence are blurring.
When economic motives are taken to the extreme, it
is possible for beneficence to be treated as a fortunate
by-product of a technology, not the primary intended
outcome. With the rise of pervasive computing, it has
become extremely easy to bundle persuasive
technologies that have negative consequences with
valuable products and services, bypassing awareness
and consent altogether. Pervasive computing has
normalized surveillance-based persuasion as a part of
digital economic activity. There is a thin line between what constitutes the service of a product (which the user has consented to) and what constitutes persuasion that the user has grudgingly accepted to live with. For
example, the primary revenue sources of many digital
products such as social media, video streaming, e-
commerce and search engines are surveillance-based
persuasive technologies (advertisement and
recommendation engines). There are few or no mechanisms to bypass this surveillance, even if the user is willing to pay the cost of the service, restricting their freedom of choice. Avoiding these services
altogether may impose significant social costs on
users. In these and similar cases, surveillance-based
persuasion is not a business model, it is the business
model.
The monetization of persuasive technologies has
begun to enter the dangerous territory of coercive
adoption, in which individuals have no choice but to
accept these technologies as a part of significant
social systems such as employment, banking,
insurance, health and education (Timmer et al., 2015).
Reconsider the case of a fitness band or an application
which monitors the number of steps a user takes,
motivating them to complete their voluntarily pre-set
daily exercise goals. Imagine that this data, along
with some other predetermined metrics for positive
fitness behaviour, is linked to health insurance
premiums. This economic arrangement is initially popularized into acceptance by claiming that the system incentivizes customers who put effort into their fitness. This technological application, which
has begun to be implemented by some companies (https://www.theverge.com/2018/9/26/17905390/john-hancock-life-insurance-fitness-tracker-wearables-science-health),
has significant problems that remain unaddressed by
design or policy. It is not difficult to understand how
a design which incentivizes a specific fitness goal
punishes users who do not wish to live by that goal,
or simply do not wish to externalize their motivation.
Fitness is, for a significant number of people, a
personal endeavour for wellness which they may not
wish to monetize or even capture. So, if a person
refuses to share their data with their insurance
company, they are punished not for a lack of
motivation towards their health and fitness, but for a
lack of a desire to capture and share it. Such an
economic arrangement also threatens to push society towards a paternalistic template for what constitutes healthy behaviour, reducing the incentives for non-quantifiable or unmonitored fitness behaviours.
This template may be insensitive towards
demographic factors or user vulnerabilities, such as
gender, race, occupation or literacy (Jacobs, 2019). It
may not have enough room for individual
manoeuvres in terms of letting people decide what
constitutes fitness for their own bodies. In short, this
economic arrangement is coercive by design.
These coercive technologies mediating
arrangements between users and private companies
(and even governments) are being widely developed
in multiple industries. Theoretically, vehicle drivers could be coercively monitored if they wish to purchase vehicle insurance; universities could make it mandatory for students to be monitored for their health through physiological sensors; employers could deploy eye and gait recognition cameras in the workplace to monitor employees' laziness or distracted behaviour and link it with their salaries; medical insurance could require patients to track their medicine intake through medication adherence systems (Jacobs, 2019; Lupton, 2012); and governments could track citizen behaviour through facial recognition systems. As of now, even though
these technologies are being marketed as voluntary, it is not difficult to imagine them becoming mandatory, changing the social and economic
order to be more conducive towards compliance and
less towards freedom, even if it is the freedom to be
wrong, lazy, distracted or unhealthy.
6 DISCUSSION
In the previous sections, the paper discussed the
positive and negative aspects of surveillance-based
persuasive technologies. Persuasive technologies can
prove to be beneficial, especially when they are
voluntarily adopted by users to pursue their authentic
goals. However, they can be ethically problematic
when they undermine human autonomy and freedom
through misrepresentation or omission of value trade-
offs, or when they are coercive, excessively
paternalistic and punitive. In this section, the authors put forward certain suggestions which have emerged from the above discussions to make the design of
persuasive technologies conducive to autonomous
decision making and freedom of choice for the users.
Some of these suggestions have also emerged from
previous literature, such as in the discussion by
Jacobs (2019) on the ethical design of persuasive
technologies for vulnerable populations. These
suggestions support the authors’ argument for the
need of an ethical approach to the design of
persuasive technologies. They are meant to aid the
design process with explicit consideration of the
ethical factors which pertain to human autonomy.
6.1 Surveillance Awareness
Users are often not aware that they are in a persuasive
digital environment in which their behaviour is under
surveillance. This lack of awareness is widespread on the web, partly because persuasive environments offer no information, cues or indicators to communicate this knowledge explicitly. Instead, this information is typically
embedded in the long, vague legalese in terms and
conditions of service, which most users do not read
(Obar & Oeldorf-Hirsch, 2020). There is an ethical
obligation for persuasive surveillance technologies to
create an awareness of their existence in digital and
smart environments. Failure to do so undermines
users’ autonomy by depriving them of the
information required to give consent for any
persuasive interactions.
6.2 Minimal Surveillance
From the perspective of surveillance ethics, a
straightforward suggestion that emerges is that of
minimal surveillance. In privacy literature, this is
known as the data minimization principle
(Alshammari & Simpson, 2017), which proposes to
minimize the types of data that a technology can
collect based on its purpose or function. Persuasive
technologies should ethically collect only as much
data as is required to achieve the objectives that the
user has voluntarily endorsed. When users adopt persuasive technologies, it is typically not in their capacity to analyse whether all the data that the technology claims to collect for an objective is indeed required from a technological perspective. Users tend to voluntarily endorse the intentions of technology, not the individual parameters of surveillance. With the
complexity of technologies increasing drastically, it
should not even be expected of the users to
understand this complexity. However, it is reasonable
to assume that the users would not consent to the
surveillance of information that does not contribute to
the fulfilment of their endorsed objectives. Therefore,
the data minimization principle is relevant to users’
autonomy in the adoption of persuasive technologies.
For technologies which collect additional data for the
purposes of system monetization, this distinction
needs to be made clear during the process of
technology adoption, as discussed further in
Section 6.3.
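To make this principle concrete, a minimal sketch of how data minimization might be operationalized in software is given below. It is only an illustration under assumed requirements: the objective names, field mappings and function names are hypothetical and not drawn from any specific product or API.

    # Illustrative sketch of the data minimization principle.
    # All objective names and fields below are hypothetical.
    REQUIRED_FIELDS = {
        "fitness_feedback": {"step_count", "heart_rate"},
        "sleep_tracking": {"sleep_duration"},
    }

    def collect(raw_record, endorsed_objectives):
        """Keep only the data fields required by objectives the user has endorsed."""
        allowed = set()
        for objective in endorsed_objectives:
            allowed |= REQUIRED_FIELDS.get(objective, set())
        return {field: value for field, value in raw_record.items() if field in allowed}

    # The user endorsed only fitness feedback, so the location field is discarded.
    collect({"step_count": 4200, "heart_rate": 72, "location": "28.54, 77.19"},
            ["fitness_feedback"])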
6.3 Intention Disclosure
While users may consent to surveillance in exchange
for a free service, they may not always be aware or
knowledgeable of the intentions behind such
surveillance. Revisiting the case of the United States
elections discussed in Section 5, at least some users
might have been aware of Facebook’s surveillance
practices before the election scandal news reports
surfaced. However, despite their consent to surveillance in exchange for a free account on Facebook, they had not consented to their data being used to target them as voters. In privacy literature, this
is covered under the ‘purpose limitation’ principle
(Forgó et al., 2017), which states that users’ data
cannot be processed for purposes beyond those
disclosed at the time of data collection. It is argued
here that the purpose limitation principle also holds
relevance for users’ autonomy because it
encompasses the element of intention disclosure,
which constitutes any user’s voluntary consent to
being persuaded.
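As an illustration of how purpose limitation differs from data minimization, the hypothetical sketch below tags each piece of personal data with the purposes disclosed at collection time and refuses processing for any undisclosed purpose. All names are invented for illustration and do not describe any existing system.

    # Illustrative sketch of the purpose limitation principle (hypothetical names).
    class PersonalData:
        def __init__(self, value, disclosed_purposes):
            self.value = value
            # Purposes disclosed to the user at the time of data collection.
            self.disclosed_purposes = frozenset(disclosed_purposes)

    def process(data, purpose):
        """Refuse any processing whose purpose was not disclosed at collection time."""
        if purpose not in data.disclosed_purposes:
            raise PermissionError(f"purpose '{purpose}' was not disclosed at collection")
        return data.value

    steps = PersonalData(4200, disclosed_purposes=["fitness_feedback"])
    process(steps, "fitness_feedback")   # permitted
    # process(steps, "ad_targeting")     # would raise PermissionError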
6.4 Foreseeable Side Effects Disclosure
It is almost inevitable for persuasive technologies to
have consequences other than those primarily
intended by design. The birthday reminder
applications discussed in Section 4 may not only
contribute to an increased remembrance of birthdays
with the help of technology (intended) but also to the
increased forgetfulness of birthdays without the help
of technology (unintended side effect). It has been
argued previously that designers should be held
ethically responsible for the predictably unintended
consequences of technology (Berdichevsky &
Neuenschwander, 1999). In an extension of this
suggestion, it is argued here that if any such
foreseeable or predictable side effects are known to
designers or businesses, they need to be disclosed
during the process of persuasive technology adoption.
Foreseeable consequences are not just relevant to the
concept of ethical responsibility but also to the
autonomous decision of technology adoption by its
users. To enhance the autonomy of users, designers
and businesses need to make these trade-offs
explicitly known.
6.5 Freedom to Opt-out
In Section 5, the integration of persuasive
technologies with significant social and economic
systems was discussed. This integration has already
materialized in the case of services like social media,
video streaming and e-commerce, where surveillance
for advertisements and recommendations is the only
form of economic exchange available to users. This
form of surveillance is coercive in a society in which
opting out of these systems carries a significant social
cost. It is argued here that no social and economic
systems should be surveillance-based by design. Each
technology, product or socio-economic system
should provide users with the freedom to opt-out of
surveillance-based persuasion at a fair cost.
6.6 Democratic Coercion
It was discussed in Section 3 that, in the rarest of circumstances, even surveillance-based coercion may be justified in the interest of the public good. However,
any coercive technologies, especially those that are
punitive by design, need to be defended through the
democratic process (Verbeek, 2009). Therefore,
contrary to popular market opinion, economic systems like fitness-information-based health insurance premiums should not be allowed to operate
without significant public and policy discourse. They
punish users who do not wish for their privacy to be intruded upon by charging them higher insurance costs, severely infringing upon their freedom without any reasonable justification in terms of the public good.
7 CONCLUSIONS
This paper discussed the ethical concerns about
human autonomy and freedom in the context of
surveillance-based persuasive technologies.
Autonomy and freedom are highly valued ideals in
modern societies, making it necessary to investigate
the implications of rapidly evolving persuasive
technologies on human autonomy and freedom.
Persuasive technologies have the potential to enhance
human autonomy by providing users access to new
information and by helping them overcome biases in
their own judgment. With the voluntary use of
persuasive technologies such as fitness bands or
technologies designed to help quit smoking, users can
work towards their own authentic goals. On the other
hand, the paper also discussed how persuasive
technologies can undermine users’ autonomy through
practices like covert surveillance, not providing
relevant information about value trade-offs and
omitting knowledge about their vested interests in the
persuasive interaction. Moreover, when surveillance-
based persuasion is integrated into socio-economic
systems, it can become punitive and coercive,
severely infringing upon people’s freedom. Coercive
surveillance values compliance over freedom, and it
undermines human dignity.
In light of these concerns, this paper has argued
for designers and businesses to employ an ethical
approach to persuasion design. The arguments
provided in this paper can help design technologies in
a manner more conducive to autonomous decision
making and freedom of choice for the users. The
proposed ethical arguments include the principle of
minimal surveillance and an explicit creation of
awareness mechanisms such that the users have real-
time awareness of being under surveillance. Other
arguments include the explicit disclosure of the
intentions by the persuasive technology as well as the
disclosure of its side effects or foreseeable unintended
consequences. For technologies bordering on the
coercive, the paper suggests that digital products
always provide users with the freedom to opt-out of
persuasion, and that a democratic process is used for
coercive technologies being integrated into
significant socio-economic systems.
The aim of this paper was to highlight that surveillance-based persuasive technologies can be used either to enhance human autonomy and freedom or to reduce them. The long-term social consequences of
these technologies will significantly depend upon
how they are integrated into socio-economic systems
and how policymakers design technology policies
with explicit consideration for these factors. There is
a potential for misuse of these technologies, which
can be used by private companies as well as
governments to create power imbalances and to elicit
compliance from their users, clients, employees or
citizens. Therefore, there is a need for designers to
take an ethical approach to technology design, as well
as for policymakers to incorporate these insights into
emerging policies in the domain.
REFERENCES
Alshammari, M., & Simpson, A. (2017). Towards a
Principled Approach for Engineering Privacy by
Design. Privacy Technologies and Policy: 5th Annual
Privacy Forum. https://doi.org/10.1007/978-3-319-
67280-9_9
Bascur, A., Rossel, P., Herskovic, V., & Martínez-
Carrasco, C. (2018). Evitapp: Persuasive Application
for Physical Activity and Smoking Cessation.
Proceedings of the 12th International Conference on
Ubiquitous Computing and Ambient Intelligence.
https://doi.org/10.3390/proceedings2191208
Berdichevsky, D., & Neuenschwander, E. (1999).
Toward an ethics of persuasive technology.
Communications of the ACM, 42(5), 51–58.
https://doi.org/10.1145/301353.301410
Bernal, P. (2016). Data gathering, surveillance and human
rights: recasting the debate. Journal of Cyber Policy,
1(2), 243–264. https://doi.org/10.1080/23738871.
2016.1228990
Burkell, J., & Regan, P. M. (2019). Voter preferences, voter
manipulation, voter analytics: policy options for less
surveillance and more autonomy. Internet Policy
Review, 8(4), 1-24. https://doi.org/10.14763/2019.4.
1438
Buss, S., & Westlund, A. (2018). Personal Autonomy. The
Stanford Encyclopedia of Philosophy. Edward N.
Zalta (ed.), URL = <https://plato.stanford.edu/archives/
spr2018/entries/personal-autonomy/>.
Desclaux, A., Malan, M. S., Egrot, M., Sow, K., for EBSEN
Study Group, & Akindès, F., for EBO-CI Study Group.
(2019). Surveillance in the field: Over-identification of
Ebola suspect cases and its contributing factors in West
African at-risk contexts. Global Public Health, 14(5),
709–721. https://doi.org/10.1080/17441692.2018.153
4255
Fogg, B. J. (2003). Persuasive technology: Using
computers to change what we think and do. Morgan
Kaufmann Publishers, Amsterdam.
Forgó, N., Hänold, S., & Schütze, B. (2017). The Principle
of Purpose Limitation and Big Data. In New
Technology, Big Data and the Law (pp. 17–42).
https://doi.org/10.1007/978-981-10-5038-1_2
Friedrich, O., Racine, E., Steinert, S., Pömsl, J., &
Jox, R. J. (2018). An Analysis of the Impact of Brain-
Computer Interfaces on Autonomy. Neuroethics.
https://doi.org/10.1007/s12152-018-9364-9
Hadjimatheou, K. (2017). Surveillance Technologies,
Wrongful Criminalisation, and the Presumption of
Innocence. Philosophy & Technology, 30(1), 39–54.
https://doi.org/10.1007/s13347-016-0218-2
Jacobs, M. (2019). Two ethical concerns about the use of persuasive technology for vulnerable people. Bioethics, 34(5), 519–526. https://doi.org/10.1111/bioe.12683
Lemieux, F. (2018). Intelligence and State Surveillance in
Modern Societies: An International Perspective.
Emerald Group Publishing, Bingley, UK.
Levy, N. (2007). Neuroethics: Challenges for the 21st
Century. Cambridge University Press, Cambridge, UK.
Lupton, D. (2012). M-health and health promotion: The
digital cyborg and surveillance society. Social Theory
& Health, 10(3), 229-244. https://doi.org/10.1057/sth.
2012.6
Lyon, D. (2001). Surveillance Society: Monitoring
Everyday Life. McGraw-Hill Education, UK.
Maclean, A. (2006). Autonomy, Consent and Persuasion.
European Journal of Health Law, 13(4), 321-338.
https://doi.org/10.1163/157180906779160274
Nagenborg, M. (2014). Surveillance and persuasion. Ethics
and Information Technology, 16(1), 43–49.
https://doi.org/10.1007/s10676-014-9339-4
Obar, J. A., & Oeldorf-Hirsch, A. (2020). The biggest lie
on the Internet: ignoring the privacy policies and terms
of service policies of social networking services.
Information, Communication & Society, 23(1), 128–
147. https://doi.org/10.1080/1369118x.2018.1486870
Orji, R., & Moffatt, K. (2018). Persuasive technology for
health and wellness: State-of-the-art and emerging
trends. Health Informatics Journal, 24(1), 66–91.
https://doi.org/10.1177/1460458216650979
Rughinis, C., Rughinis, R., & Matei, S. (2015). A touching app voice thinking about ethics of persuasive technology through an analysis of mobile smoking-cessation apps. Ethics and Information Technology, 17, 295–309. https://doi.org/10.1007/s10676-016-9385-1
Spahn, A. (2012). And Lead Us (Not) into Persuasion...?
Persuasive Technology and the Ethics of
Communication. Science and Engineering Ethics, 18,
633-650. https://doi.org/10.1007/s11948-011-9278-y
Strauss, D. A. (2018). Persuasion, Autonomy, and Freedom
of Expression. In Freedom of Speech (pp. 37–74).
https://doi.org/10.4324/9781315181981-3
Susser, D., Roessler, B., & Nissenbaum, H. (2019).
Technology, autonomy, and manipulation. Internet
Policy Review, 8(2), 1-22. https://doi.org/10.14763/
2019.2.1410
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving
Decisions About Health, Wealth and Happiness. Yale
University Press, New Haven and London.
Timmer, J., Kool, L., & van Est, R. (2015). Ethical
Challenges in Emerging Applications of Persuasive
Technology. In: MacTavish, T., & Basapur, S. (eds)
Persuasive Technology. PERSUASIVE 2015. Lecture
Notes in Computer Science, vol 9072. Springer, Cham.
https://doi.org/10.1007/978-3-319-20306-5_18
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Verbeek, P.-P. (2009). Ambient Intelligence and Persuasive
Technology: The Blurring Boundaries Between Human
and Technology. Nanoethics, 3(3), 231–242.
https://doi.org/10.1007/s11569-009-0077-8
Zuboff, S. (2019). The Age of Surveillance Capitalism: The
Fight for a Human Future at the New Frontier of
Power. Profile Books, London, UK.